Log files in `/var/log/postgres` are only for your information (in a standard installation) and do not serve the system itself, especially not "ACID compliance". They need to be writable once configured; that's all the server needs.
There are a number of settings in your `postgresql.conf` that govern what is logged. Most of them can also be set on the command line to override the setting in the config file. We are not talking about WAL files, which are not usually placed in `/var/log/postgres`.
I would drastically reduce the verbosity in your case. Among others, I would set (and reload the server):

```
log_statement = none
```
Depending on how you import the data, you probably need to do more than that. Look at `log_min_messages`.
To skip writing log files altogether you can set:

```
log_destination = 'stderr'
...
```
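Taken together, a minimal low-verbosity `postgresql.conf` fragment might look like the following sketch (the setting names are real; the specific values are just one reasonable choice, not something from the question itself):

```
log_statement = 'none'              # do not log any SQL statements
log_min_messages = warning          # only warnings and above reach the log
log_min_error_statement = error     # log the offending statement only on errors
log_min_duration_statement = -1     # disable duration-based statement logging
```

All of these take effect on a reload, without a restart: `pg_ctl reload` from the shell, or `SELECT pg_reload_conf();` from a superuser session.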
More in the manual.
Yes, the "switch or option" that would allow you to do this is the one your host doesn't want to change. If there were a way for a user to simply work around resource limits, there wouldn't be much point in resource limits.
It's not obvious from the documentation, but the source code makes it fairly clear that this is a limit on the number of queries you can run, per hour, that change almost anything at all about the server. The use of the word "update" is unfortunate, because this resource limit impacts any query that does something like...
```
INSERT UPDATE DELETE TRUNCATE LOAD OPTIMIZE ANALYZE REPAIR GRANT REVOKE
```
... as well as any valid pairing of these:
```
[ CREATE | DROP | ALTER ] [ TABLE | TRIGGER | VIEW | PROCEDURE | FUNCTION | EVENT ]
```
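For context, this per-account limit is typically configured by the host with something like the following (the user name and value here are illustrative, and this requires administrative privileges a shared-hosting customer won't have):

```sql
ALTER USER 'someuser'@'%' WITH MAX_UPDATES_PER_HOUR 36000;
```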
If you take a look inside the dump file, it should become readily apparent that you're doing quite a lot of all those things.
But aside from the dump, for a minute: what sort of tiny little service will you be able to host on a server that only lets you execute 36,000 per hour ... 600 per minute ... 10 per second queries that `INSERT`/`UPDATE`/`DELETE`, max, averaged over an hour? That's tiny... so you have either purchased something really small and inexpensive... or your host has already oversubscribed the server and is trying to mitigate that problem... or your data has already caused more load than the host anticipated, so you earned a special extra "feature."
The most plausible workaround, to me, seems like it will be to pace your restore by imposing a small delay between each query that gets issued to the server, to stay within your time limit.
```
cat FILENAME | perl -MTime::HiRes -pe '$| = 1; Time::HiRes::usleep(100000)' | mysql -u USERNAME -pPASSWORD --default-character-set=utf8 DATABASE_NAME
```

(Note that there is no space after `-p`; with a space, `mysql` prompts for the password and treats the next word as the database name.)
You can see this in action without actually applying it to the server by just trying this:
```
cat FILENAME | perl -MTime::HiRes -pe '$| = 1; Time::HiRes::usleep(100000)'
```
You should see the dump file coming at you, one line at a time; use Control-C to break out. The value 100000 is 100,000 microseconds, or 1/10th of one second, which matches your limit of 10 queries per second. The server won't be executing the queries instantaneously anyway, since it needs some number of milliseconds to process each one, so setting the value at 100000 will probably work. Also, there is some white space in `mysqldump` files, and statements like `CREATE TABLE` will be broken out over multiple lines even though they only count as a single query, which will stretch out the time a bit as well.
Remember, though, that resource limits are per-user, so you can't be doing anything else on the server at the same time with the same account, or the two sessions, together, will combine to push you toward that limit.
If deliberately slowing the pace of your import seems counterintuitive, that's because it is. Most of us try to get our imports to load faster, not slower, so you'll need to consider also whether this host's limits are going to work for you or whether you need a different plan.
You can get an idea of how many SQL statements are in the dump file with this:

```
grep -c . FILENAME
```
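To turn that line count into a rough load-time estimate, here is a small shell sketch. It builds a tiny stand-in dump file, since the real file name is site-specific:

```shell
# build a tiny stand-in "dump": 4 lines, one of them blank
printf 'CREATE TABLE t (id INT);\n\nINSERT INTO t VALUES (1);\nINSERT INTO t VALUES (2);\n' > sample.sql

# count non-empty lines (approximates the statement count)
lines=$(grep -c . sample.sql)

# divide by the 36,000 updates/hour limit to estimate hours
awk -v n="$lines" 'BEGIN { printf "%d statements, about %.5f hours\n", n, n / 36000 }'
```

The same arithmetic on the 475,309-line dump mentioned below gives roughly 13.2 hours.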
This will return a count of non-empty lines, which will be close to the number of statements... slightly higher, again, because of how `mysqldump` files are formatted. This number divided by 36,000 is approximately the fastest possible loading time, in hours, that you could achieve on this file, with this host, with these restrictions in place. For reference, I ran this against a randomly-selected `mysqldump` file of 72GB, and the count is 475,309... so we could estimate a restore time of maybe 13 hours for that particular dump file under these conditions. This number will, of course, vary significantly, so check it against your data source and see what you find.
Best Answer
It really depends on which constraints are violated; you neglected to disclose that in your question.
To disable triggers (including foreign key constraints):
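One common session-level approach is the following sketch (it assumes a sufficiently privileged role, and disables trigger firing, and thereby foreign key enforcement, for the current session only):

```sql
SET session_replication_role = replica;
```

A per-table alternative is `ALTER TABLE tbl DISABLE TRIGGER ALL;`, where `tbl` stands in for your table name.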
Be sure to revert it afterwards:
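If triggers were disabled by setting `session_replication_role` for the session, the revert would be:

```sql
SET session_replication_role = DEFAULT;
```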
More in the manual.
However, this does not disable `CHECK`, `NOT NULL`, `UNIQUE`, `PRIMARY KEY`, or `EXCLUSION` constraints, which are implemented independently. You could drop and recreate those, but this is rarely useful; constraints are in place for a reason.
Alternative
An alternative would be to import your CSV file into a temporary copy of the target table and repair any data that would violate constraints before inserting from there. Whether that fits depends on the size of the file and the available RAM; for huge files or longer processing, you could make it a regular table instead.
For instance, to remove duplicates from imported data as well as between import and existing data:
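A sketch of that step, with hypothetical names throughout (`tbl` is the target table, `tmp_tbl` its temporary copy, `col1, col2` the imported columns, and the CSV path is a placeholder):

```sql
-- stage the CSV in a temporary clone of the target table's structure
CREATE TEMP TABLE tmp_tbl AS SELECT * FROM tbl LIMIT 0;
COPY tmp_tbl FROM '/absolute/path/file.csv' (FORMAT csv);

-- insert only rows that are distinct within the import
-- and not already present in the target table
INSERT INTO tbl (col1, col2)
SELECT DISTINCT col1, col2
FROM   tmp_tbl t
WHERE  NOT EXISTS (
   SELECT 1 FROM tbl
   WHERE  tbl.col1 = t.col1
   AND    tbl.col2 = t.col2
   );
```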