MySQL – exceeding 'max_updates' resource when importing a large database

import mysql

I need to import a large database on my shared server. I'm using the following statement in a shell to import my database:

mysql -u USERNAME -pPASSWORD --default-character-set=utf8 DATABASE_NAME < NAME_OF_FILE

Unfortunately, I receive the following error:

User 'USERNAME' has exceeded the 'max_updates' resource (current value: 36000).

From my understanding, max_updates is the number of update statements an account can issue per hour.

My host doesn't want to temporarily increase this value.

Can someone recommend a better strategy, perhaps importing the database in smaller chunks, to avoid triggering this limit? Perhaps there is a switch or option in mysql that would allow me to do this.

Best Answer

Yes, the "switch or option" that would allow you to do this is the one your host doesn't want to change. If there were a way for a user to simply work around resource limits, there wouldn't be much point in resource limits.

It's not obvious from the documentation, but the source code makes it fairly clear that this is a limit on the number of queries you can run, per hour, that change almost anything at all about the server. The use of the word "update" is unfortunate, because this resource limit impacts any query that does something like...

INSERT, UPDATE, DELETE, TRUNCATE, LOAD, OPTIMIZE, ANALYZE, REPAIR, GRANT, REVOKE

... as well as any valid pairing of these:

[ CREATE | DROP | ALTER ] [ TABLE | TRIGGER | VIEW | PROCEDURE | FUNCTION | EVENT ]

If you take a look inside the dump file, it should become readily apparent that you're doing quite a lot of all those things.
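To see this for yourself, you can pull the counting statements out of the top of the dump. A sketch, assuming FILENAME is your dump file and that statements start at the beginning of a line, as mysqldump writes them:

```shell
# show the first few statements in the dump that would count
# against max_updates (CREATE, DROP, ALTER, INSERT, ...)
grep -E '^(CREATE|DROP|ALTER|INSERT)' FILENAME | head -n 20
```

Comments, SET statements, and blank lines won't match, which is the point: the lines that do match are the ones eating your quota.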

But setting the dump aside for a minute: what sort of tiny service will you be able to host on a server that only lets you execute 36,000 INSERT/UPDATE/DELETE queries per hour, which works out to 600 per minute, or 10 per second, averaged over the hour? That's tiny... so either you have purchased something really small and inexpensive, or your host has already oversubscribed the server and is trying to mitigate that problem, or your data has already caused more load than the host anticipated, so you earned a special extra "feature."
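The per-minute and per-second figures are just integer division of the hourly quota; a quick shell sanity check (nothing here touches the server):

```shell
# break down the 36,000 updates-per-hour quota
echo $(( 36000 / 60 ))    # updates per minute
echo $(( 36000 / 3600 ))  # updates per second
```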

The most plausible workaround, to me, seems to be pacing your restore by imposing a small delay between each query issued to the server, so that you stay under the hourly limit.

cat FILENAME | perl -MTime::HiRes -pe '$| = 1; Time::HiRes::usleep(100000)' | mysql -u USERNAME -pPASSWORD --default-character-set=utf8 DATABASE_NAME

You can see this in action without actually applying it to the server by just trying this:

cat FILENAME | perl -MTime::HiRes -pe '$| = 1; Time::HiRes::usleep(100000)'

You should see the dump file coming at you, one line at a time; use Control-C to break out. The value 100000 is 100,000 microseconds, or 1/10 of one second, which matches your limit of roughly 10 queries per second. The server won't execute the queries instantaneously, since each one takes some number of milliseconds to process, so setting the value at 100000 will probably keep you under the limit. Also, there is some whitespace in mysqldump files, and statements like CREATE TABLE will be broken out onto multiple lines even though they only count as a single query, which stretches out the time a bit as well.
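If Perl isn't available on your shared host, a plain shell loop can impose the same pacing. This is a sketch, assuming your sleep accepts fractional seconds (GNU coreutils and most modern BSDs do; a bare POSIX sleep may not):

```shell
# feed the dump one line at a time, pausing 1/10 second between lines,
# then pipe the paced output into mysql just like the perl version
while IFS= read -r line; do
  printf '%s\n' "$line"
  sleep 0.1   # assumption: sleep supports fractional seconds
done < FILENAME | mysql -u USERNAME -pPASSWORD --default-character-set=utf8 DATABASE_NAME
```

The read loop is noticeably slower than the perl one-liner even before the sleep, which in this unusual situation only works in your favor.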

Remember, though, that resource limits are per-user, so you can't be doing anything else on the server at the same time with the same account, or the two sessions, together, will combine to push you toward that limit.

If deliberately slowing the pace of your import seems counterintuitive, that's because it is. Most of us try to get our imports to load faster, not slower, so you'll need to consider also whether this host's limits are going to work for you or whether you need a different plan.

You can get an idea of how many SQL statements are in the dump file with this:

grep -c . FILENAME

This will return a count of non-empty lines, which will be close to the number of statements... slightly higher, again, because of how mysqldump files are formatted. This number divided by 36,000 is approximately the fastest possible loading time, in hours, that you could achieve on this file, with this host, with these restrictions in place. For reference, I ran this against a randomly-selected mysqldump file of 72GB, and the count is 475,309... so we could estimate a restore time of maybe 13 hours for that particular dump file under these conditions. This number will, of course, vary significantly, so check it against your data source and see what you find.
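The estimate can be scripted in one line; a sketch, assuming FILENAME is your dump file and 36000 is your max_updates value:

```shell
# estimated minimum restore time: non-empty lines / updates-per-hour
grep -c . FILENAME | awk '{ printf "%.1f hours minimum\n", $1 / 36000 }'
```

For the 72GB example above, 475,309 / 36,000 comes out to about 13.2 hours.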