Assuming you can't further increase the memory size, there are a couple of options.
If you aren't familiar with the command line and you really want to stick with a GUI style import then you can use BigDump (http://www.ozerov.de/bigdump/). I used it once but it's been a while. From what I remember you will download a file named bigdump.php (with some instructions) and put it on your webserver in the directory with the MySQL DB dump file that is too large to import via PHPMYADMIN. Then navigate to it using your browser - something like http://your-website.com/bigdump.php.
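If you also have shell access, getting both files up there could be as simple as this (the web-root path is just a guess - use whatever your host actually gives you):
scp bigdump.php mydump.sql grant@your-website.com:/var/www/html/
From what I remember you also edit the database credentials at the top of bigdump.php before running it.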
If you are familiar with the command line and using a Linux-based system then you can use a command like this:
mysql -u USERNAME -p DATABASENAME < FILENAME.sql
The database and user (with privileges to the database) will need to exist prior to running this command though. Note I copied the above command from another source.
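If you need to create them first, here is a minimal sketch (run as a MySQL admin user; the names and password are placeholders, and on MySQL 8+ you would run a separate CREATE USER before the GRANT):
mysql -u root -p -e "CREATE DATABASE DATABASENAME;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON DATABASENAME.* TO 'USERNAME'@'localhost' IDENTIFIED BY 'PASSWORD';"
I always do my dumps and restores like this: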
DUMP: mysqldump -u DB_USER -h DB_HOST -pDB_PASSWORD DB_NAME | gzip -9 > DB_NAME.sql.gz
RESTORE: gunzip < PATH_TO_DUMP/DB_NAME.sql.gz | mysql -u DB_USER -pDB_PASSWORD DB_NAME
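One optional tweak, if your tables are InnoDB: adding --single-transaction to the DUMP gets you a consistent snapshot without locking the tables while it runs:
mysqldump --single-transaction -u DB_USER -h DB_HOST -pDB_PASSWORD DB_NAME | gzip -9 > DB_NAME.sql.gz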
- Finally, and most painfully, you could choose to dump individual tables or groups of tables into small enough dumps. Then restore these individual dumps one at a time via PHPMYADMIN.
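If you go the table-by-table route, a small shell loop saves you from typing out each dump by hand. This is just a sketch (it assumes a *nix shell and the same DB_USER/DB_PASSWORD placeholders as above):
# dump every table in DB_NAME to its own gzipped file
for TABLE in $(mysql -u DB_USER -pDB_PASSWORD -N -e 'SHOW TABLES;' DB_NAME); do
  mysqldump -u DB_USER -pDB_PASSWORD DB_NAME "$TABLE" | gzip -9 > "$TABLE.sql.gz"
done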
FTP & RESTORE instructions:
Download FileZilla here: http://filezilla-project.org/download.php?type=client
Typically you can FTP into your server (especially a shared host) using the same login and password you would for SSH (your command-line login). Just make sure to set up the connection to your server properly in FileZilla.
Once connected, your default path will likely be your $HOME folder (on *nix machines). If you aren't planning on doing a lot of FTPing, just dump what you want in there. You can always move it from the command line.
If you FTP'd a MySQL Dump into your $HOME folder then the RESTORE command for a DB named mydb and a db dump file named mydb.sql.gz (in the $HOME folder) would be:
gunzip < $HOME/mydb.sql.gz | mysql -u DB_USER -pDB_PASSWORD mydb
where you would obviously need to change the DB_USER and DB_PASSWORD values to the user and password you set up for the DB.
Remember:
- The DB must already exist (but be empty) before the RESTORE command is executed
- The DB_USER and DB_PASSWORD must be setup for the DB before the RESTORE command (you can do this from PHPMYADMIN - Privileges tab)
- When you set up DB_USER from PHPMYADMIN it will default to only allowing DB_USER to access the DB from localhost (the server the DB is hosted on). I'm assuming this is fine for your situation, grant. When you get into more advanced DUMPs and RESTOREs you may want to do it from another machine entirely, which isn't hard but requires a couple more pieces of information and a DB_USER with the proper privileges.
FTP the DB and RESTORE it... Step by Step
1. Locate the zipped MySQL dump file on your desktop (Example: C:\Web\DB\Dumps\mydb.sql.gz)
2. Download and install FileZilla
3. In FileZilla go to File->Site Manager and click on the New Site button (lower left quadrant of GUI). Set the host to a domain (Example: dest.com) that resides on your DESTINATION server. Set Protocol to FTP or SFTP (this depends on your hosting, ask them). Set the User and Password (Example: User = grant, Password = grant's password) - if the protocol is SFTP then any set of valid SSH login credentials should work.
4. (In FileZilla) Click Connect (bottom left-most button)
5. (In FileZilla) Once connected to the remote server (should be your DESTINATION server) use the left side of FileZilla to navigate to the directory where the dump file you located in Step 1 resides. (The path for the local machine is on the far left, about half way down, and it is labeled Local site:. In our example the textbox next to Local site: would read C:\Web\DB\Dumps.)
6. (In FileZilla) Locate the Remote site: label directly to the right of Local site:.
7. (In FileZilla) There is no technical reason you can't put the DB DUMP in the folder that FileZilla opens up to. It will likely default to your $HOME directory. If you don't have any reason to put the DB Dump in a specific directory - such as for your own organizational purposes - then leave the path on the Remote site alone. Write down that path (Linux Example: /usr/grant).
8. (In FileZilla) Locate the file on your local machine on the left half of FileZilla, just under the tree shown beneath the Local site: label you found in Step 5. Double click on that file and FileZilla will FTP it to your Remote site.
9. Within PHPMYADMIN on your DESTINATION server, create a new database (type mydb into the Create new database textbox and click the Create button).
10. (In PHPMYADMIN) Choose mydb from the menu on the left.
11. (In PHPMYADMIN) Click on the Privileges tab (upper right). Once on that page click on Add New User.
12. (In PHPMYADMIN->Privileges->Add New User) Provide a username (Example: mydb_grant) and password (Example: mydb_password). In the Host dropdown choose Local. Make sure that the radio button next to "Grant all privileges on database" is selected. Then click Go (bottom right of screen).
13. SSH into your DESTINATION server.
14. (In your SSH terminal) Navigate to the directory you wrote down in Step 7, using a command like
cd $HOME
or
cd /usr/grant
(On Linux machines $HOME is an environment variable that will equal something like /usr/grant.)
15. (In your SSH terminal) Execute the following command:
gunzip < /usr/grant/mydb.sql.gz | mysql -u mydb_grant -pmydb_password mydb
(There is no space between -p and the password; that is the way it needs to be, not a typo.)
Once that last command completes your DB has been RESTORED. I don't recommend using the DB user and password I provided; simply replace them with more secure choices of your own.
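To sanity-check the RESTORE, you can list the tables that made it over (same example credentials as above):
mysql -u mydb_grant -pmydb_password -e 'SHOW TABLES;' mydb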
DUMP and RESTORE on different servers without FTP
If you wish to skip the FTP stuff and don't mind redoing the DUMP then you can do it this way:
- Login into the DESTINATION server (where you want the DB to reside - we'll call it destination.com)
- Create the DB, User, and proper privileges as mentioned above on the DESTINATION server
- Make sure the privileges on the SOURCE server allow SOURCE_DB_USER to access the DB remotely - either allow access from dest.com or from % (% = everyone and everywhere, if they have the right username and password); there is a GRANT sketch after these steps
- On the DESTINATION server run:
mysqldump -u SOURCE_DB_USER -h source.com -pSOURCE_DB_PASSWORD mydb | gzip -9 > $HOME/mydb.sql.gz
where source.com is the SOURCE server (where the DB currently resides)
- Then (on the DESTINATION server) run:
gunzip < $HOME/mydb.sql.gz | mysql -u DEST_DB_USER -pDEST_DB_PASSWORD mydb
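For the privileges step above, the remote-access grant might look like this in SQL, run on the SOURCE server (this is the pre-MySQL-8 syntax; the user, password, and host are placeholders):
mysql -u root -p -e "GRANT ALL PRIVILEGES ON mydb.* TO 'SOURCE_DB_USER'@'dest.com' IDENTIFIED BY 'SOURCE_DB_PASSWORD'; FLUSH PRIVILEGES;"
And if you don't care about keeping a copy of the dump, you can skip the intermediate file entirely and pipe one server straight into the other:
mysqldump -u SOURCE_DB_USER -h source.com -pSOURCE_DB_PASSWORD mydb | mysql -u DEST_DB_USER -pDEST_DB_PASSWORD mydb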
That should be all you need to do to move the DB from one server to another. Caveats are that I've only ever done this on my own dedicated Ubuntu (with standard LAMP) and Dreamhost's VPS servers (which I believe run CentOS but I could be wrong).
Yes, the "switch or option" that would allow you to do this is the one your host doesn't want to change. If there were a way for a user to simply work around resource limits, there wouldn't be much point in resource limits.
It's not obvious from the documentation, but the source code makes it fairly clear that this is a limit on the number of queries you can run, per hour, that change almost anything at all about the server. The use of the word "update" is unfortunate, because this resource limit impacts any query that does something like...
INSERT UPDATE DELETE TRUNCATE LOAD OPTIMIZE ANALYZE REPAIR GRANT REVOKE
... as well as any valid pairing of these:
[ CREATE | DROP | ALTER ] [ TABLE | TRIGGER | VIEW | PROCEDURE | FUNCTION | EVENT ]
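If your account can read the mysql schema (many shared hosts won't allow this), you can check what limits are set on your user. A sketch, assuming the MySQL 5.x column names:
mysql -u USERNAME -pPASSWORD -e "SELECT User, Host, max_questions, max_updates FROM mysql.user WHERE User = 'USERNAME';"
A value of 0 means no limit.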
If you take a look inside the dump file, it should become readily apparent that you're doing quite a lot of all those things.
But aside from the dump, for a minute: what sort of tiny little service will you be able to host on a server that only lets you execute 36,000/hour ... 600/minute ... 10/sec queries that INSERT/UPDATE/DELETE, max, averaged over an hour? That's tiny... so you have either purchased something really small and inexpensive... or your host has already oversubscribed the server and is trying to mitigate that problem... or your data has already caused more loading than the host anticipated, so you earned a special extra "feature."
The most plausible workaround, to me, seems like it will be to pace your restore by imposing a small delay between each query that gets issued to the server, to stay within your time limit.
cat FILENAME | perl -MTime::HiRes -pe '$| = 1; Time::HiRes::usleep(100000)' | mysql -u USERNAME -pPASSWORD --default-character-set=utf8 DATABASE_NAME
You can see this in action without actually applying it to the server by just trying this:
cat FILENAME | perl -MTime::HiRes -pe '$| = 1; Time::HiRes::usleep(100000)'
You should see the dump file coming at you, one line at a time; use Control-C to break out. The value 100000 is 100000 microseconds, or 1/10th of one second, which is approximately your limit. The server won't be executing the queries instantaneously, as it will require some number of milliseconds to process each one, so setting the value at 100000 will probably work. Also, there is some white space in mysqldump files, and statements like CREATE TABLE will be broken out on multiple lines, even though they'll only count as a single query, which will stretch out the time a bit, also.
Remember, though, that resource limits are per-user, so you can't be doing anything else on the server at the same time with the same account, or the two sessions, together, will combine to push you toward that limit.
If deliberately slowing the pace of your import seems counterintuitive, that's because it is. Most of us try to get our imports to load faster, not slower, so you'll need to consider also whether this host's limits are going to work for you or whether you need a different plan.
You can get an idea of how many SQL statements are in the dump file with this:
grep -c . FILENAME
This will return a count of lines, which will be close to the number of statements... slightly higher, again, because of how mysqldump files are formatted. This number divided by 36000 is approximately the fastest possible loading time, in hours, that you could achieve on this file, with this host, with these restrictions in place. For reference, I ran this against a randomly-selected mysqldump file of 72GB, and the count is 475,309... so we could estimate a restore time of maybe 13 hours for that particular dump file under these conditions. This number will, of course, vary significantly, so check it against your data source and see what you find.
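If you'd rather have the shell do that arithmetic for you (this assumes bc is installed, which it almost always is):
echo "scale=1; $(grep -c . FILENAME) / 36000" | bc
That prints the rough minimum restore time in hours for your own dump file.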
Best Answer
You will need to increase the open files limit via ulimit, and MySQL's corresponding open_files_limit.
More detailed info here: https://duntuk.com/how-raise-ulimit-open-files-and-mysql-openfileslimit
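As a rough sketch of what's involved (the values here are examples only; see the link above and your distro's docs for the details):
# check what you have now
ulimit -n
mysql -u root -p -e "SHOW VARIABLES LIKE 'open_files_limit';"

# /etc/security/limits.conf - raise the OS-level limit for the mysql user
mysql soft nofile 24000
mysql hard nofile 32000

# my.cnf, under [mysqld] - then restart MySQL so it takes effect
open_files_limit = 24000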