I have a MySQL database with some large tables. In particular, the largest table (by far) currently has ~11,000,000 rows. According to a query I ran, the table clocks in at around 1.7GB, but it's MyISAM and that number is apparently unreliable (it's a big table in any case). The database is backed up daily using mysqldump.
Currently, restoring the database from this dump file takes ~10 hours, and it's only going to get worse. I'm using the --skip-extended-insert option because restoration fails with a single multi-valued insert statement for all rows in the table. My understanding is that max_allowed_packet can be raised up to 1GB, but even that isn't enough.
I thought that this would be a common enough problem and might have a generally accepted solution but I'm having trouble finding anything. So, my questions are:
- Does mysqldump have an option that allows multi-valued inserts to be batched for each table, e.g. instead of creating a single insert, create multiple inserts of n rows each?
- If not, is there some other utility/alternative to accomplish this, or do I have to roll my own backup script that queries for table rows in batches of n using a LIMIT clause and builds the insert statements manually?
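For what it's worth, the batching half of a rolled-your-own script is small. Here is a rough, hypothetical sketch (the table name, column names, and `batch_size` are made up for illustration; fetching the rows, e.g. via a LIMIT/OFFSET loop or a primary-key range scan, is left out) that chunks rows into multi-valued INSERT statements of at most n rows each:

```python
# Hypothetical sketch: turn an iterable of rows into multi-valued
# INSERT statements of at most `batch_size` rows each. Row fetching
# (LIMIT/OFFSET or a PK range scan) is intentionally left out.

def batched_inserts(table, columns, rows, batch_size):
    """Yield one INSERT statement per batch of at most `batch_size` rows."""
    def quote(value):
        if value is None:
            return "NULL"
        if isinstance(value, (int, float)):
            return str(value)
        # Minimal escaping for the sketch; a real script should use
        # the client library's own escaping instead.
        return "'" + str(value).replace("\\", "\\\\").replace("'", "\\'") + "'"

    batch = []
    for row in rows:
        batch.append("(" + ",".join(quote(v) for v in row) + ")")
        if len(batch) == batch_size:
            yield "INSERT INTO {} ({}) VALUES {};".format(
                table, ",".join(columns), ",".join(batch))
            batch = []
    if batch:  # flush the final, possibly short, batch
        yield "INSERT INTO {} ({}) VALUES {};".format(
            table, ",".join(columns), ",".join(batch))


if __name__ == "__main__":
    rows = [(1, "alice"), (2, "bob"), (3, None)]
    for stmt in batched_inserts("mytable", ["id", "name"], rows, 2):
        print(stmt)
```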
MySQL v5.5.40
Tables are a mixture of MyISAM and InnoDB.
The backup takes place locally.
Best Answer
Your issue may involve the configuration settings of either the OS or MySQL.
HISTORICAL EXAMPLES
Years ago when I worked at a website firm, I used --skip-extended-insert to load a DB Server that had 2GB RAM. That made a whole world of sense to me because the server I dumped it from had 64GB RAM and the client wanted the data on a smaller machine.
At my current place of employment, an outside vendor dumped data from SQL Server and produced a mysqldump-compatible dump file. The problem with that dump file was that each table had a single insert with all its rows. One table had 3.5 million. It could not be loaded into the in-house VM with less than 8GB RAM configured. Shockingly, the solution to that was to scp the file to my laptop, load it into MySQL 5.5.37 on my laptop, mysqldump from it, send the new mysqldump back to the Linux CentOS VM, and load that mysqldump.
BTW, the 3.5 million row table loaded as a single insert into my Windows 7 laptop in 4.5 hours without changing any MySQL settings. To be honest with you, I did not believe that "shot in the dark" would work, because I had not reconfigured MySQL on the laptop to handle the dump. My guess is that it loaded because the undo log in ibdata1 just grew on disk and held everything.
You should not have to live with --skip-extended-insert if you dump and reload on the same machine.

SUGGESTIONS
While I was typing my answer, I saw you add the MySQL version and storage-engine details to the question.
SUGGESTION #1
If that one MyISAM table is 1.7GB with the 11 million rows, my suggestion would be for you to increase the size of the bulk insert buffer. The bulk insert buffer is used only by MyISAM and supports extended inserts. Set bulk_insert_buffer_size to 256M.
Add the setting to my.cnf or my.ini, then log in to MySQL and set it globally as well, so it takes effect without a restart.
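The actual snippets did not survive here; based on the value named above, they were presumably along these lines (the byte arithmetic in the SET GLOBAL is my reconstruction):

```ini
[mysqld]
bulk_insert_buffer_size = 256M
```

```sql
SET GLOBAL bulk_insert_buffer_size = 1024 * 1024 * 256;
```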
SUGGESTION #2
As for your InnoDB tables, you need a bigger log buffer. Set innodb_log_buffer_size to 64M by adding it to my.cnf.
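The config fragment was dropped here; given the value named above, it would be:

```ini
[mysqld]
innodb_log_buffer_size = 64M
```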
You must restart MySQL to use that new value.
SUGGESTION #3
You may also need bigger InnoDB Logs by increasing innodb_log_file_size. This require manual intervention as follows:
STEP 01: Run this command as root
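The command itself was lost here. For this procedure it is normally a statement forcing a full, clean InnoDB shutdown (a reconstruction, not preserved from the original):

```sql
SET GLOBAL innodb_fast_shutdown = 0;
```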
STEP 02: Shut down mysql
Linux: service mysql stop
Windows: net stop mysql
STEP 03: Add this option to my.cnf or my.ini
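The option and its value were lost here; the setting in question is innodb_log_file_size, and the 256M below is an illustrative assumption, not a figure preserved from the original answer:

```ini
[mysqld]
innodb_log_file_size = 256M
```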
STEP 04: In the OS, go to the folder where ib_logfile0 and ib_logfile1 are located and remove both files (the command differs between Linux and Windows).
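The per-OS commands were dropped here; they amount to deleting the two redo log files so mysqld recreates them at the new size, e.g. on Linux:

```shell
# Remove the old InnoDB redo logs so MySQL recreates them at the
# new size on startup. Run this in the datadir, with mysqld stopped.
rm -f ib_logfile0 ib_logfile1
```

On Windows, the equivalent is del ib_logfile0 ib_logfile1 from the same folder.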
STEP 05: Start mysql (the log files get recreated, so startup will take an extra 1-2 minutes)
Linux: service mysql start
Windows: net start mysql
After making one or more of these suggested changes, try dumping and reloading. It should go much more smoothly. Start with just SUGGESTION #1 and try it, then add the other two to see if things improve further.

SUMMARY
You need these options in my.cnf or my.ini on your target MySQL DB Server:
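The combined snippet did not survive; collecting the values from the suggestions above (the innodb_log_file_size figure is my illustrative assumption):

```ini
[mysqld]
bulk_insert_buffer_size = 256M
innodb_log_buffer_size  = 64M
innodb_log_file_size    = 256M
```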
GIVE IT A TRY !!!