MySQL – How to speed up loading data into MySQL from a zipped archive

Tags: innodb, myisam, mysql, mysqldump

I dumped a large database to an archive with:

mysqldump --compress --max_allowed_packet=500M -h myhost mydb | gzip > /tmp/mydb.sql.gz

This took about 10 minutes to run, and generated a file about 400 MB in size.
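As a sanity check, you can peek at the top of the compressed dump without extracting it. With mysqldump's default --opt behavior, you should see extended INSERTs bracketed by DISABLE KEYS / ENABLE KEYS comments:

zcat /tmp/mydb.sql.gz | head -n 40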

I then tried to load it into a database server, configured identically to the original, but running on my localhost with:

gunzip < /tmp/mydb.sql.gz | mysql --quick --reconnect -u myuser -pmypass -h localhost -D mydb

This ran for over 12 hours before I finally killed it. What am I doing wrong that would cause it to dump so quickly but load so slowly?
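One way to tell whether a reload like this is making any progress at all is to pipe the dump through pv (assuming that utility is installed), which shows throughput and percent complete as mysql consumes the file:

pv /tmp/mydb.sql.gz | gunzip | mysql --quick --reconnect -u myuser -pmypass -h localhost -D mydb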

My localhost is running MySQL 5.1 on Ubuntu 11.10.

The database is designed as a data warehouse, using both InnoDB and MyISAM tables, so most columns are indexed, giving fast reads but slow writes. Is MySQL enabling these indexes before all the data is loaded, and then attempting to update them after inserting each row?

Best Answer

Please keep in mind the behavior of a mysqldump reload:

  • It dumps all databases in alphabetical order
  • It dumps all tables in each database in alphabetical order
  • It reloads each table like this:
    • Drops the table
    • Creates the table
    • Loads the table:
      1. Disables keys
      2. Locks the table for writes
      3. Runs multiple extended INSERTs
      4. Unlocks the table
      5. Enables keys
    • While a table's keys are disabled, only PRIMARY KEYs and UNIQUE KEYs are maintained during the load. When keys are re-enabled, all non-unique indexes are built linearly (see the dump excerpt below).
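Each table's section of the dump file looks roughly like this (table name, columns, and data are hypothetical; the /*!40000 ... */ comments are version-gated statements that the server executes on reload):

DROP TABLE IF EXISTS `mytable`;
CREATE TABLE `mytable` (
  `id` int(11) NOT NULL,
  `val` varchar(20) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_val` (`val`)
) ENGINE=MyISAM;
LOCK TABLES `mytable` WRITE;
/*!40000 ALTER TABLE `mytable` DISABLE KEYS */;
INSERT INTO `mytable` VALUES (1,'a'),(2,'b'),(3,'c');
INSERT INTO `mytable` VALUES (4,'d'),(5,'e'),(6,'f');
/*!40000 ALTER TABLE `mytable` ENABLE KEYS */;
UNLOCK TABLES;

Note that ALTER TABLE ... DISABLE KEYS only affects MyISAM's non-unique indexes; InnoDB ignores it and maintains all of its indexes row by row during the INSERTs, which is one reason a heavily indexed warehouse reload can be so slow.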

I would recommend adding this to /etc/my.cnf, setting bulk_insert_buffer_size as follows:

[mysqld]
bulk_insert_buffer_size=256M
key_buffer_size=1G

because the default is only 8M. (Note that bulk_insert_buffer_size speeds up MyISAM bulk inserts only; InnoDB does not use it.) You do not need to restart mysql to change it. Just run this SQL command:

SET GLOBAL bulk_insert_buffer_size = 1024 * 1024 * 256;
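If you also want the 1G key buffer without a restart, key_buffer_size happens to be a dynamic variable as well; you can apply and verify both settings like this:

SET GLOBAL key_buffer_size = 1024 * 1024 * 1024;
SHOW GLOBAL VARIABLES LIKE 'bulk_insert_buffer_size';
SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';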

Give it a Try !!!