I dumped a large database to an archive with:
mysqldump --compress --max_allowed_packet=500M -h myhost mydb | gzip > /tmp/mydb.sql.gz
This took about 10 minutes to run, and generated a file about 400 MB in size.
I then tried to load it into a database server, configured identically to the original, but running on my localhost with:
gunzip < /tmp/mydb.sql.gz | mysql --quick --reconnect -u myuser -pmypass -h localhost -D mydb
This ran for over 12 hours before I finally killed it. What am I doing wrong that would cause it to dump so quickly but load so slowly?
My localhost is running MySQL 5.1 on Ubuntu 11.10.
The database is designed as a data warehouse, using both InnoDB and MyISAM tables, so most columns are indexed, giving fast reads but slow writes. Are these indexes enabled while the data loads, so that MySQL has to update each index after every inserted row?
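One way I could check is to look at what the dump emits around a table's data (assuming GNU grep for the -m option):

zcat /tmp/mydb.sql.gz | grep -m 4 'ALTER TABLE'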
Best Answer
Please keep in mind the behavior of mysqldump: for MyISAM tables, the dump wraps each table's INSERT statements in ALTER TABLE ... DISABLE KEYS / ENABLE KEYS, so the non-unique indexes are rebuilt in one batch after the data is inserted rather than updated row by row. InnoDB ignores DISABLE KEYS, so its indexes are maintained on every insert. Either way, with a warehouse schema this heavily indexed, the load time is dominated by index maintenance, and that is very sensitive to buffer sizes.
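For illustration, the statements mysqldump emits around a MyISAM table's data typically look like this (`mytable` is a placeholder name, and the VALUES lists are elided):

LOCK TABLES `mytable` WRITE;
/*!40000 ALTER TABLE `mytable` DISABLE KEYS */;
INSERT INTO `mytable` VALUES (...),(...),(...);
/*!40000 ALTER TABLE `mytable` ENABLE KEYS */;
UNLOCK TABLES;

The ENABLE KEYS step rebuilds the non-unique indexes in one pass, and that rebuild is where a heavily indexed table spends most of its reload time.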
I would recommend you raise bulk_insert_buffer_size in /etc/my.cnf, because the default is only 8M. MyISAM uses this buffer to speed up bulk inserts into nonempty tables, which is exactly what replaying the dump's multi-row INSERTs does. Add something like this under the [mysqld] section (256M is only an illustrative starting point; size it to your RAM and data):
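[mysqld]
# default is 8M; 256M is an illustrative value, not a tuned recommendation
bulk_insert_buffer_size=256M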
You do not need to restart mysql for the running server to pick this up. Just run this SQL command:
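SET GLOBAL bulk_insert_buffer_size = 1024 * 1024 * 256; -- same 256M as the my.cnf entry above

Only new connections inherit the new global value, so reconnect (i.e., restart the gunzip | mysql pipeline in a fresh session) before trying the load again.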
Give it a Try !!!