I dumped a database (sys_data) which is very big (800GB, all data in one ibdata file) from a remote server, but the dump got stuck on one table (tb_trade_376).
My dump command:
mysqldump -uxx -pxx -h192.168.1.xxx --single-transaction sys_data > /home/sys_data.sql
When the dump was blocked:
show processlist;
5306612 | root | 192.168.1.161:57180 | sys_data | Query | 23955 | Sending data | SELECT /*!40001 SQL_NO_CACHE */ * FROM `tb_trade_376`
On the other hand, I can dump the table tb_trade_376 successfully if I dump just that one table:
mysqldump -uxx -pxx -h192.168.1.xxx \
--single-transaction sys_data tb_trade_376 > /home/tb_trade_376.sql
This works well and quickly!
The table tb_trade_376 has about 700,000-800,000 rows.
What is the next step in investigating why I can't dump the whole database? How can I make it work?
Best Answer
Conjecture #1 : Single Transaction
The dump of the sys_data database has to create far more MVCC information to load and dump from ibdata1. (See the InnoDB Infrastructure Map and notice the section of ibdata1 that holds 1023 undo logs.) When you dump an entire database, mysqldump exports the tables in alphabetical order. Undo logs are populated with row data so changes can be rolled back, or reapplied during crash recovery, as needed; perhaps those logs are being loaded and unloaded by the earlier parts of the dump. The undo space populated by the dump of the previous tables (the tables before tb_trade_376) may need some housecleaning while the dump of tb_trade_376 is in progress. This would be particularly true on a very busy server where INSERTs, UPDATEs, and DELETEs are still hitting those earlier tables.

Conjecture #2 : InnoDB Buffer Pool
If you are not writing anything to the database at all, another place to look is the InnoDB Buffer Pool. Think about it: the data pages of every table being dumped have to be loaded into the Buffer Pool simply because you are doing a SELECT. The SQL_NO_CACHE hint prevents the query results from entering the query cache, but it does not prevent data from moving in and out of the Buffer Pool. The data pages read from the previous tables have to be invalidated and overwritten in the Buffer Pool for each and every table dumped before tb_trade_376.

Conjecture #3 : innodb_file_per_table
If innodb_file_per_table is disabled, then everything and its grandmother lives in ibdata1. What would it contain? ibdata1 would hold the data pages, the index pages, the table metadata, and the MVCC (undo) information for every InnoDB table. This means the Undo Logs are competing for space with the data and index pages of every table, which causes ibdata1 to grow rapidly. If you have innodb_file_per_table disabled, you need to clean up the InnoDB infrastructure to keep InnoDB tables out of ibdata1.
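Since the undo logs live in ibdata1, one rough way to watch this housekeeping pressure during the dump is InnoDB's "History list length" counter. A minimal sketch (the helper name is mine, not a standard tool) that pulls it out of SHOW ENGINE INNODB STATUS output:

```shell
# Sketch: extract InnoDB's "History list length" (undo records awaiting purge)
# from SHOW ENGINE INNODB STATUS text. A large, steadily growing value while
# the dump runs supports the undo-log conjectures above.
history_list_length() {
  awk '/History list length/ {print $4}'
}

# Typical use against a live server (assumes the standard mysql client):
#   mysql -uxx -pxx -h192.168.1.xxx -e 'SHOW ENGINE INNODB STATUS\G' \
#     | history_list_length
```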
Conjecture #4 : MyISAM Tables
How can MyISAM tables cause a problem? If any of the tables in the mysqldump are MyISAM and are still receiving INSERTs, UPDATEs, and DELETEs during the database dump, they can undermine the consistent snapshot that the --single-transaction option establishes midstream. This is plausible because MyISAM is a non-transactional storage engine: if any of the earlier tables are MyISAM and still receiving writes, all bets are off on every table in the dump sharing the same point in time. Each table dumped after a MyISAM table is encountered is basically on its own in a transactional sense.

SUMMARY
You may have one or more of these issues going on. Please compensate with one or more of the following: dump each table into its own file rather than one giant dump, enable innodb_file_per_table and rebuild the tables so they live outside ibdata1, and convert any MyISAM tables that are still receiving writes to InnoDB.
Give it a Try !!!
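As a concrete form of the per-table advice, here is a hedged sketch (the function name is mine; credentials and host are the placeholders from the question) that dumps each table into its own file, so a stall on one table does not hold up, or get held up by, the rest of the database:

```shell
# Sketch: dump every table of a database into its own file.
# Credentials/host are placeholders copied from the question; adjust to taste.
dump_tables_individually() {
  db="$1"
  outdir="$2"
  # -N (skip column names) yields bare table names, one per line
  mysql -uxx -pxx -h192.168.1.xxx -N -e "SHOW TABLES IN $db" |
  while read -r t; do
    mysqldump -uxx -pxx -h192.168.1.xxx --single-transaction \
      "$db" "$t" > "$outdir/$t.sql"
  done
}

# Example:
#   dump_tables_individually sys_data /home/sys_data_dump
```

Note the trade-off: separate per-table dumps give up the single cross-table snapshot that one --single-transaction run of the whole database would provide.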