Converting a large schema to file-per-table, and I will be performing a mysqldump/reload with --all-databases. I have edited my.cnf and changed "innodb_flush_log_at_trx_commit=2" to speed up the load. I am planning to run "SET GLOBAL innodb_max_dirty_pages_pct=0;" at some point before the dump. I am curious to know which combination of settings will get me the fastest dump and reload times?
SCHEMA stats:
26 myisam tables
413 innodb
~240GB of data
[--opt = --disable-keys; --extended-insert; --quick, etc.]
--no-autocommit ??
vs prepending session vars like:
"SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
Are the mysqldump options equivalent or not really?
Thanks for your advice!
Best Answer
ASPECT #1
While setting innodb_max_dirty_pages_pct to 0 is good to do prior to a dump, you will have to wait until the dirty page count falls below 1% of the InnoDB Buffer Pool size. Here is how you can measure it:
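The measurement query itself is not in this copy of the answer; a sketch consistent with the surrounding text (it computes the dirty-page share of the buffer pool, assuming the information_schema.GLOBAL_STATUS table available in MySQL 5.1/5.5) would be:

```sql
-- Sketch: percentage of InnoDB buffer pool pages that are dirty.
-- Assumes information_schema.GLOBAL_STATUS (MySQL 5.1/5.5 era).
SELECT (SELECT variable_value
        FROM information_schema.global_status
        WHERE variable_name = 'Innodb_buffer_pool_pages_dirty') * 100.0
     / (SELECT variable_value
        FROM information_schema.global_status
        WHERE variable_name = 'Innodb_buffer_pool_pages_total')
       AS PercentageDirty;
```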
Keep running this report until PercentageDirty gets close to 1.00. Perhaps you could just set innodb_max_dirty_pages_pct to 0 one hour before the dump. If you do not change innodb_max_dirty_pages_pct, a mysqldump will force a flush of dirty blocks involving the table you are dumping.

ASPECT #2
You should not have to prepend
"SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
because mysqldump already includes them at the beginning of the dump. Here is a sample mysqldump header; please note the two lines after TIME_ZONE:
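This is what a stock mysqldump header looks like (taken from typical MySQL 5.x output; the exact version-comment numbers vary by release). The UNIQUE_CHECKS and FOREIGN_KEY_CHECKS lines right after TIME_ZONE are the ones that make prepending those SETs unnecessary:

```sql
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
```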
ASPECT #3
Please run this query:
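The query was lost in this copy; a sketch that fits the description (it counts the MyISAM tables belonging to the mysql system schema) would be:

```sql
-- Sketch: count MyISAM tables inside the mysql system schema
SELECT COUNT(1)
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema = 'mysql';
```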
I ran this and got 25 for MySQL 5.5.23. Since you have 26 MyISAM tables, only 1 of them is outside the mysql schema. To find it, run this:
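A sketch of such a lookup (it lists the MyISAM tables that live outside the system schemas):

```sql
-- Sketch: find MyISAM tables outside the mysql system schema
SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema');
```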
If you stop writing to the one lone table, you should be able to mysqldump all databases just fine.
ASPECT #4
All the options that --opt enables are adequate. No need to alter it.

ASPECT #5
You may want to dump the databases into different files. Please see my Apr 17, 2011 post, How can I optimize a mysqldump of a large database?, on how to script parallel mysqldumps.
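A minimal sketch of that idea (the credentials and exclusion list are placeholders, not from the linked post; assumes mysql and mysqldump are on the PATH and the server is reachable). Each database goes to its own file, and the dumps run in parallel as background jobs:

```shell
#!/bin/sh
# Sketch only: one mysqldump per database, run in parallel.
# -uroot -pMyPassword are placeholder credentials.
MYSQL_CONN="-uroot -pMyPassword"
DBLIST=$(mysql ${MYSQL_CONN} -ANe "SHOW DATABASES" \
         | grep -Ev '^(information_schema|performance_schema)$')
for DB in ${DBLIST}
do
    # --single-transaction gives a consistent snapshot for InnoDB tables;
    # the MyISAM tables still need writes quiesced, as noted in ASPECT #3.
    mysqldump ${MYSQL_CONN} --single-transaction ${DB} > ${DB}.sql &
done
wait   # block until every background mysqldump has finished
```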