Another way to get past these possible problems is to use a Galera Cluster (e.g., MariaDB 10 or PXC). With Galera, you can (should) be sloppy about such syncs; instead, rely on the cluster's ability to recover automatically and efficiently from a dead machine, whether dead from power, OS, or whatever. One caveat, though: you need to make sure a single failure (such as power) won't kill all nodes in the cluster simultaneously.
With Galera, you can write to and read from any node. Three is the minimum number of nodes needed to provide automatic recovery from the loss of any one node. You can configure it to send all writes to a specific node, thereby mimicking a Master+Slaves configuration. (In some applications, this is actually beneficial.)
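As a rough illustration of the 3-node setup described above, here is a minimal my.cnf sketch; the cluster name and IP addresses are placeholders, not from the original answer:

```ini
# Minimal sketch of one node in a 3-node Galera cluster (my.cnf fragment).
# Node addresses and names below are made-up examples.
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=example_cluster
wsrep_cluster_address=gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_address=10.0.0.1
```

Directing all writes to one node is then a matter of application routing (or a proxy), not of the cluster config itself.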
Back to your situation. Set both of those settings to 1 on the Master; set them to 50 and 2 for the Slaves, then... If a Slave dies, assume it is corrupt and rebuild it; do not try to recover the data, since the necessary syncs may not have been done. To lighten the load on the Master, don't send standalone reads to it.
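The two settings are not named in this excerpt; assuming (based on the values 1, 50, and 2) that they are sync_binlog and innodb_flush_log_at_trx_commit, the advice would look like this:

```ini
# Assumption: "both of those settings" refers to sync_binlog and
# innodb_flush_log_at_trx_commit; this is inferred, not stated above.

# Master: full durability
sync_binlog=1
innodb_flush_log_at_trx_commit=1

# Slaves: relaxed, since a dead Slave is rebuilt rather than recovered
sync_binlog=50
innodb_flush_log_at_trx_commit=2
```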
You already linked rolando's post on how to do the migration. You can change the setting online to enable per-table files, but this alone will not shrink your huge ibdata file, even though new tables will get individual files afterwards.
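For reference, enabling the setting online only affects tables created or rebuilt afterwards; each existing table must be rebuilt to move out of ibdata1, and ibdata1 itself never shrinks. A sketch (the table name is a placeholder):

```sql
-- Enable file-per-table for subsequently (re)built tables:
SET GLOBAL innodb_file_per_table = ON;

-- Rebuild an existing table so it moves into its own .ibd file:
ALTER TABLE mydb.mytable ENGINE=InnoDB;
```

Even after rebuilding every table, reclaiming the space held by ibdata1 requires the dump-and-reload procedure from rolando's post.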
To estimate the time needed, try the migration on another installation recreated from your backups. That way you will get reasonable numbers for how long the process will take (as long as the test system has specs and hardware similar to production), and you can rehearse it before laying hands on the production system.
If you have hard uptime requirements, another way to do this online might be through replication. To create a slave, you have to create an initial mysqldump with special parameters, but this shouldn't be a problem, since I guess you create your backups via dumps anyway?
When setting up the replication, you can let the slave run with innodb_file_per_table
even though the master does not. Once everything is in sync, switch the master's IP over to the slave, promoting it to be the new master.
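The seeding step might look roughly like this; the exact options depend on your setup, and the filename is a placeholder. --master-data embeds the binlog coordinates the slave needs for CHANGE MASTER TO:

```shell
# On the master: consistent dump with binlog position recorded.
mysqldump --single-transaction --master-data=2 --all-databases \
  | pbzip2 -c > seed.sql.bz2

# On the slave (running with innodb_file_per_table=ON),
# replay the dump, then start replication with CHANGE MASTER TO
# using the coordinates found in the dump's header comment:
pbzip2 -dc seed.sql.bz2 | mysql
```

Because the slave rebuilds every table from the dump, its tablespace ends up in per-table files regardless of how the master stores them.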
If all this is not possible due to a lack of hardware, then your system with the harsh uptime requirements was perhaps sold too cheaply, and you now have the thankless job of making something work under risks that should have been addressed in the initial planning by procuring an adequate test setup alongside your production setup.
To speed up your mysqldumps, try piping them directly into parallel bzip2 (pbzip2); this puts more cores to work and saves time by reducing the needed disk accesses. Roughly like this:
# DUMP
mysqldump ... | pbzip2 -c > MYSQLDUMP.sql.bz2
# REPLAY
pbzip2 -dc MYSQLDUMP.sql.bz2 | mysql (...)
Best Answer
The answer depends on which version of MySQL you are using. Meanwhile, here are two 'old' answers:
innodb_log_file_size should be at least 10 times the size of the largest BLOB or TEXT being used. (That has been relaxed somewhat recently.)
Adjust innodb_log_file_size so the following is roughly 60. This computes how often (in minutes) the log is rotated. Once every 60 minutes is recommended, but if this gives you anywhere between 20 and 180, your setting is probably fine.
The recovery time has improved recently. So, again, upgrading may be useful.
Meanwhile, adjust according to the above guidelines.
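The computation referred to above ("so the following is roughly 60") does not appear in this excerpt; a widely used rule of thumb, assumed here, is Uptime / 60 * innodb_log_file_size / Innodb_os_log_written, with the first value from SHOW GLOBAL STATUS, the second from SHOW GLOBAL VARIABLES, and the third from SHOW GLOBAL STATUS. A quick sketch with made-up sample numbers:

```shell
# Estimate minutes between InnoDB log rotations (rule of thumb, assumed).
# Values below are made-up examples: 1000 h uptime, 512 MiB log file,
# 500 GiB written to the log so far.
awk 'BEGIN {
  uptime  = 3600000;        # Uptime, in seconds
  size    = 536870912;      # innodb_log_file_size, in bytes
  written = 536870912000;   # Innodb_os_log_written, in bytes
  print int(uptime / 60 * size / written)   # minutes per rotation
}'
# → 60
```

If the result falls well below 20, the log file is likely too small for the write load; well above 180, and it is larger than it needs to be.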