innodb_log_file_size
determines the size of the redo log. In short, this is a temporary storage area on disk where data changes are buffered before being written to the actual table data files. It also holds temporary data that may be generated during the course of a transaction.
The first, most obvious advantage of this buffer is that it mitigates the problem of random writes, which is what you would otherwise get in a typical real-life workload. Writes to the redo log are sequential, which is quicker. The randomly located changes can then be committed to the actual data tables later, when, hopefully, activity has decreased.
The second advantage is that it provides protection against data corruption in case of a crash. Since a coherent version of the data (almost) always exists, either in the data tables or in the redo log, the risk of corruption is lowered: it should always be possible to restore the database to a coherent state. Other mechanisms exist on top of this one for the same purpose (e.g. the doublewrite buffer).
As the manual puts it:
Sensible values range from 1MB to 1/N-th of the size of the buffer pool, where N is the number of log files in the group. (...) The larger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk I/O. Larger log files also make crash recovery slower.
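As a minimal my.cnf sketch of that rule of thumb — the numbers below are hypothetical (a 4GB buffer pool and the default two log files), not a recommendation, and must be sized against your own buffer pool and write load:

```ini
[mysqld]
# Upper bound per the manual's rule of thumb:
# buffer pool (4G) / number of log files (2) = 2G per file.
# 1G per file stays under that while keeping checkpoint flushing low.
innodb_log_file_size      = 1G
innodb_log_files_in_group = 2
```

Remember the trade-off quoted above: larger files mean less checkpoint flushing but slower crash recovery.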
innodb_log_buffer_size
defines the size of the memory buffer where data is stored before being flushed to the redo log.
The larger the better, but make sure to leave enough memory for other buffers. Also, I do not think there is any point in allocating more than the total redo log size (i.e. innodb_log_file_size x innodb_log_files_in_group).
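For example (hypothetical numbers): with two 512M redo log files, the total redo space is 1G, and the log buffer should stay well under that:

```ini
[mysqld]
innodb_log_file_size      = 512M
innodb_log_files_in_group = 2     # total redo space: 2 x 512M = 1G
innodb_log_buffer_size    = 64M   # comfortably below the 1G total; sized for large transactions
```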
Books the size of a dictionary can be written about database tuning; it cannot be covered here. This manual page is a good starting point. If you have identified a specific bottleneck, please come back with the relevant information and I (we) will be happy to help you.
You can kind of do this from MySQL 5.6 onwards using information_schema. I have an example of finding the ideal buffer pool size on my blog here: http://www.tocker.ca/2013/05/31/estimating-mysqls-working-set-with-information_schema.html
The caveat is that you may need to either restart the server or lower the buffer pool size first. Inactive pages will simply stay in memory if there is no need to free space, which could skew your results on a server that has been running for a while yet has plenty of memory.
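A rough version of that estimate (a sketch, not the exact query from the blog post) counts the index pages currently resident in the buffer pool. Note that querying INNODB_BUFFER_PAGE is itself expensive on a large buffer pool, so avoid running it on a busy production server:

```sql
-- Approximate working set: index pages currently cached in the buffer pool
SELECT COUNT(*) AS cached_pages,
       ROUND(COUNT(*) * @@innodb_page_size / 1024 / 1024) AS cached_mb
FROM information_schema.INNODB_BUFFER_PAGE
WHERE PAGE_TYPE = 'INDEX';
```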
Best Answer
Background info... The "torn page" problem occurs when part of an InnoDB block is written to disk, but the physical write dies before all of the low-level (usually 512-byte) sectors are written. This leaves InnoDB with an unreadable block. The doublewrite buffer, at the cost of an extra write, makes it possible to recover from a torn page.
The hardware needs to "guarantee" atomic writes of 16KB. FusionIO is the only drive that I know of to provide that guarantee, and that gives them an advantage: they can tell you it is safe to turn off doublewrites.
RAID controllers with a battery-backed write cache should be able to provide such a guarantee, but it is unclear whether the OS confuses the issue. Then again, with such a cache, writes are virtually free, so turning off the doublewrite buffer does not gain you much.
I hesitate to say yes or no to your question because I don't know what is underneath the "filesystem". It may split the InnoDB block into multiple 512-byte disk writes, thereby still allowing torn pages.
O_DIRECT is a related topic to look at.
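If the hardware really does guarantee atomic 16K writes, the relevant knobs look like this (do not copy this blindly; leaving doublewrite on is the safe default):

```ini
[mysqld]
innodb_flush_method = O_DIRECT   # bypass the OS page cache so writes go straight to the device
innodb_doublewrite  = 0          # ONLY safe if the storage guarantees atomic 16K page writes
```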
Setting innodb_page_size to 4K has other issues (even if you could get the drive to play nice) --