What really jumps out at me is that 52% miss rate. It's possible your innodb_buffer_pool_size is simply too small.
Keep in mind that the InnoDB Buffer Pool caches data pages and index pages.
How can you get a good size for it? Run this query:
SELECT
CONCAT(CEILING(ibps/POWER(1024,1)),'K') IBPS_KB,
CONCAT(CEILING(ibps/POWER(1024,2)),'M') IBPS_MB,
CONCAT(CEILING(ibps/POWER(1024,3)),'G') IBPS_GB
FROM (SELECT SUM(data_length+index_length) ibps
FROM information_schema.tables WHERE engine='InnoDB') A;
This will give you output something like this:
mysql> SELECT
-> CONCAT(CEILING(ibps/POWER(1024,1)),'K') IBPS_KB,
-> CONCAT(CEILING(ibps/POWER(1024,2)),'M') IBPS_MB,
-> CONCAT(CEILING(ibps/POWER(1024,3)),'G') IBPS_GB
-> FROM (SELECT SUM(data_length+index_length) ibps
-> FROM information_schema.tables WHERE engine='InnoDB') A;
+-----------+---------+---------+
| IBPS_KB | IBPS_MB | IBPS_GB |
+-----------+---------+---------+
| 30333520K | 29623M | 29G |
+-----------+---------+---------+
1 row in set (11.43 sec)
mysql>
If the output gives you settings beyond 80% of the DB Server's Installed RAM, then use 80% of whatever the installed RAM is as the innodb_buffer_pool_size.
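The 80% rule of thumb can be sketched numerically. Here is a hypothetical helper (not part of MySQL; the function name and the example RAM figure are illustrative), assuming you already have the byte total from the query above and the server's installed RAM:

```python
def recommend_buffer_pool(innodb_bytes, installed_ram_bytes):
    """Suggest an innodb_buffer_pool_size in bytes.

    innodb_bytes: SUM(data_length + index_length) from the
    information_schema query above.
    Caps the suggestion at 80% of installed RAM, per the rule of thumb.
    """
    cap = int(installed_ram_bytes * 0.80)
    return min(innodb_bytes, cap)

# Example: ~29 GB of InnoDB data on a hypothetical 16 GB server
# -> the data exceeds 80% of RAM, so the cap wins
gib = 1024 ** 3
print(recommend_buffer_pool(29 * gib, 16 * gib) // gib)  # 12
```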
Give it a Try !!!
UPDATE 2013-04-15 12:43EDT
Let's look at the definition of Innodb_buffer_pool_wait_free
Normally, writes to the InnoDB buffer pool happen in the background. However, if it is necessary to read or create a page and no clean pages are available, it is also necessary to wait for pages to be flushed first. This counter counts instances of these waits. If the buffer pool size has been set properly, this value should be small.
As stated, if the buffer pool size has been set properly, this value should be small. You may simply have lots of dirty pages in the Buffer Pool that need flushing to disk. You should be monitoring Innodb_buffer_pool_pages_dirty.
There are two things you could do to improve the situation:
IMPROVEMENT #1 : Upgrade to the latest MySQL
I trust MySQL 5.5. I have a client going to MySQL 5.6.10 soon. I trust it as well. These versions of MySQL have the InnoDB Plugin standard. They flush dirty pages much more efficiently.
You can also tune InnoDB. Under MySQL 5.1, there are 4 read I/O threads and 4 write I/O threads. MySQL 5.5+ lets you increase these (innodb_read_io_threads, innodb_write_io_threads) for better InnoDB read and write performance. InnoDB for MySQL 5.5+ can use multiple CPUs/cores. MySQL 5.1 can do this too if you run MySQL 5.1.38+ and install the InnoDB Plugin (IMHO too messy; go with MySQL 5.5/5.6). MySQL 5.1.27 cannot do this.
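For example, on a box with 8 or more cores, a my.cnf fragment might look like this (the thread counts here are illustrative, not a prescription; benchmark for your workload):

```ini
[mysqld]
# Defaults are 4 each in MySQL 5.5; raise on I/O-capable hardware
innodb_read_io_threads  = 8
innodb_write_io_threads = 8
```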
IMPROVEMENT #2 : Get Dirty Pages to Flush More Frequently
You can do this immediately with
SET GLOBAL innodb_max_dirty_pages_pct = 0;
The default value for innodb_max_dirty_pages_pct in MySQL 5.1 is 90. Drop this to zero(0). Then, start watching Innodb_buffer_pool_pages_dirty. On a busy write server, this should drop to 1% of Innodb_buffer_pool_pages_total.
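To watch that ratio, you can poll SHOW GLOBAL STATUS and compute the percentage. A minimal sketch (the dict of status values is assumed to have been fetched already, e.g. via your MySQL client library of choice):

```python
def dirty_page_pct(status):
    """Percent of the buffer pool that is dirty, from SHOW GLOBAL STATUS values."""
    dirty = int(status["Innodb_buffer_pool_pages_dirty"])
    total = int(status["Innodb_buffer_pool_pages_total"])
    return 100.0 * dirty / total

# Example values as they might come back from SHOW GLOBAL STATUS
sample = {
    "Innodb_buffer_pool_pages_dirty": "512",
    "Innodb_buffer_pool_pages_total": "65536",
}
print(f"{dirty_page_pct(sample):.2f}%")  # 0.78%
```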
MySQL can still write the changes to the binlogs (completed SQL commands) without concern for the results making it to disk.
InnoDB is responsible for flushing two-phase commits of data changes from its logs to their final destination (.ibd files for innodb_file_per_table, or ibdata1 otherwise). If InnoDB cannot flush data from its logs, housecleaning info gets stuck in the Redo Logs (ib_logfile0, ib_logfile1), the Undo Logs (1023 of them inside ibdata1), and the InnoDB Buffer Pool. Please see the InnoDB Infrastructure Map. Consequently, new transactions cannot be written (or even started).
This is very possible if the redo logs are too small for your write load.
Please run the following
Flush everything from all logs and shut down mysql
mysql -uroot -p -Ae"SET GLOBAL innodb_fast_shutdown = 0;"
service mysql stop
Next, change the logs to have a larger size along with a larger log buffer
[mysqld]
innodb_log_file_size = 1G
innodb_log_buffer_size = 32M
Then, move the old logs out of the way and start mysql
cd /var/lib/mysql
mv ib_logfile0 ib_logfile0.bak
mv ib_logfile1 ib_logfile1.bak
service mysql start
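Before settling on 1G, you can estimate how much redo the server actually writes. A common technique is to sample Innodb_os_log_written twice and size the log group to hold roughly an hour of writes. A sketch (the function and its parameters are illustrative; it assumes you have taken the two samples yourself):

```python
def suggested_log_file_size(bytes_written, sample_seconds,
                            n_log_files=2, hours_to_hold=1.0):
    """Size each ib_logfile so the log group holds ~hours_to_hold of redo.

    bytes_written: delta of Innodb_os_log_written over the sample window.
    n_log_files:   number of ib_logfile* files (innodb_log_files_in_group).
    """
    per_hour = bytes_written / sample_seconds * 3600
    return int(per_hour * hours_to_hold / n_log_files)

# Example: 64 MB of redo written during a 60-second sample
# -> 3840 MB/hour of redo, split across 2 log files
mib = 1024 ** 2
print(suggested_log_file_size(64 * mib, 60) // mib)  # 1920
```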
I discussed this before : Mysql Innodb: InnoDB: ERROR: the age of the last checkpoint is InnoDB: which exceeds the log group capacity
Give it a Try !!!
Best Answer
Your answers are in the source code: innodb_buffer_pool_pages_dirty. Most of these variables are maintained in memory directly or through very simple calculations. There is no danger of durability loss, as the ones that are "essential" for InnoDB to work can be recalculated from the transaction log or are 0 at startup.
Not really: the LSN is just a byte offset; it is maintained in memory (log_sys->lsn) and, in the case of a failure, it can be recovered from the header of the transaction log (plus pure file offsets). It is changed on commit/log flush/tablespace flush, both in memory and on disk (the exact method depends on the durability configuration, buffering type, etc.).
For basic InnoDB stuff, you can check this blog post by the Oracle InnoDB team and/or check the source code for InnoDB.