MySQL – High Number of Read Misses and Pages To Be Flushed

Tags: innodb, mysql

I am running a MySQL database backend for a Moodle installation, and after a few months performance really starts to suffer (up to 30 seconds for some pages to load). Investigating the InnoDB buffer pool, I found that the buffer pool size seemed to be correct (innodb_buffer_pool_wait_free = 0). However, I also found an exceedingly high percentage of Read Misses (52%) and what seems like a rather large number of Pages To Be Flushed (31 million). I'm currently running the slow query log, but the lag on page loads seems like too much to come from simply an unoptimized query.

I haven't been able to find any explanation of why both of those could be so high. Does anybody have an explanation for why Read Misses and Pages To Be Flushed would reach such levels?
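For reference, the 52% figure appears to be derived from the standard status counters: Innodb_buffer_pool_reads (reads that had to hit disk) divided by Innodb_buffer_pool_read_requests (all logical read requests). Something like this reproduces it, assuming MySQL 5.1-5.6, where these counters are exposed in information_schema.global_status:

-- Buffer pool read miss rate: physical reads / logical read requests
SELECT CONCAT(ROUND(100 *
    (SELECT variable_value FROM information_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_reads') /
    (SELECT variable_value FROM information_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_read_requests'), 2), '%') AS read_miss_pct;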

Update:
I am restarting the servers on a weekly basis during a scheduled down period. I still cannot imagine why this number is getting so large. Is there no built-in auto-flush mechanism?

Best Answer

What really jumps out at me is that 52% miss rate. It's possible your innodb_buffer_pool_size is simply too small.

Keep in mind that the InnoDB Buffer Pool caches data pages and index pages.

How can you get a good size for it? Run this query:

SELECT
    CONCAT(CEILING(ibps/POWER(1024,1)),'K') IBPS_KB,
    CONCAT(CEILING(ibps/POWER(1024,2)),'M') IBPS_MB,
    CONCAT(CEILING(ibps/POWER(1024,3)),'G') IBPS_GB
FROM (SELECT SUM(data_length+index_length) ibps
FROM information_schema.tables WHERE engine='InnoDB') A;

This will give you something like this:

mysql> SELECT
    ->     CONCAT(CEILING(ibps/POWER(1024,1)),'K') IBPS_KB,
    ->     CONCAT(CEILING(ibps/POWER(1024,2)),'M') IBPS_MB,
    ->     CONCAT(CEILING(ibps/POWER(1024,3)),'G') IBPS_GB
    -> FROM (SELECT SUM(data_length+index_length) ibps
    -> FROM information_schema.tables WHERE engine='InnoDB') A;
+-----------+---------+---------+
| IBPS_KB   | IBPS_MB | IBPS_GB |
+-----------+---------+---------+
| 30333520K | 29623M  | 29G     |
+-----------+---------+---------+
1 row in set (11.43 sec)

mysql>

If the output recommends a size beyond 80% of the DB server's installed RAM, then use 80% of the installed RAM as the innodb_buffer_pool_size.
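Note that innodb_buffer_pool_size is not a dynamic variable in MySQL 5.1/5.5, so the change goes in my.cnf and needs a mysqld restart. A minimal sketch, assuming a hypothetical server with 32G of installed RAM (size it to your own hardware):

[mysqld]
# ~80% of a hypothetical 32G of installed RAM; adjust to your server
innodb_buffer_pool_size = 26G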

Give it a Try !!!

UPDATE 2013-04-15 12:43 EDT

Let's look at the definition of Innodb_buffer_pool_wait_free

Normally, writes to the InnoDB buffer pool happen in the background. However, if it is necessary to read or create a page and no clean pages are available, it is also necessary to wait for pages to be flushed first. This counter counts instances of these waits. If the buffer pool size has been set properly, this value should be small.

As stated, if the buffer pool size has been set properly, this value should be small. You may simply have lots of dirty pages in the Buffer Pool that need flushing to disk. You should be monitoring Innodb_buffer_pool_pages_dirty.
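A quick way to keep an eye on it:

-- pages_dirty is the one to watch; pages_total gives the denominator
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';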

There are two things you could do to improve the situation:

IMPROVEMENT #1 : Upgrade to the latest MySQL

I trust MySQL 5.5. I have a client going to MySQL 5.6.10 soon, and I trust it as well. These versions ship with the InnoDB Plugin as standard, and they flush dirty pages much more efficiently.

You can also tune InnoDB. MySQL 5.5+ gives you 4 read I/O threads and 4 write I/O threads by default (innodb_read_io_threads and innodb_write_io_threads) and allows you to increase these for better InnoDB read and write performance. InnoDB in MySQL 5.5+ can also use multiple CPUs/cores. MySQL 5.1 can do this only if you run MySQL 5.1.38+ and install the InnoDB Plugin (IMHO too messy; go with MySQL 5.5/5.6). MySQL 5.1.27 cannot do this.
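If you do upgrade, note that the I/O thread counts are startup options, not dynamic variables, so they belong in my.cnf. A sketch, assuming the server has spare cores and disk I/O capacity to exploit:

[mysqld]
# MySQL 5.5+ only; the defaults are 4 and 4
innodb_read_io_threads  = 8
innodb_write_io_threads = 8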

IMPROVEMENT #2 : Get Dirty Pages to Flush More Frequently

You can do this immediately with

SET GLOBAL innodb_max_dirty_pages_pct = 0;

The default value for innodb_max_dirty_pages_pct in MySQL 5.1 is 90. Drop this to zero (0), then start watching Innodb_buffer_pool_pages_dirty. On a busy write server, it should drop to about 1% of Innodb_buffer_pool_pages_total.
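To watch the ratio drain, something like this works on MySQL 5.1-5.6, where these counters are exposed in information_schema.global_status:

-- Percentage of the buffer pool that is currently dirty
SELECT CONCAT(ROUND(100 *
    (SELECT variable_value FROM information_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_pages_dirty') /
    (SELECT variable_value FROM information_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_pages_total'), 2), '%') AS pct_dirty;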