What really jumps out at me is that 52% miss rate. It's possible your innodb_buffer_pool_size is simply too small.
Keep in mind that the InnoDB Buffer Pool caches data pages and index pages.
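You can compute the miss rate yourself from the buffer pool counters. Here is a sketch using information_schema.GLOBAL_STATUS (available in MySQL 5.1 through 5.6; in later versions these counters moved to performance_schema):

```sql
-- Miss rate: physical reads from disk vs. logical read requests
SELECT
    r.VARIABLE_VALUE  AS disk_reads,
    rr.VARIABLE_VALUE AS read_requests,
    100.0 * r.VARIABLE_VALUE / rr.VARIABLE_VALUE AS miss_pct
FROM information_schema.GLOBAL_STATUS r,
     information_schema.GLOBAL_STATUS rr
WHERE r.VARIABLE_NAME  = 'Innodb_buffer_pool_reads'
  AND rr.VARIABLE_NAME = 'Innodb_buffer_pool_read_requests';
```

A high miss_pct means InnoDB keeps going to disk for pages that should have been cached.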
How can you get a good size for it? Run this query:
SELECT
CONCAT(CEILING(ibps/POWER(1024,1)),'K') IBPS_KB,
CONCAT(CEILING(ibps/POWER(1024,2)),'M') IBPS_MB,
CONCAT(CEILING(ibps/POWER(1024,3)),'G') IBPS_GB
FROM (SELECT SUM(data_length+index_length) ibps
FROM information_schema.tables WHERE engine='InnoDB') A;
This will give you something like this:
mysql> SELECT
-> CONCAT(CEILING(ibps/POWER(1024,1)),'K') IBPS_KB,
-> CONCAT(CEILING(ibps/POWER(1024,2)),'M') IBPS_MB,
-> CONCAT(CEILING(ibps/POWER(1024,3)),'G') IBPS_GB
-> FROM (SELECT SUM(data_length+index_length) ibps
-> FROM information_schema.tables WHERE engine='InnoDB') A;
+-----------+---------+---------+
| IBPS_KB | IBPS_MB | IBPS_GB |
+-----------+---------+---------+
| 30333520K | 29623M | 29G |
+-----------+---------+---------+
1 row in set (11.43 sec)
mysql>
If the output recommends a size beyond 80% of the DB server's installed RAM, then cap innodb_buffer_pool_size at 80% of the installed RAM instead.
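For example, on a server with 32G of installed RAM (a hypothetical figure), the 29G from the sample output above exceeds the 80% ceiling of about 25G, so you would set in my.cnf:

```ini
# my.cnf -- assumes 32G installed RAM (hypothetical)
[mysqld]
# capped at ~80% of RAM, since the computed 29G exceeds the ceiling
innodb_buffer_pool_size = 25G
```

Remember this requires a restart on MySQL 5.1/5.5 (the variable only became dynamically resizable much later).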
Give it a Try !!!
UPDATE 2013-04-15 12:43EDT
Let's look at the definition of Innodb_buffer_pool_wait_free
Normally, writes to the InnoDB buffer pool happen in the background. However, if it is necessary to read or create a page and no clean pages are available, it is also necessary to wait for pages to be flushed first. This counter counts instances of these waits. If the buffer pool size has been set properly, this value should be small.
As stated, if the buffer pool size has been set properly, this value should be small. You may simply have lots of dirty pages in the Buffer Pool that need flushing to disk. You should be monitoring Innodb_buffer_pool_pages_dirty.
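Watching it is straightforward; these are standard InnoDB status counters:

```sql
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
```

Run them a few times over an interval to see whether the dirty count is growing or shrinking.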
There are two things you could do to improve the situation:
IMPROVEMENT #1 : Upgrade to the latest MySQL
I trust MySQL 5.5. I have a client going to MySQL 5.6.10 soon. I trust it as well. These versions of MySQL have the InnoDB Plugin standard. They flush dirty pages much more efficiently.
You can also tune InnoDB itself. Under MySQL 5.1, there are 4 read I/O threads and 4 write I/O threads. MySQL 5.5+ allows you to increase these for better InnoDB read and write performance. InnoDB for MySQL 5.5+ can also use multiple CPUs/cores. MySQL 5.1 can do this only if you run MySQL 5.1.38+ and install the InnoDB Plugin (IMHO too messy; go with MySQL 5.5/5.6). MySQL 5.1.27 cannot do this.
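On MySQL 5.5+, the thread counts are controlled by innodb_read_io_threads and innodb_write_io_threads. A sketch (the values here are illustrative, not a recommendation; tune to your storage):

```ini
# my.cnf -- MySQL 5.5+ only; requires a restart
[mysqld]
innodb_read_io_threads  = 8
innodb_write_io_threads = 8
```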
IMPROVEMENT #2 : Get Dirty Pages to Flush More Frequently
You can do this immediately with
SET GLOBAL innodb_max_dirty_pages_pct = 0;
The default value for innodb_max_dirty_pages_pct in MySQL 5.1 is 90. Drop this to zero (0). Then, start watching Innodb_buffer_pool_pages_dirty. On a busy write server, this should drop to about 1% of Innodb_buffer_pool_pages_total.
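To watch that ratio directly, something like this works (a sketch using information_schema.GLOBAL_STATUS, available in MySQL 5.1 through 5.6):

```sql
-- Percentage of buffer pool pages that are dirty
SELECT
    dirty.VARIABLE_VALUE AS pages_dirty,
    total.VARIABLE_VALUE AS pages_total,
    100.0 * dirty.VARIABLE_VALUE / total.VARIABLE_VALUE AS dirty_pct
FROM information_schema.GLOBAL_STATUS dirty,
     information_schema.GLOBAL_STATUS total
WHERE dirty.VARIABLE_NAME = 'Innodb_buffer_pool_pages_dirty'
  AND total.VARIABLE_NAME = 'Innodb_buffer_pool_pages_total';
```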
This is a complex topic which is very interesting - I/O, caching - i.e. what's really happening?
Percona always have good stuff about every aspect of MySQL performance - check here. Mark Leith (a senior MySQL development manager) is worth a look here. Be sure to follow the links to Baron Schwartz's and Brendan Gregg's stuff, both big hitters in the MySQL world. Morgan Tocker is a MySQL community manager and has this to say.
[EDIT - in response to comment by OP]
As I said - it's complex. What is I/O as far as Google/Amazon are concerned? See my own answer (interpretation) to this question below. How do they factor in caching? I'm sure they do it in the background. I do notice that:
Amazon appear to charge by GB/month in/out, so you don't care about disk or queries with them.
Google likewise appear to charge by data in/out. So, many small queries will be equal to a few large ones - seems fair.
Both also charge for RAM, so a complex app will also increase price - again, not unreasonably.
My reading of the data/in out charge is that this only counts for bandwidth between your end users and Amazon or Google - I/O processing done within Amazon/Google is covered by your arrangement with them.
Get your calculator out! :-) IANAL!
Best Answer
You can kind of do this from MySQL 5.6 onwards using information_schema. I have an example of finding the ideal buffer pool size on my blog here: http://www.tocker.ca/2013/05/31/estimating-mysqls-working-set-with-information_schema.html
The caveat is that you may need to either restart or lower the buffer pool size first. Inactive pages will just stay in memory if there is no need to make free space - which could skew your result on a server that's been running for a while, yet has plenty of memory.
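As a rough sketch of the kind of query involved (my interpretation, not necessarily the exact query from the blog post; information_schema.INNODB_BUFFER_PAGE exists from MySQL 5.6, and note that querying it on a busy server is itself expensive):

```sql
-- Approximate working set: index pages currently resident in the buffer pool
SELECT
    COUNT(*)                       AS pages_cached,
    ROUND(COUNT(*) * 16 / 1024, 1) AS approx_mb   -- assumes default 16K pages
FROM information_schema.INNODB_BUFFER_PAGE
WHERE PAGE_TYPE = 'INDEX';
```

Compare approx_mb against your configured innodb_buffer_pool_size to see how much of the pool is actually holding data.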