First of all, take a look at the InnoDB Architecture (courtesy of Percona CTO Vadim Tkachenko)
(InnoDB Architecture diagram)
Your status for the Buffer Pool says
Buffer pool size 1310719
That's your Buffer Pool size in pages. Each page is 16K, so that works out to 20G - 16K (1310720 pages would be exactly 20G).
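A quick sanity check of that arithmetic inside MySQL itself (nothing assumed here beyond the page count and the 16K page size):
mysql> SELECT FORMAT((1310719 * 16384) / POWER(1024,3), 3) AS BufferPoolGB;
+--------------+
| BufferPoolGB |
+--------------+
| 20.000       |
+--------------+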
Please note the following: you pushed data into the InnoDB Buffer Pool. What changed?
Buffer pool size 1310719
Free buffers 271419 (It was 347984)
Database pages 1011886 (It was 936740)
Old database pages 373510 (It was 345808)
Modified db pages 4262 (It was 0)
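As a side note: if you would rather not parse SHOW ENGINE INNODB STATUS, four of those five figures are exposed as plain status variables on any modern MySQL:
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
Innodb_buffer_pool_pages_total, _free, _data, and _dirty map directly to Buffer pool size, Free buffers, Database pages, and Modified db pages above (Old database pages has no direct counterpart in stock MySQL).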
Also, note the difference between the Buffer pool size and the Database pages:
1310719 (Buffer pool size) - 1011886 (Database pages) = 298833
That's 298833 InnoDB pages. How much space is that ???
mysql> select FORMAT(((1310719 - 1011886) * 16384) / power(1024,3),3) SpaceUsed;
+-----------+
| SpaceUsed |
+-----------+
| 4.560     |
+-----------+
That's 4.56GB. That space is used for the Insert Buffer section of the InnoDB Buffer Pool (a.k.a. the Change Buffer). It is used to buffer changes to nonunique secondary indexes before merging them into the System Tablespace file (which we have all come to know as ibdata1).
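On MySQL 5.6 and later you can see, and cap, how much of the Buffer Pool the Change Buffer may claim via innodb_change_buffer_max_size (the value shown below is the default of 25 percent; the variable is dynamic, so no restart is needed to change it):
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_change_buffer_max_size';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| innodb_change_buffer_max_size | 25    |
+-------------------------------+-------+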
The InnoDB Storage Engine manages the Buffer Pool's internals. Therefore, InnoDB by itself will never surpass 62.5% of RAM. What is more, the RAM taken for the Buffer Pool is never given back.
WHERE IS THE 70.2% OF RAM COMING FROM ???
Look back at the output of mysqltuner.pl at these lines:
[OK] Maximum possible memory usage: 22.6G (82% of installed RAM)
Key buffer size / total MyISAM indexes: 2.0G/58.7M
[--] Total buffers: 22.2G global + 2.7M per thread (151 max threads)
mysqld allocates RAM in more ways than just the Buffer Pool: global buffers (the 22.2G above) plus per-thread buffers (2.7M each, multiplied by up to 151 connections). Any small spike in DB connections will raise RAM usage past the 62.5% threshold you see for InnoDB alone.
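The "maximum possible" figure is just that arithmetic, and you can reproduce it from the numbers in mysqltuner's output above:
mysql> SELECT FORMAT(22.2 + (2.7 * 151) / 1024, 1) AS MaxPossibleGB;
+---------------+
| MaxPossibleGB |
+---------------+
| 22.6          |
+---------------+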
MyISAM (Side Note)
What catches my eye is
Key buffer size / total MyISAM indexes: 2.0G/58.7M
Since you have so few MyISAM indexes, you could set key_buffer_size to 64M.
You do not need to restart mysql for that. Just run
SET GLOBAL key_buffer_size = 1024 * 1024 * 64;
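You can verify the change took effect immediately (64M is 67108864 bytes):
mysql> SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';
+-----------------+----------+
| Variable_name   | Value    |
+-----------------+----------+
| key_buffer_size | 67108864 |
+-----------------+----------+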
Then, modify this in my.cnf
[mysqld]
key_buffer_size = 64M
This will give back almost 2GB of RAM to the OS. Your VM will simply love you for it !!!
Give it a Try !!!
CAVEAT
Running FLUSH TABLES against InnoDB tables simply closes the file handles on the .ibd files. It will not really push changes directly; the changes have to migrate their way through the pipes of InnoDB. This is why you see the spike in Modified db pages. Those 4262 changed pages (66.59 MB) get flushed when InnoDB schedules its flushing.
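How aggressively InnoDB flushes is governed in part by innodb_max_dirty_pages_pct, the percentage of the Buffer Pool allowed to be dirty before flushing intensifies (the value shown below is the usual MySQL 5.x default of 75; it is dynamic as well):
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_max_dirty_pages_pct';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| innodb_max_dirty_pages_pct | 75    |
+----------------------------+-------+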
In addition to what was already stated, I took a look at the amount of freeable RAM on the instance. My RAM usage was also very good, with plenty of freeable RAM to spare.
The waits shown in the images are log-file related -- the only thing that should really affect those is your disk IO. So began the hunt: my CloudWatch metrics were showing a high number of writes (about 230 per second) and a very low number of reads (1 to 2 per second). That was somewhat of a surprise, but it turned out some index rebuilding was being done on a table with over 5 million rows in it.
From this point, I started wondering what my IOPS limit was for my RDS instance. As it turned out, I was using General Purpose (gp2) storage, not Provisioned IOPS, and gp2's baseline is 3 IOPS per GB. Since my database was large but not enormous, I had only selected 50GB of General Purpose SSD storage, which caps my baseline at about 150 IOPS. There are burst IOPS, up to 3,000 for roughly 30-35 minutes, but I had already depleted that burst balance and wasn't aware I needed to monitor those credits in CloudWatch to know when I might start getting throttled.
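That 30-35 minute figure follows from gp2's credit model: per AWS's documentation, each gp2 volume starts with a burst bucket of 5.4 million I/O credits, so bursting at 3,000 IOPS against a 150 IOPS baseline (assuming a full bucket and no other traffic) drains it in roughly half an hour:
mysql> SELECT FORMAT(5400000 / (3000 - 150) / 60, 1) AS BurstMinutes;
+--------------+
| BurstMinutes |
+--------------+
| 31.6         |
+--------------+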
In the end, I increased the size of my RDS Storage and the problem immediately subsided.
Best Answer
There isn't much you can do about this with RDS. It means the OS decided that whatever got swapped out was less valuable than more page cache. You have no control over this with huge pages or vm.swappiness controls like you would on an EC2 instance.