First of all, take a look at the InnoDB Architecture (courtesy of Percona CTO Vadim Tkachenko)
InnoDB
Your status for the Buffer Pool says
Buffer pool size 1310719
That's your Buffer Pool size in pages. Each page is 16K, so that works out to 20G - 16K.
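As a quick sanity check, that arithmetic can be reproduced (assuming the default 16 KiB InnoDB page size):

```python
# Buffer pool size is reported in pages; the default InnoDB page size is 16 KiB
PAGE_SIZE = 16 * 1024
pages = 1310719

total_bytes = pages * PAGE_SIZE
print(f"{total_bytes / 1024**3:.5f} GiB")  # 19.99998 GiB: one page shy of 20 GiB
```

1310720 pages would be exactly 20G, so the reported 1310719 pages is precisely one 16K page short of it.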
Please note the following: you pushed data into the InnoDB Buffer Pool. What changed?
Buffer pool size 1310719
Free buffers 271419 (It was 347984)
Database pages 1011886 (It was 936740)
Old database pages 373510 (It was 345808)
Modified db pages 4262 (It was 0)
Also, note the difference between the Buffer Pool Size in Pages.
1310719 (Buffer pool size) - 1011886 (Database pages) = 298833
That's 298833 InnoDB pages. How much space is that ???
mysql> select FORMAT(((1310719 - 1011886) * 16384) / power(1024,3),3) SpaceUsed;
+-----------+
| SpaceUsed |
+-----------+
| 4.560     |
+-----------+
That's 4.56GB. That space is reserved for the Insert Buffer section of the InnoDB Buffer Pool (a.k.a. the Change Buffer). It is used to buffer changes to nonunique secondary indexes before merging them into the System Tablespace file (which we have all come to know as ibdata1).
The InnoDB Storage Engine manages the Buffer Pool's internals, so InnoDB itself will never push past 62.5% of RAM. What is more, RAM allocated to the Buffer Pool is never given back to the OS.
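The 62.5% figure follows from the numbers above, assuming roughly 32G of installed RAM (a figure inferred from the quoted percentages, not stated outright):

```python
# Assumed: ~32 GB installed RAM (inferred from the percentages in the question)
buffer_pool_gb = 20
installed_ram_gb = 32
print(f"{100 * buffer_pool_gb / installed_ram_gb:.1f}%")  # 62.5%
```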
WHERE IS THE 70.2% OF RAM COMING FROM ???
Look back at the output of mysqltuner.pl
at these lines
[OK] Maximum possible memory usage: 22.6G (82% of installed RAM)
Key buffer size / total MyISAM indexes: 2.0G/58.7M
[--] Total buffers: 22.2G global + 2.7M per thread (151 max threads)
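mysqltuner's "maximum possible memory usage" is essentially global buffers plus per-thread buffers multiplied by the connection limit; a rough sketch reproducing the 22.6G figure from the lines above:

```python
# Worst case: every allowed connection uses its full per-thread buffers
global_buffers_gb = 22.2
per_thread_mb = 2.7
max_threads = 151

max_memory_gb = global_buffers_gb + per_thread_mb * max_threads / 1024
print(f"{max_memory_gb:.1f}G")  # 22.6G, matching the mysqltuner report
```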
mysqld allocates RAM in two major ways: global buffers (22.2G here) and per-thread buffers (2.7M for each of up to 151 connections). Any small spike in DB Connections will raise RAM usage past the 62.5% threshold you see for InnoDB.
MyISAM (Side Note)
What catches my eye is
Key buffer size / total MyISAM indexes: 2.0G/58.7M
Since you have so few MyISAM indexes (just 58.7M worth), you could set key_buffer_size to 64M.
You do not need to restart mysql for that. Just run
SET GLOBAL key_buffer_size = 1024 * 1024 * 64;
Then, modify this in my.cnf
[mysqld]
key_buffer_size = 64M
This will give the OS back almost 2GB of RAM. Your VM will simply love you for it !!!
Give it a Try !!!
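The RAM handed back is just the difference between the old and new key buffer sizes:

```python
old_key_buffer_mb = 2 * 1024  # key_buffer_size = 2G
new_key_buffer_mb = 64        # key_buffer_size = 64M

freed_gb = (old_key_buffer_mb - new_key_buffer_mb) / 1024
print(f"{freed_gb:.2f} GB freed")  # 1.94 GB freed
```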
CAVEAT
Running FLUSH TABLES on InnoDB tables simply closes the file handles against the .ibd files. It does not really push changes out directly; the changes have to migrate their way through the pipes of InnoDB. This is why you see the spike in Modified db pages. The 4262 changed pages (66.59 MB) get flushed when InnoDB schedules its next flush.
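The 66.59 MB figure is just the modified page count times the page size:

```python
# Pending dirty-page volume: modified pages * default 16 KiB page size
PAGE_SIZE = 16 * 1024
modified_pages = 4262
print(f"{modified_pages * PAGE_SIZE / 1024**2:.2f} MB")  # 66.59 MB waiting to be flushed
```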
Best Answer
MariaDB's memory usage grows over time. It stops growing when certain configurable settings are hit. In practice, it mostly stops growing long before the 'theoretical' limit is hit.
The buffer_pool is allocated only in RAM. It is the main memory user that grows over time, stopping when it hits innodb_buffer_pool_size. Since the buffer_pool is a "cache", the data can be much bigger than the buffer_pool; the downside is the need to do more I/O to pull "blocks" of data (or indexes) into the cache, bumping out other blocks.

If all of the activity is concentrated on a subset of the data (eg, "recent" items), then the buffer_pool grows until "big enough" and may not actually expand to innodb_buffer_pool_size. I/O activity will be low. If the buffer_pool is bigger than all the InnoDB data and indexes you have, it will grow to about that size and then simply stay at that size.
The innodb_buffer_pool_size should not be so large that you run out of RAM. When that happens, swapping occurs. If there is no swap space, then the process is killed.
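As a rule of thumb (my addition, not from the answer above), many guides suggest sizing innodb_buffer_pool_size to roughly 70% of RAM on a dedicated database server; a tiny helper sketch:

```python
# Hypothetical helper: common ~70%-of-RAM guideline for a dedicated DB server,
# leaving the rest for the OS, per-thread buffers, and other caches.
def suggested_buffer_pool_gb(installed_ram_gb, fraction=0.70):
    return installed_ram_gb * fraction

print(f"{suggested_buffer_pool_gb(32):.1f}G")  # 22.4G on a 32 GB server
```

Adjust the fraction downward if anything else of consequence runs on the same machine.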