MySQL not releasing memory

Tags: innodb, memory, mysql

MySQL seems to want to keep an entire table in cache (table size = ~20GB) after any large inserts or select statements are performed on it. Right now my innodb buffer pool is 20GB. Total RAM is 32GB. I will provide some memory usage and output from innodb status as well as output from mysqltuner. It's been driving me nuts for the past few days. Please help! I appreciate any feedback and please let me know if you need more information.

Also, performing a 'FLUSH TABLES' just closes and re-opens them in memory. At least I think that's what is happening. Here's the current InnoDB memory status before I performed a bunch of inserts:

----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 21978152960; in additional pool allocated 0
Dictionary memory allocated 6006471
Buffer pool size   1310719
Free buffers       347984
Database pages     936740
Old database pages 345808
Modified db pages  0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 78031, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 551887, created 384853, written 4733512
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 936740, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]

mysqld percent memory usage: 60.9%

mysqld percent memory usage after inserts (1 mil records): 63.3%

and then after more inserts (3 mil records): 70.2%

Shouldn't it cap out at about 62.5% (20GB out of 32GB total RAM)?

output from top sorting my MEM usage:

top - 14:30:56 up 23:25,  3 users,  load average: 3.63, 2.31, 1.91
Tasks: 208 total,   4 running, 204 sleeping,   0 stopped,   0 zombie
Cpu(s): 96.0%us,  3.0%sy,  0.0%ni,  0.0%id,  1.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  28821396k total, 28609868k used,   211528k free,   138696k buffers
Swap: 33554428k total,    30256k used, 33524172k free,  1208184k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1228 mysql     20   0 25.1g  19g 5512 S   31 70.2  62:01.10 mysqld

here's the innodb memory output after these inserts were performed:

----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 21978152960; in additional pool allocated 0
Dictionary memory allocated 6006471
Buffer pool size   1310719
Free buffers       271419
Database pages     1011886
Old database pages 373510
Modified db pages  4262
Pending reads 1
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 82521, not young 0
7.08 youngs/s, 0.00 non-youngs/s
Pages read 585218, created 426667, written 5192189
24.08 reads/s, 53.08 creates/s, 1135.07 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 1011886, unzip_LRU len: 0
I/O sum[0]:cur[266], unzip sum[0]:cur[0]

According to the InnoDB status, the total memory allocated is the same, yet my OS (a virtual Ubuntu Server 12.04) is reporting more memory usage than that. The usage never drops back down, which is what I mean by the MySQL service not 'releasing' memory. Any suggestions?

output from mysqltuner.pl:

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
[--] Data in MyISAM tables: 226M (Tables: 287)
[--] Data in InnoDB tables: 33G (Tables: 1000)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[--] Data in MEMORY tables: 0B (Tables: 1)
[!!] Total fragmented tables: 959

-------- Security Recommendations  -------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------
[--] Up for: 23h 14m 27s (1M q [14.603 qps], 6K conn, TX: 16B, RX: 1B)
[--] Reads / Writes: 46% / 54%
[--] Total buffers: 22.2G global + 2.7M per thread (151 max threads)
[OK] Maximum possible memory usage: 22.6G (82% of installed RAM)
[OK] Slow queries: 0% (6/1M)
[OK] Highest usage of available connections: 6% (10/151)
[OK] Key buffer size / total MyISAM indexes: 2.0G/58.7M
[OK] Key buffer hit rate: 100.0% (216M cached / 38K reads)
[OK] Query cache efficiency: 81.2% (799K cached / 984K selects)
[!!] Query cache prunes per day: 5561
[OK] Sorts requiring temporary tables: 4% (819 temp sorts / 16K sorts)
[!!] Temporary tables created on disk: 27% (6K on disk / 22K total)
[OK] Thread cache hit rate: 99% (11 created / 6K connections)
[!!] Table cache hit rate: 0% (97 open / 10K opened)
[OK] Open file limit used: 12% (129/1K)
[OK] Table locks acquired immediately: 99% (433K immediate / 433K locks)
[!!] InnoDB  buffer pool / data size: 20.0G/33.6G
[OK] InnoDB log waits: 0
-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Enable the slow query log to troubleshoot bad queries
    When making adjustments, make tmp_table_size/max_heap_table_size equal
    Reduce your SELECT DISTINCT queries without LIMIT clauses
    Increase table_cache gradually to avoid file descriptor limits
    Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
Variables to adjust:
    query_cache_size (> 128M)
    tmp_table_size (> 128M)
    max_heap_table_size (> 16M)
    table_cache (> 431)
    innodb_buffer_pool_size (>= 33G)

Best Answer

First of all, take a look at the InnoDB architecture (courtesy of Percona CTO Vadim Tkachenko):

(diagram: InnoDB architecture)

Your status for the Buffer Pool says

Buffer pool size 1310719

That's your Buffer Pool size in pages. Each page is 16K, so 1310719 pages works out to 20G minus 16K (one page shy of 20G).
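
As a quick sanity check, the same FORMAT()/POWER() arithmetic used further down confirms it (plain math, nothing engine-specific; BufferPoolGB is just a column alias):

mysql> select FORMAT((1310719 * 16384) / power(1024,3),3) BufferPoolGB;
+--------------+
| BufferPoolGB |
+--------------+
| 20.000       |
+--------------+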

Please note the following: you pushed data into the InnoDB Buffer Pool. What changed?

Buffer pool size   1310719 
Free buffers       271419 (It was 347984)
Database pages     1011886 (It was 936740)
Old database pages 373510 (It was 345808)
Modified db pages  4262 (It was 0)

Also, note the difference between the Buffer Pool Size and the Database Pages (both counted in 16K pages):

1310719 (Buffer pool size) - 1011886 (Database pages) = 298833

That's 298833 InnoDB pages. How much space is that ???

mysql> select FORMAT(((1310719  - 1011886) * 16384) / power(1024,3),3) SpaceUsed;
+-----------+
| SpaceUsed |
+-----------+
| 4.560     |
+-----------+

That's 4.56GB. That space is used for the Insert Buffer section of the InnoDB Buffer Pool (a.k.a. the Change Buffer). It buffers changes to nonunique secondary indexes, and it lives inside the System Tablespace file (which we have all come to know as ibdata1).
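
If you want to see the Insert Buffer's footprint directly, look at the INSERT BUFFER AND ADAPTIVE HASH INDEX section of the same status output that produced the BUFFER POOL AND MEMORY sections above; its seg size is counted in 16K pages (values elided here, they will be whatever your server reports):

mysql> SHOW ENGINE INNODB STATUS\G
...
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size ..., free list len ..., seg size ...
...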

The InnoDB Storage Engine manages the Buffer Pool's internals, so the Buffer Pool by itself will never surpass 62.5% of RAM (20G out of 32G). What is more, the RAM grabbed for the Buffer Pool is never given back to the OS.
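
Side note: if you ever want to hand part of that 20G back to the OS, resizing the Buffer Pool online only exists as of MySQL 5.7.5; on anything older (which I assume you are running) the only route is lowering innodb_buffer_pool_size in my.cnf and restarting. On 5.7.5+ it would be a one-liner; the 16G below is purely an illustrative value:

SET GLOBAL innodb_buffer_pool_size = 16 * 1024 * 1024 * 1024;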

WHERE IS THE 70.2% OF RAM COMING FROM ???

Look back at the output of mysqltuner.pl at these lines

[OK] Maximum possible memory usage: 22.6G (82% of installed RAM)
[OK] Key buffer size / total MyISAM indexes: 2.0G/58.7M
[--] Total buffers: 22.2G global + 2.7M per thread (151 max threads)

mysqld has three major ways of allocating RAM

Any small spike in DB connections will push RAM usage past the 62.5% threshold you calculated for InnoDB alone.
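
Here is roughly the arithmetic behind mysqltuner's 'Maximum possible memory usage' line, using the figures it printed above (plain math; MaxPossibleGB is just a column alias):

mysql> select FORMAT(22.2 + (2.7 / 1024) * 151, 1) MaxPossibleGB;
+---------------+
| MaxPossibleGB |
+---------------+
| 22.6          |
+---------------+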

MyISAM (Side Note)

What catches my eye is

Key buffer size / total MyISAM indexes: 2.0G/58.7M

Since you have so few MyISAM indexes (58.7M in total), you could set key_buffer_size to 64M.

You do not need to restart mysql for that. Just run

SET GLOBAL key_buffer_size = 1024 * 1024 * 64;

Then, modify this in my.cnf

[mysqld]
key_buffer_size = 64M

This will give back about 2GB of RAM to the OS. Your VM will simply love you for it !!!
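
If you want to double-check the total MyISAM index footprint yourself before (or after) shrinking the key buffer, a quick information_schema query does it (just a sketch; index_length is reported in bytes):

mysql> select FORMAT(SUM(index_length) / power(1024,2), 1) MyISAMIndexMB
    -> from information_schema.tables where engine = 'MyISAM';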

Give it a Try !!!

CAVEAT

Running FLUSH TABLES on InnoDB tables simply closes the file handles on the .ibd files. It does not really push changes out directly; the changes have to migrate their way through the pipes of InnoDB. This is why you see the spike in Modified db pages. The 4262 changed pages (66.59 MB) get flushed when InnoDB schedules its next flush.
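
That 66.59 MB figure is the same page arithmetic as before, just expressed in megabytes:

mysql> select FORMAT((4262 * 16384) / power(1024,2),2) ModifiedMB;
+------------+
| ModifiedMB |
+------------+
| 66.59      |
+------------+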