The mysql service takes too much memory

MySQL

I have MySQL 5.6.19 on a Linux server, and it takes about 8 GB of memory. That is much more than I expect.

There are about 150 databases, each with 700 tables. All of them are InnoDB. We do not care about their safety (durability).

SHOW ENGINE INNODB STATUS reports: Total memory allocated 4395630592.
So I can't understand why MySQL takes an additional 3.5 GB. Any suggestions?

my.cnf    
innodb_log_buffer_size = 100M    
innodb_log_file_size=1000M    
innodb_flush_log_at_trx_commit=0    
innodb_open_files=50000  
innodb_flush_method=O_DIRECT_NO_FSYNC  
innodb_force_recovery=0  
innodb_buffer_pool_size=4G  
innodb_flush_log_at_timeout=5  
innodb_stats_on_metadata=OFF  
innodb_checksums=OFF  
innodb_thread_concurrency=0  
innodb_read_io_threads=64  
innodb_write_io_threads=64  
innodb_file_per_table=0  
innodb_sort_buffer_size=2M  
max_connections=400  
tmp_table_size=20M  
max_heap_table_size=20M  
innodb_ft_total_cache_size=64000000  
innodb_ft_result_cache_limit=100000000  

interactive_timeout=1500  
wait_timeout = 300  

open_files_limit=50000  
table_open_cache=20000  
table_definition_cache=20000  

Best Answer

There are many things wrong with your configuration and running status:

With your configuration, you need about 1.5 GB of static memory allocation before running a single query, before the buffer pool is even used and before any connection is made. Most of it (about 1 GB) goes to performance_schema, but there are still some 500 MB allocated (versus the 42K of the default configuration).
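If you want to verify that on your own instance, performance_schema can report its own memory accounting; the final performance_schema.memory row of the output below shows the total bytes it has allocated (a quick check, not a full audit):

    -- performance_schema reports its own allocations; the last row is the total
    SHOW ENGINE PERFORMANCE_SCHEMA STATUS;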

You also create quite a high number of temporary tables (180 tables/s), half of them in memory; I would bet that the other 90 are created in memory first and then copied to disk. With tmp_table_size/max_heap_table_size at 20 MB, that can account for up to 1.8 GB of temporary tables per second in your setup.
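The rate can be confirmed from the standard server counters; sample them twice, take the difference and divide by the elapsed seconds to get tables per second:

    -- In-memory vs. on-disk temporary tables created since startup
    SHOW GLOBAL STATUS LIKE 'Created_tmp%';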

Your queries perform a non-negligible number of full joins and full table scans, reading about 26K rows per second at random and sorting about 3K rows per second. In summary, you end up spending a lot of memory on read buffers, join buffers and sort buffers, and that is with 90 simultaneous connections. 90 is not a high number for MySQL, but the kind of queries you are executing, which examine lots of rows, needs more resources per connection.
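Those figures come straight from the usual counters: a full join increments Select_full_join, a table scan increments Select_scan, and random row reads show up in Handler_read_rnd_next (again, compute rates by sampling twice):

    SHOW GLOBAL STATUS
    WHERE Variable_name IN ('Select_full_join', 'Select_scan',
                            'Handler_read_rnd', 'Handler_read_rnd_next', 'Sort_rows');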

You say that you have about 100K tables (150 databases × 700 tables), which usually inflates the InnoDB data dictionary (146 MB in your case) and puts pressure on the table_open_cache. You have increased some of the caches to cope with that, but I do not agree that this is the right way to proceed.
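Both numbers can be checked without extra tooling: the dictionary size is printed in the BUFFER POOL AND MEMORY section of the InnoDB status output, and a constantly growing Opened_tables counter means the table cache is churning:

    -- Look for the "Dictionary memory allocated" line in the output
    SHOW ENGINE INNODB STATUS\G
    -- Currently open tables vs. tables opened since startup
    SHOW GLOBAL STATUS LIKE 'Open%tables';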

Aside from that, you have lots of connections running SHOW STATUS commands, which is typical of an ORM/framework. That is not worrying in 5.6, where metadata commands no longer update the InnoDB statistics automatically (and you have innodb_stats_on_metadata=OFF anyway), but from it I can derive that the queries you are executing are not the most optimal ones.
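While performance_schema is still enabled, you can let it do part of the audit for you: the statement digest table available in 5.6 aggregates normalized statements, so the chattiest callers and the heaviest queries stand out. Treat this as a sketch, not a full audit:

    -- Top statements by total execution time (SUM_TIMER_WAIT is in picoseconds)
    SELECT DIGEST_TEXT, COUNT_STAR,
           SUM_TIMER_WAIT/1e12 AS total_seconds,
           SUM_ROWS_EXAMINED, SUM_CREATED_TMP_DISK_TABLES
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY SUM_TIMER_WAIT DESC
    LIMIT 10;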

So, my recommendations:

  • Turn down most of the memory-related, session-only or query-only variables (e.g. tmp_table_size); a configuration sketch follows the next paragraph.
  • Turn down the table-cache related variables.
  • Reduce the concurrency by setting up a pool of connections (or reconfiguring the current one). This will allow you to turn down max_connections without affecting the application throughput.

These changes will increase latency, but that is the price of sustaining more concurrency with the same memory. If you want more concurrency with the same memory resources, you will have to split your tables over several servers (aka sharding). In general, there are many things I do not like about your configuration file, but I am not bold enough to tell you to change them without evaluating them myself.
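As a starting point for the first two bullets, something along these lines; the values are illustrative only and have to be validated against your own workload, not copied blindly:

    # per-session / per-query buffers: keep small defaults, raise per session when needed
    tmp_table_size         = 16M
    max_heap_table_size    = 16M
    sort_buffer_size       = 256K
    join_buffer_size       = 256K
    read_rnd_buffer_size   = 256K
    # size the table caches for what is actually touched, not for all 100K tables
    table_open_cache       = 4000
    table_definition_cache = 4000
    # with a connection pool in front, far fewer server-side connections are needed
    max_connections        = 100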

  • Turn off performance_schema (it will save you 1 GB of memory for your configured number of connections); see the snippet after this list.
  • Fix your queries so that they use indexes, or create the appropriate indexes. This is probably the first step.
  • Avoid using MySQL where you do not need to: without seeing your queries, something tells me that most of them could easily be cached at the application or middleware layer. 5.6 even comes with a free memcached server!
  • Watch for the rollbacks you are executing (1 was running at the time of the statistics gathering). Rollbacks are very expensive resource-wise (memory and CPU).
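For the performance_schema bullet, the change is a single my.cnf line followed by a server restart; this alone should recover roughly the 1 GB mentioned above, at the cost of losing the instrumentation used for queries like the digest one earlier:

    # not a dynamic variable: requires a restart to take effect
    performance_schema = OFF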

For more detailed information, I would need to perform a query audit.