MySQL 8.0.21: do long-lived prepared statements have a memory leak, or are we doing something weird?

innodb, memory, MySQL, mysql-5.6, mysql-8.0

— updated the question because it was easy to assume initially that I was asking for MySQL tuning tips, although some of the tips did help me narrow down the problem —

— further update: after doing only a partial upgrade on some servers, we found the problem occurs on MySQL 5.7 as well, so we're guessing it was some sort of change in behaviour between 5.6 and 5.7. It's still a bit weird to us that it didn't cause a problem in MySQL 5.6 —

We recently upgraded from MySQL 5.6 to MySQL 8.0 on a few servers. One of the servers was fine and has had no problems, but it has significantly less load than another of our servers, which has been running out of memory.

Our application launches, grabs 300 connections, and keeps them open with a C3P0 pool to the MySQL server.
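
For reference, a minimal sketch of that kind of pool setup, using c3p0's ComboPooledDataSource (the pool sizes, JDBC URL and credentials here are placeholders, not our exact configuration):

    import com.mchange.v2.c3p0.ComboPooledDataSource;

    public class PoolSetup {
        // Minimal c3p0 pool roughly matching the description above: 300 connections
        // opened at startup and kept alive. URL, user and password are placeholders.
        public static ComboPooledDataSource buildPool() throws Exception {
            ComboPooledDataSource pool = new ComboPooledDataSource();
            pool.setDriverClass("com.mysql.cj.jdbc.Driver");           // Connector/J 8.x
            pool.setJdbcUrl("jdbc:mysql://db.example.com:3306/appdb"); // placeholder URL
            pool.setUser("app");
            pool.setPassword("secret");
            pool.setInitialPoolSize(300); // grab 300 connections up front
            pool.setMaxPoolSize(300);     // and never go above that
            return pool;
        }
    }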

Over 4 days, this particular server accumulated 3500+ open prepared statements, some of them taking more than 300 MB in memory/sql/Prepared_statement::main_mem_root. It's normal for our server profile to have the connections / prepared statements open, and it hasn't been a problem on MySQL 5.6 or 5.7.
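
For anyone who wants to reproduce the measurement, this is roughly how we look at that memory event from the application side (plain JDBC against performance_schema; conn is assumed to be any connection with read access to performance_schema):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class StmtMemoryCheck {
        // Lists the threads holding the most memory under
        // memory/sql/Prepared_statement::main_mem_root.
        public static void dump(Connection conn) throws Exception {
            String sql =
                "SELECT thread_id, current_count_used, current_number_of_bytes_used " +
                "FROM performance_schema.memory_summary_by_thread_by_event_name " +
                "WHERE event_name = 'memory/sql/Prepared_statement::main_mem_root' " +
                "ORDER BY current_number_of_bytes_used DESC LIMIT 10";
            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("thread %d: %d live allocations, %d bytes%n",
                        rs.getLong(1), rs.getLong(2), rs.getLong(3));
                }
            }
        }
    }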

Looking that up in the MySQL docs: https://dev.mysql.com/doc/dev/mysql-server/latest/classPrepared__statement.html#a999eb56b28a5202c0fb1f5df96db3c74

I can see it's somehow related to prepared statements, but 'allocate parsed tree elements (instances of Item, SELECT_LEX and other classes)' doesn't tell me much. Is it caching results? Is that how it's growing?

We're using Connector/J 8.0.18. I've looked at the release notes for 8.0.18 -> 8.0.23 and there aren't any obvious memory leak fixes.

Our connection parameters include:

    props.setProperty("cachePrepStmts", "true");
    props.setProperty("useServerPrepStmts", "true");
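
For completeness, this is roughly how those properties get applied (the prepStmtCacheSize / prepStmtCacheSqlLimit values below are additional Connector/J knobs with illustrative numbers, not our current settings; URL and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ConnProps {
        public static Connection open() throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "app");                // placeholder credentials
            props.setProperty("password", "secret");
            props.setProperty("cachePrepStmts", "true");     // client-side statement cache
            props.setProperty("useServerPrepStmts", "true"); // real server-side PREPARE
            // Illustrative values only: each connection gets its own cache, so with
            // 300 pooled connections the server can end up holding roughly
            // 300 * prepStmtCacheSize prepared statements at once.
            props.setProperty("prepStmtCacheSize", "25");
            props.setProperty("prepStmtCacheSqlLimit", "2048");
            return DriverManager.getConnection("jdbc:mysql://db.example.com:3306/appdb", props);
        }
    }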

We were running these servers on AWS on MySQL 5.6 with the same overridden parameters on 8 GB of RAM. When we upgraded to MySQL 8.0.21 we started running out of RAM in about a day. We grew the server to 32 GB but didn't change the parameters; it's gone over 15 GB used and is still going up.

We're pretty sure it's related to per-connection thread memory, but not sure why.

Looking at memory_by_thread_by_current_bytes, we have connections using over 200 MB.
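
To see which application connection a hungry thread belongs to, something like this should work (a sketch; it uses the raw x$ sys view so the byte counts sort numerically, and assumes the monitoring user can read sys and performance_schema):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ThreadMemoryCheck {
        // Maps the biggest per-thread memory consumers back to the connection id
        // you would see in SHOW PROCESSLIST.
        public static void dump(Connection conn) throws Exception {
            String sql =
                "SELECT t.processlist_id, t.processlist_user, m.current_allocated " +
                "FROM sys.x$memory_by_thread_by_current_bytes m " +
                "JOIN performance_schema.threads t ON t.thread_id = m.thread_id " +
                "ORDER BY m.current_allocated DESC LIMIT 10";
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("connection %d (%s): %d bytes%n",
                        rs.getLong(1), rs.getString(2), rs.getLong(3));
                }
            }
        }
    }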

The server is running MySQL 8.0.21 on an AWS RDS m4.large.

I've audited the code and all ResultSets are being closed. The prepared statements are closed in code as well, but the cache behaviour above keeps them open.
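
For what it's worth, the statement usage in the code is essentially this pattern (table and column names are made up for the example); my understanding is that with cachePrepStmts=true, close() returns the statement to Connector/J's per-connection cache instead of deallocating it on the server:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class Dao {
        // Typical query pattern: statement and result set both closed via
        // try-with-resources. With cachePrepStmts=true, close() hands the
        // PreparedStatement back to the connection's cache, so the server-side
        // prepared statement stays alive for reuse.
        public static String findName(Connection conn, long id) throws Exception {
            String sql = "SELECT name FROM customer WHERE id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }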

Normally the InnoDB buffer pool is 4 GB, but we dropped it to 3 GB when we were on the 8 GB machine, for a bit more headroom.

The gist below contains: show global status, memory_summary_by_thread_by_event_name, a MySQL tuner run, show engine innodb status, show variables, sys.memory_by_thread_by_current_bytes, and sys.memory_global_by_current_bytes:
https://gist.github.com/DaveB93/138f6bac254fee5bbbbb7ce2af7c2fef

— update —

We 'fixed' this by connecting to our application via JMX and changing the C3P0 connection pool setting maxIdleTimeExcessConnections to 300 (up from 0).

This cleaned up all of the long-lived connections and freed up 10 GB of RAM. This doesn't seem like the long-term solution I want, though.
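
For reference, the programmatic equivalent of that JMX change is roughly this (a sketch against c3p0's ComboPooledDataSource; 300 is seconds of idle time, and it only culls connections above minPoolSize):

    import com.mchange.v2.c3p0.ComboPooledDataSource;

    public class PoolTuning {
        // Rough equivalent of the JMX change described above: idle connections
        // beyond minPoolSize are closed after 300 seconds, which also frees the
        // server-side prepared statements those connections were holding.
        public static void applyIdleCulling(ComboPooledDataSource pool) {
            pool.setMaxIdleTimeExcessConnections(300);
        }
    }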

Best Answer

A possible reason for the RAM consumption can be seen here:
[!!] Temporary tables created on disk: 85% (249K on disk / 290K total)
When an internal temporary table is created automatically, it is always created with ENGINE=MEMORY first. If it doesn't fit within the max_heap_table_size = 134217728 limit, it is converted to an on-disk table for further processing. Temp tables are associated with a session/connection, so each thread can create a number of 128 MB temp tables in RAM, in addition to the buffers allocated for reads/joins/sorts.

I suggest decreasing max_heap_table_size to some reasonable value like 16M, since in 85% of cases the temp tables don't fit in 128M anyway. Another suggestion is to optimize the queries with the longest execution times (on-disk temp table operations are slow). The slow query log is a good starting point.
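
If you want to track whether this improves, the percentage the tuner reports comes from two global status counters; a rough way to recompute it from the application side (plain JDBC, counters are cumulative since the last server restart):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TmpTableRatio {
        // Recomputes the "temporary tables created on disk" percentage from the
        // same global status counters the tuner uses.
        public static void print(Connection conn) throws Exception {
            String sql = "SHOW GLOBAL STATUS WHERE Variable_name IN " +
                         "('Created_tmp_disk_tables', 'Created_tmp_tables')";
            long disk = 0, total = 0;
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    if (rs.getString(1).equalsIgnoreCase("Created_tmp_disk_tables")) {
                        disk = rs.getLong(2);
                    } else {
                        total = rs.getLong(2);
                    }
                }
            }
            System.out.printf("on-disk temp tables: %d of %d (%.0f%%)%n",
                disk, total, total == 0 ? 0.0 : 100.0 * disk / total);
        }
    }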
