MySQL 5.5 Runs Out of Memory, Drops All Connections When Creating Many Databases

amazon-rds, innodb, MySQL, mysql-5.5

I'm running into a problem when I create a large number of databases consecutively on an Amazon RDS db.m1.large instance. The MySQL Engine Version is 5.5.23. At some point, the RDS instance drops all connections and refuses any new ones for a period of about 10-20 minutes.

The database engine is InnoDB. Each database is exactly the same and is created from a MySQL dump file. There are 65 tables, all of them empty except for one or two that contain a row or two, so the data size is very small. Foreign key constraints are turned off before the tables are created and turned back on afterwards for each database.

The following pseudo code essentially describes what I am doing:

For # of databases:  // around 1000-5000 usually results in a crash
   CREATE DATABASE foo_num CHARACTER SET utf8 COLLATE utf8_bin;  // issued through the Python Django connection
   mysql -u USER -h HOST --password=PASSWORD foo_num < dbcreate.sql  // load the schema into the new database

The dbcreate.sql looks similar to:

SET FOREIGN_KEY_CHECKS=0;
DROP TABLE IF EXISTS `FooTable1`;
CREATE TABLE `FooTable1` (
   ...
);
...
DROP TABLE IF EXISTS `FooTable65`;
CREATE TABLE `FooTable65` (
   ...
);
SET FOREIGN_KEY_CHECKS=1;

Here is a snapshot showing the connections dropping.

Connections Drop

I noticed, however, that the CPU is not being pinned before, during, or after the connections drop:

CPU looks fine...

However, I did notice the read throughput spikes during the connection drop:

Read throughput spikes during the connection drop

Update 1

Okay, so as Rolando suggested, RAM may be the limiting factor. The instance I am using has ~7GB of RAM available, and as I was looking through the monitoring graphs I noticed the freeable memory metric was around the 7GB mark at the same time the connections were dropped. See the attached chart:

Freeable memory peaks at connection drop times

The question is, why is there this sudden spike in RAM usage and can it be avoided without having to scale up the hardware?

Best Answer

The first thing that comes to mind is the server model: db.m1.large

What limiting factors are placed on a MySQL RDS?

If you spin up an Amazon RDS instance of MySQL, you are subject to whatever constraints come with the model you choose. All MySQL Amazon RDS models have the same major options and differ in only two settings:

  • max_connections
  • innodb_buffer_pool_size

Here is a chart I posted last month:

MODEL      max_connections  innodb_buffer_pool_size
---------  ---------------  -----------------------
t1.micro                34    326107136 (  311M)
m1-small               125   1179648000 ( 1125M,  1.097G)
m1-large               623   5882511360 ( 5610M,  5.479G)
m1-xlarge             1263  11922309120 (11370M, 11.103G)
m2-xlarge             1441  13605273600 (12975M, 12.671G)
m2-2xlarge            2900  27367833600 (26100M, 25.488G)
m2-4xlarge            5816  54892953600 (52350M, 51.123G)

I have posted this chart in other answers here on DBA StackExchange.
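
If you want to verify the numbers for your particular instance, both settings can be read straight from the server (just a quick sanity check; the values are whatever the RDS parameter group applied):

-- The two settings that vary by RDS model
SELECT @@max_connections                                   AS max_connections,
       @@innodb_buffer_pool_size                           AS buffer_pool_bytes,
       ROUND(@@innodb_buffer_pool_size/1024/1024/1024, 3)  AS buffer_pool_gb;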

You are using m1.large. Since the InnoDB buffer pool is set to roughly 3/4 of the instance's RAM, out of the ~7.2G you have only about 1.8G is usable for anything else. That model allows up to 623 connections, and each connection can consume memory on top of the buffer pool because of the following (a rough worst-case estimate is sketched after this list):

  • join buffers
  • sort buffers
  • read buffers
  • thread info
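
As a rough sketch, here is the worst case per connection, assuming each connection allocates one of each per-session buffer at the same time (not guaranteed, but it shows the order of magnitude):

-- Worst-case memory a single connection can grab on top of the buffer pool
SELECT ( @@read_buffer_size + @@read_rnd_buffer_size
       + @@sort_buffer_size + @@join_buffer_size
       + @@thread_stack ) / 1024 / 1024                        AS per_connection_mb,
       @@max_connections                                       AS max_connections,
       ROUND(( @@read_buffer_size + @@read_rnd_buffer_size
             + @@sort_buffer_size + @@join_buffer_size
             + @@thread_stack ) * @@max_connections
             / 1024 / 1024 / 1024, 2)                          AS worst_case_gb;

Even a fraction of the 623 allowed connections arriving at once eats into that ~1.8G of headroom quickly.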

Amazon RDS is simply micromanaging resources. Since DB connections consume RAM, new connections are probably being refused because the RAM they would need is not available.

SUGGESTIONS

  • Try reducing InnoDB log I/O by setting innodb_flush_log_at_trx_commit=0 for the duration of the mass creation (a sketch follows this list)
  • Make sure you are not running large transactions during any automatic backups or snapshots
  • Try a bigger server model, such as m1.xlarge or m2.xlarge
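
For the first suggestion, keep in mind that the RDS master user does not have SUPER, so you normally cannot SET GLOBAL this yourself; the value has to be changed through the instance's DB parameter group. Just as a sketch, this is what to check and what the change looks like on a server you fully control:

-- Check the current setting (1 = flush and sync the redo log at every commit, the safest and slowest)
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- On a self-managed server you could relax it only for the mass creation;
-- on RDS, set the same value in the DB parameter group instead
SET GLOBAL innodb_flush_log_at_trx_commit = 0;

Remember to put it back to 1 once the mass creation is done, since 0 means you can lose up to a second of transactions if the server crashes.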