MySQL – Struggling to debug high CPU usage on Amazon RDS MySQL instance


We're running an m1.xlarge MySQL RDS instance and having issues with high CPU usage. A couple of weeks ago we had problems with CPU utilization reaching 100% on a large instance; upgrading to an xlarge stabilised things for a while, but CPU usage gradually crept up again.

For the last week or so CPU utilization has been in the high 90s, and yesterday it sat at or near 100% consistently, which brought our production site to a halt. After rebooting the db server, CPU usage climbed back up to the same levels within hours.

I've run show processlist on the MySQL server and have been monitoring the same through MySQL Admin. There don't seem to be any particularly long-running queries, nor a high volume of queries. There are a couple of processes that have been in the sleep state for a long time; these are isolated worker daemons that run outside of our main app and communicate with the database. I've copied the processlist output below, with server names changed to describe what they are:

+------+----------+---------------------------------------------------+--------------+---------+-------+--------------+----------------------------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+------+----------+---------------------------------------------------+--------------+---------+-------+--------------+----------------------------------------------------------------------------------------+
| 13 | rdsadmin | localhost:43513 | mysql | Sleep | 14 | | NULL |
| 15 | proddbuser | app-server-1.eu-west-1.compute.internal:36460 | proddb | Sleep | 46 | | NULL |
| 451 | proddbuser | app-server-1.eu-west-1.compute.internal:55512 | proddb | Sleep | 29 | | NULL |
| 912 | proddbuser | app-server-1.eu-west-1.compute.internal:45171 | proddb | Sleep | 13 | | NULL |
| 941 | proddbuser | app-server-1.eu-west-1.compute.internal:47353 | proddb | Sleep | 53 | | NULL |
| 951 | proddbuser | app-server-1.eu-west-1.compute.internal:48014 | proddb | Sleep | 37 | | NULL |
| 1009 | proddbuser | app-server-1.eu-west-1.compute.internal:51787 | proddb | Sleep | 36 | | NULL |
| 1041 | proddbuser | app-server-1.eu-west-1.compute.internal:53777 | proddb | Sleep | 14 | | NULL |
| 1572 | proddbuser | app-server-1.eu-west-1.compute.internal:42989 | proddb | Sleep | 3 | | NULL |
| 1592 | proddbuser | app-server-1.eu-west-1.compute.internal:43279 | proddb | Sleep | 162 | | NULL |
| 2909 | proddbuser | app-server-1.eu-west-1.compute.internal:37768 | proddb | Sleep | 35 | | NULL |
| 3028 | proddbuser | app-server-1.eu-west-1.compute.internal:42568 | proddb | Sleep | 5 | | NULL |
| 3119 | proddbuser | app-server-1.eu-west-1.compute.internal:46913 | proddb | Sleep | 76 | | NULL |
| 3189 | proddbuser | app-server-1.eu-west-1.compute.internal:51466 | proddb | Sleep | 5 | | NULL |
| 3216 | proddbuser | app-server-2.eu-west-1.compute.internal:44097 | proddb | Sleep | 14552 | | NULL |
| 3218 | proddbuser | app-server-2.eu-west-1.compute.internal:44099 | proddb | Sleep | 14552 | | NULL |
| 3219 | proddbuser | app-server-2.eu-west-1.compute.internal:44107 | proddb | Sleep | 44 | | NULL |
| 3220 | proddbuser | app-server-2.eu-west-1.compute.internal:44113 | proddb | Sleep | 26 | | NULL |
| 3223 | proddbuser | app-server-2.eu-west-1.compute.internal:44184 | proddb | Sleep | 50 | | NULL |
| 3224 | proddbuser | app-server-2.eu-west-1.compute.internal:44187 | proddb | Sleep | 1 | | NULL |
| 3226 | proddbuser | app-server-2.eu-west-1.compute.internal:44208 | proddb | Sleep | 33 | | NULL |
| 3229 | proddbuser | app-server-2.eu-west-1.compute.internal:44250 | proddb | Sleep | 14 | | NULL |
| 3232 | proddbuser | app-server-2.eu-west-1.compute.internal:44279 | proddb | Sleep | 26 | | NULL |
| 3233 | proddbuser | app-server-2.eu-west-1.compute.internal:44297 | proddb | Sleep | 31 | | NULL |
| 3237 | proddbuser | app-server-2.eu-west-1.compute.internal:44334 | proddb | Sleep | 27 | | NULL |
| 3239 | proddbuser | app-server-2.eu-west-1.compute.internal:44338 | proddb | Sleep | 11 | | NULL |
| 3241 | proddbuser | app-server-2.eu-west-1.compute.internal:44356 | proddb | Sleep | 26 | | NULL |
| 3260 | proddbuser | app-server-2.eu-west-1.compute.internal:44619 | proddb | Sleep | 8 | | NULL |
| 3337 | proddbuser | utility-server-1.eu-west-1.compute.internal:45193 | proddb | Query | 0 | Sending data | SELECT `mytable`.* FROM `mytable` WHERE `mytable`.`foreign_key` = 309416 LIMIT 1 |
| 3419 | proddbuser | utility-server-1.eu-west-1.compute.internal:46136 | proddb | Query | 0 | Sending data | SELECT `mytable`.* FROM `mytable` WHERE `mytable`.`foreign_key` = 284530 LIMIT 1 |
| 3463 | proddbuser | app-server-1.eu-west-1.compute.internal:59619 | proddb | Sleep | 9406 | | NULL |
| 3504 | proddbuser | utility-server-1.eu-west-1.compute.internal:47063 | proddb | Query | 0 | Sending data | SELECT `mytable`.* FROM `mytable` WHERE `mytable`.`foreign_key` = 260571 LIMIT 1 |
| 3577 | proddbuser | app-server-1.eu-west-1.compute.internal:34394 | proddb | Sleep | 6734 | | NULL |
| 3585 | proddbuser | utility-server-1.eu-west-1.compute.internal:47990 | proddb | Query | 0 | Sending data | SELECT `mytable`.* FROM `mytable` WHERE `mytable`.`foreign_key` = 231273 LIMIT 1 |
| 3664 | proddbuser | utility-server-1.eu-west-1.compute.internal:48909 | proddb | Query | 0 | Sending data | SELECT `mytable`.* FROM `mytable` WHERE `mytable`.`foreign_key` = 201525 LIMIT 1 |
| 3716 | proddbuser | app-server-2.eu-west-1.compute.internal:56301 | proddb | Sleep | 27 | | NULL |
| 3748 | proddbuser | utility-server-1.eu-west-1.compute.internal:49850 | proddb | Query | 0 | Sending data | SELECT `mytable`.* FROM `mytable` WHERE `mytable`.`foreign_key` = 167839 LIMIT 1 |
| 3771 | proddbuser | my-pc:30101 | NULL | Query | 0 | NULL | show processlist |
| 3831 | proddbuser | utility-server-1.eu-west-1.compute.internal:50785 | proddb | Query | 0 | Sending data | SELECT `mytable`.* FROM `mytable` WHERE `mytable`.`foreign_key` = 123228 LIMIT 1 |
+------+----------+---------------------------------------------------+--------------+---------+-------+--------------+----------------------------------------------------------------------------------------+

I should also say that traffic on the site is extremely low during this period relative to normal peak hours, around 10% of the load we see at peak times.

We also have New Relic monitoring, which shows us the most time-consuming database calls made by the app. It shows that one particular call, accounting for 99% of the time our app spends in the db, is a simple find-by-id query like this:

SELECT `mytable`.* FROM `mytable` WHERE `mytable`.`id` = 123 LIMIT 1

(not quite the same as the queries that were running in the above processlist)

This operation has got slower over the last week or so: the standard deviation of request times has increased, and the maximum time it takes is now measured in seconds. I think this is just a result of the CPU utilization problems rather than a cause.

This table has around 80,000 rows, so it isn't huge. It is expected that most of the app's time in the database is spent looking up records in this table, since the main functionality of the app is based around it. I've run a similar query myself a few times from my app server against the production database, while CPU usage remained around 100%, and it responded within 1 or 2 ms.
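For reference, an EXPLAIN of that lookup should confirm it really is a single-row primary key read rather than a scan (a minimal sketch, assuming `id` is the table's primary key):

-- Expect type: const and key: PRIMARY, i.e. a single-row
-- lookup via the primary key index.
EXPLAIN SELECT `mytable`.* FROM `mytable` WHERE `mytable`.`id` = 123 LIMIT 1;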

Based on all the above, we're not sure how to proceed with our debugging. Does anyone have ideas about what kind of things might be the root cause and how to investigate them? Access to the underlying server is limited since it's an Amazon RDS instance.

Best Answer

Managed to solve this; these are the steps I followed:

Firstly, I contacted the Amazon RDS team by posting on their discussion forum. They confirmed it was the mysqld process using all this CPU, which ruled out a fault with something else running on the physical server.

Secondly, I tracked down the source of the queries that were running:

SELECT `mytable`.* FROM `mytable` WHERE `mytable`.`foreign_key` = 231273 LIMIT 1 

I originally overlooked these as the cause, because none of them seemed to be taking particularly long when I monitored the show processlist output. After exhausting other avenues, I decided it might be worth following up... and I'm glad I did.
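With hindsight, one quick check that would have made these queries stand out is whether `foreign_key` is indexed at all (the schema isn't shown here). An unindexed lookup scans the whole ~80,000-row table, which is quick enough for each query to show Time 0 in the processlist but expensive when run at high volume. A rough sketch of the check:

-- An empty result here means there is no index on foreign_key.
SHOW INDEX FROM `mytable` WHERE Column_name = 'foreign_key';

-- type: ALL in this plan would mean every such lookup is a full table scan.
EXPLAIN SELECT `mytable`.* FROM `mytable` WHERE `mytable`.`foreign_key` = 231273 LIMIT 1;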

As you can see in the show processlist output, these queries were coming from a utility server, which runs some tactical utility jobs that exist outside of our main application code. This is why they were not showing up as slow or causing issues in our New Relic monitoring: the New Relic agent is only installed on our main app server.
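One server-side sanity check that might have exposed this blind spot is sampling the global statement counters, which count queries from every client regardless of where any monitoring agent is installed. A minimal sketch:

-- Sample the server-wide SELECT counter twice, ten seconds apart;
-- the difference divided by the interval gives total SELECTs/sec
-- from all clients, not just the instrumented app servers.
SHOW GLOBAL STATUS LIKE 'Com_select';
SELECT SLEEP(10);
SHOW GLOBAL STATUS LIKE 'Com_select';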

Loosely following this guide:

http://www.mysqlperformanceblog.com/2007/02/08/debugging-sleeping-connections-with-mysql/

I was able to trace these queries to a specific running process on our utility server box. It was a bit of Ruby code that was very inefficiently iterating through around 70,000 records, checking some field values and using those to decide whether it needed to create a new record in 'mytable'. After doing some analysis, I determined that the process was no longer needed, so it could be killed.
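To spell out the tracing step: as I understood the guide, you grab the client host and port of the suspect connections from the processlist and then, on that client box, match the port to a local process (for example with netstat). On the MySQL side it is something like this (the host pattern is just an example):

-- List connections from the utility server; the port in the HOST column
-- is the client-side port, which can then be matched to a PID on that box.
SELECT ID, USER, HOST, DB, COMMAND, TIME, INFO
FROM information_schema.PROCESSLIST
WHERE HOST LIKE 'utility-server-1%';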

Something that was making matters worse: there seemed to be six instances of this same process running at once, due to the way the cron job was configured and how long each run took! I killed off these processes, and incredibly, our CPU usage fell from around 100% to around 5%!
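As a follow-up, a simple way to stop several copies of a job like this from piling up is to have each run take a named MySQL lock at startup and exit immediately if it can't get it (cron-side locking with flock would work just as well). A rough sketch, with an arbitrary lock name:

-- Try to take an application-level lock without waiting (timeout 0).
-- Returns 1 if acquired, 0 if another run of the job already holds it.
SELECT GET_LOCK('mytable_sync_job', 0);

-- ... do the job's work only if the call above returned 1 ...

SELECT RELEASE_LOCK('mytable_sync_job');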