MySQL – Slower queries after upgrade from mysql-5.5 to mysql-5.6

amazon-rds, mysql-5.6, performance, upgrade

A lot has been written about how MySQL 5.6 is faster than MySQL 5.5 (e.g. http://dev.mysql.com/tech-resources/articles/mysql-5.6.html), but we upgraded our Amazon RDS instance (using the new upgrade procedure) and found that the average time spent in the database per page view got worse, from 30ms to 50ms – a significant slowdown.

This is a chart (from New Relic) of the time spent in the database per page view (approx. 1000 queries per second):
[Chart: time in database per page view, before and after the MySQL upgrade]

The slowdown seems to be on simple UPDATE statements. I wouldn't expect them to be a problem because the primary key is used. This example shows a statement that took 6 seconds to execute even though the explain plan is optimal (rows in table: 700k):

mysql> EXPLAIN UPDATE users SET type='dual', updated_at=NOW() where user_id=1234;
+----+-------------+---------+-------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table   | type  | possible_keys | key     | key_len | ref   | rows | Extra       |
+----+-------------+---------+-------+---------------+---------+---------+-------+------+-------------+
|  1 | SIMPLE      | users   | range | PRIMARY       | PRIMARY | 4       | const |    1 | Using where |
+----+-------------+---------+-------+---------------+---------+---------+-------+------+-------------+

updated_at=NOW() seems to be a common theme in the newly slow statements.

Are there any reasons UPDATEs would be slower with MySQL 5.6? Does anyone have any clues about what I could check to debug this slowdown?


Other info: instance size: db.m2.4xlarge; query_cache is off; performance_schema is off. Exact versions: was 5.5.27, now 5.6.16.
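
Since this is RDS I have no OS-level access, so the only server-side checks I can run go through the mysql client. For reference, this is the sort of thing I am able to look at (a minimal sketch; the endpoint and credentials are placeholders, and the choice of counters is just my guess at what is relevant):

#!/bin/sh
# Placeholder RDS endpoint and user; substitute the real values.
HOST=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com

# Pending InnoDB data-file and redo-log writes: values that stay above zero
# suggest the storage layer is not keeping up with the write load. The
# SEMAPHORES and FILE I/O sections of the InnoDB monitor output show the same
# thing in more detail.
mysql -h "$HOST" -u admin -p -e "
  SHOW GLOBAL STATUS LIKE 'Innodb_data_pending%';
  SHOW GLOBAL STATUS LIKE 'Innodb_os_log_pending%';
  SHOW ENGINE INNODB STATUS\G"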

Best Answer

The problem was that our new RDS instance was being throttled on write IOPS: we could see queued disk operations (DiskQueueDepth of 10 or greater) and WriteIOPS averaging 300, whereas Amazon only provides 100 reliable IOPS (source).
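
For anyone hitting the same thing: those numbers come from CloudWatch, not from MySQL itself. A rough sketch of pulling them with the AWS CLI, assuming it is installed and configured (the instance identifier and the time window are placeholders):

#!/bin/sh
# Sketch: the two CloudWatch metrics that showed the throttling.
# "mydb" and the time window are placeholders.
for METRIC in WriteIOPS DiskQueueDepth; do
  aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name "$METRIC" \
    --dimensions Name=DBInstanceIdentifier,Value=mydb \
    --start-time 2014-03-01T00:00:00Z \
    --end-time 2014-03-02T00:00:00Z \
    --period 300 \
    --statistics Average
done
# Sustained WriteIOPS well above what the storage reliably delivers, together
# with a growing DiskQueueDepth, is what pointed at the disk as the bottleneck.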

The solution was to fork out approx. $200 per month for Provisioned IOPS; 1000 IOPS is the smallest amount you can buy (which easily covers us). After this change, which took 5 hours to complete on 350 GB of storage (but only about 30s of downtime with a multi-AZ instance), everything was fine and our MySQL 5.6 instance ran better than the original MySQL 5.5 instance, as all the literature promises.
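For completeness, the same change can be requested from the AWS CLI instead of the console. This is only a sketch with a placeholder instance identifier; depending on your setup the console may be the simpler route:

#!/bin/sh
# Sketch: request 1000 Provisioned IOPS on the instance ("mydb" is a placeholder).
# With --apply-immediately the modification starts right away instead of waiting
# for the next maintenance window; the conversion itself still runs in the
# background (about 5 hours in our case, as described above).
aws rds modify-db-instance \
  --db-instance-identifier mydb \
  --iops 1000 \
  --apply-immediately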

In short, the new RDS instance we were given when migrating to 5.6 was simply not as good as our original one, which was bad luck. We asked Amazon support, "Why would there be a difference between mysql 5.5 and mysql 5.6?" Their reply:

Your previous underlying host may have been benefiting from unusually light use on the underlying volumes. You may have grown accustomed to very light use which allowed you to consume additional IOPs without accumulating as much DiskQueueDepth. However, when you moved to a new underlying host by upgrading you may now be experiencing more typical use of the underlying volumes. With typical use of the underlying volumes it's not recommended to consistently exceed the amount of reliable IOPs while expecting to maintain the lower DiskQueueDepth that you've grown accustomed to. However, there is no way to confirm or deny that this was the cause.