MySQL – Monitoring slow queries on a large database (Percona 5.6)

Tags: monitoring, MySQL, Percona, slow-log

I am looking for a solution to monitor a very large production database (many replicas, over 200 tables, some tables holding 10-50 GB of data) with a lot of traffic and a wide range of query complexity.

The main goal is to keep an eye on heavy queries after application code changes. We have experienced new code introducing non-optimal SQL queries.

The database dump is huge, so we use only a sample of it on staging environments. That makes it almost impossible to detect performance issues on staging.

I thought about using something like New Relic, but for such a big infrastructure the costs are too high, and I am not sure whether it affects performance negatively.
On the other hand, I can use the MySQL slow query log. I enabled it on some replicas with a log rate limit. After one month I have not seen any I/O speed impact, but I am not sure about enabling it on all replicas and on the master.
In addition, I found this article http://blog.symedia.pl/2016/10/performance-impact-general-slow-query-log.html and it says something different from what I have read in the past.
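For context, a rate-limited slow-log setup on Percona Server 5.6 looks roughly like this (the thresholds below are illustrative examples, not my exact values; `log_slow_rate_type` and `log_slow_rate_limit` are Percona-specific variables):

```shell
# Enable the slow query log on one replica with Percona's sampling feature.
# Connection details are assumed to come from ~/.my.cnf or similar.
mysql -e "
  SET GLOBAL slow_query_log      = 1;
  SET GLOBAL long_query_time     = 1;        -- log queries slower than 1 s (example)
  SET GLOBAL log_slow_rate_type  = 'query';  -- Percona-specific: sample per query
  SET GLOBAL log_slow_rate_limit = 100;      -- keep roughly 1 in 100 (example)
"
```

Sampling keeps the log's I/O overhead bounded while still catching a representative slice of the heavy queries.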

What would you do to monitor a large database?

Best Answer

You can use pt-query-digest --processlist

While there are four different ways to use it, IMHO the best way to use it is to poll the processlist and collect metrics every hour. This method does not need a slow query log.
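A minimal sketch of an hourly poll, assuming it runs from cron on or near a replica (the host, user, password, and output path are placeholders):

```shell
# Poll SHOW PROCESSLIST once per second for one hour and aggregate the
# observed queries into a digest report; no slow query log is required.
# DSN values (h=, u=, p=) and the report path are illustrative only.
pt-query-digest --processlist h=replica1,u=monitor,p=secret \
  --run-time 1h --interval 1 \
  > /var/log/pt-digest/$(date +%F-%H).report
```

Scheduled hourly from cron, this yields a rolling set of reports you can diff after each application deploy to spot newly introduced heavy queries.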

I have suggested this before (See mysql 5.7 general_log table has user_host but the file does not)

I have promoted this idea many times over the years.