(MySQL Version 5.5.58)
I have two tables, `articles` and `comments`. `articles` has 1K rows, while `comments` has 100K rows:
CREATE TABLE `articles` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`text` text COLLATE utf8_unicode_ci,
`type` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
)
CREATE TABLE `comments` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`text` mediumtext COLLATE utf8_unicode_ci,
`result` varchar(255) COLLATE utf8_unicode_ci,
`article_id` int(11) DEFAULT '0',
`user_id` int(11) DEFAULT '0',
`spec_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `index_comments_on_result` (`result`),
KEY `index_comments_on_user_id` (`user_id`),
KEY `index_comments_on_article_id` (`article_id`),
KEY `index_comments_on_spec_id` (`spec_id`)
)
I have a `SELECT` query on those tables:
SELECT articles.*,
count(DISTINCT CASE
WHEN s.result = 'OK' THEN s.user_id
END) user_ok,
count(DISTINCT s.user_id) user_cnt,
count(CASE
WHEN s.result = 'OK' THEN 1
END) com_ok,
count(s.id) com_cnt,
bit_or(s.result = 'OK'
AND s.user_id = 1) cur_user_ok,
bit_or(s.user_id = 1) cur_user_all
FROM articles
LEFT JOIN comments s ON s.article_id = articles.id AND s.spec_id IS NULL
GROUP BY articles.id
ORDER BY articles.id ASC
Normally this query finishes in ~100ms.
However, after 2~3 days of uptime, the query time suddenly increases to about 200 seconds. This usually lasts 4~6 hours before it returns to its normal state (~100ms). If I restart MySQL during this period, there is about a 50% chance of resolving the slowdown (temporarily).
I have a backup routine that runs 3 times a day and uses `mysqldump` to dump all databases. I noticed that the moment the query time increases is almost always right after the backup routine finishes. However, the slowdown still occurs even if I disable the backup routine.
The memory usage (RSS reported by `ps`) always stays at about 500MB, and doesn't seem to increase over time or differ during the slowdown period. However, `Handler_read_next` increases dramatically during the slowdown: normally it is about 80K, but during the period it reaches 100M. `SHOW PROFILES` indicates that most of the time is spent in "Copying to tmp table".
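Those two symptoms can be tied together per query. A diagnostic sketch using standard MySQL status variables (the query is abbreviated here):

```sql
-- Reset the session status counters, run the slow query, then see how
-- much work the storage engine actually did for it.
FLUSH STATUS;

SELECT articles.id, count(s.id) com_cnt
FROM articles
LEFT JOIN comments s ON s.article_id = articles.id AND s.spec_id IS NULL
GROUP BY articles.id;

-- Handler_read_next counts index rows read in order; a value that is
-- huge relative to the result size means large scans or a bad plan.
SHOW SESSION STATUS LIKE 'Handler_read%';

-- Created_tmp_disk_tables > 0 means "Copying to tmp table" went to disk.
SHOW SESSION STATUS LIKE 'Created_tmp%';
```

Comparing these counters during a fast run and a slow run shows whether the same plan is doing more disk-bound work, or whether the plan itself has changed.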
Here are my `my.cnf` settings:
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
key_buffer_size = 256M
table_open_cache = 2000
sort_buffer_size = 4M
read_buffer_size = 1M
innodb_file_per_table = 1
innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 2
innodb_log_file_size = 256M
tmp_table_size = 128M
max_heap_table_size = 512M
query_cache_size = 0
query_cache_type = 0
What are the possible reasons?
Best Answer
The InnoDB buffer pool is essentially a huge cache, managed by a variant of LRU ('Least Recently Used'). If your working set fits into that cache, then `SELECT` queries will usually be relatively fast. On the other hand, if the working set doesn't fit, MySQL has to retrieve some of the data from disk (or whichever storage medium is used), and this is significantly slower.

Running `mysqldump` can bring huge amounts of otherwise unused data into the buffer pool, and at the same time the (potentially useful) data that is already there will be evicted and flushed to disk.

There are ways to avoid or minimise this problem:

- Adjust `innodb_old_blocks_time` and `innodb_old_blocks_pct` so that a one-pass scan cannot evict your working set (this can also make `mysqldump` and similar faster). Both these variables are dynamic, so they can be given special values just before you run `mysqldump`, and then restored to the original values once it has completed. For details, see Making the Buffer Pool Scan Resistant.
- Save the buffer pool state before running `mysqldump` or other tools that "pollute" the buffer pool, and then restore it afterwards.
- Replace the `mysqldump` backup method with Percona XtraBackup or another physical backup tool that doesn't access the InnoDB buffer pool as such. Physical backup methods are also faster, and can be less disruptive than `mysqldump`. The disadvantage is that you'll need the exact same MySQL version and configuration on the system where the backup is restored.
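The first option can be scripted around the backup. A minimal sketch; the "during backup" values are illustrative, and 0 / 37 are the MySQL 5.5 defaults for these variables:

```sql
-- Make the buffer pool scan resistant during the dump. With
-- innodb_old_blocks_time > 0, pages read by a scan must sit in the
-- "old" sublist for that many milliseconds before being promoted, so a
-- single pass like mysqldump cannot evict hot pages.
SET GLOBAL innodb_old_blocks_time = 1000;  -- milliseconds
SET GLOBAL innodb_old_blocks_pct  = 5;     -- shrink the "old" sublist

-- ... run mysqldump here ...

-- Restore the defaults (MySQL 5.5: 0 and 37) after the dump completes.
SET GLOBAL innodb_old_blocks_time = 0;
SET GLOBAL innodb_old_blocks_pct  = 37;
```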
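The XtraBackup route might look like the sketch below; the paths and the `backup` user are hypothetical, and `xtrabackup` is assumed to be installed:

```shell
# Physical backup with Percona XtraBackup instead of mysqldump: it
# copies the InnoDB data files directly rather than reading rows
# through the server, so it does not churn the buffer pool's LRU.
xtrabackup --backup --user=backup --password='secret' \
    --target-dir=/backups/base

# Apply the redo log so the backup is consistent and ready to restore.
xtrabackup --prepare --target-dir=/backups/base
```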