I have a MySQL server running on Ubuntu. If I restart MySQL, everything is fine. However, if I restart the OS, my queries take anywhere from 10 to 100 times as long. The only "solution" to the problem I have found is to run OPTIMIZE TABLE
on every table. Afterwards everything performs normally again. However, rebuilding the whole database after every OS reboot is obviously extremely painful and not a viable long-term solution.
I reboot the OS by
- stopping all programs querying the database
- waiting for queries to finish executing
- running `stop mysql`
- making sure that the server stopped with `ps -C mysql` and `ps -C mysqld`
- running `reboot`
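The stop-and-verify steps above can be sketched as a short script (a sketch only: `stop mysql` is the Upstart job command on Ubuntu 12.04, and the final `reboot` is replaced here by a message):

```shell
#!/bin/sh
# Stop MySQL via Upstart and wait until mysqld has actually exited.
# On Ubuntu 12.04, 'stop mysql' is equivalent to 'service mysql stop'.
stop mysql 2>/dev/null || service mysql stop 2>/dev/null

# Poll until no mysqld process is left; mysqld may still be flushing
# buffers to disk after the stop command returns.
while ps -C mysqld >/dev/null 2>&1; do
    sleep 1
done
echo "mysqld has exited; safe to run 'reboot'"
```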
If I leave out the OS reboot and just start up MySQL again, everything is fine.
Additional information:
- The error log doesn't contain anything pointing to the issue, nor do I get any errors. Everything is just an order of magnitude or two slower.
- All tables are affected. I am using InnoDB, but the very same problem existed when I was using MyISAM.
- Usually MySQL uses 80% of the 8GB RAM, but after the OS reboot only about 1GB is used. CPU usage goes down from 30-80% of one core to about 1%.
- While the OPTIMIZE queries run, the full 80% of RAM is in use again and CPU usage climbs back to normal values, too.
Specs:
- MySQL 5.5.35-0ubuntu0.12.04.2
- Ubuntu 12.04.4 LTS
- 8GB RAM, 4 CPU cores
`my.cnf` file:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
local-infile=0
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
innodb_file_per_table
innodb_autoinc_lock_mode = 0
innodb_thread_concurrency=8
innodb_buffer_pool_size=6G
innodb_log_file_size=1600M
innodb_additional_mem_pool_size=1M
innodb_log_buffer_size=4M
innodb_flush_log_at_trx_commit=0
innodb_read_io_threads=3000
innodb_write_io_threads=7000
innodb_io_capacity=10000
key_buffer_size = 1024M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
[isamchk]
key_buffer = 16M
!includedir /etc/mysql/conf.d/
`key_buffer_size` is 1GB, which is about as much RAM as is in use after the reboot. Could that be connected?
Edit: When logging the query execution times per table, I noticed that after repairing three of the four tables, the query times of those three repaired tables sometimes still spike. So I am not completely sure that all tables break. Maybe a single table is dragging down the performance of the whole server? However, that doesn't really match the low percentage of resources used.
Best Answer
When you reboot the OS you throw away all of the disk reads that had previously been placed in the operating system's disk cache (RAM). After the reboot, the operating system has to read the MySQL data from disk again, which is orders of magnitude slower than reading it from cache (RAM).
OPTIMIZE "fixes" this because it makes MySQL read all of the table data from disk, which lets the OS cache it again.
Linux uses most free RAM as disk cache. You can see this with the `free` command. Restarting MySQL has no effect on this cache, because it is the operating system, not MySQL, that has cached the data.
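For illustration, the cache shows up in the "cached" column of `free` (the numbers below are made up for an 8GB machine, not taken from this server):

```shell
free -m
# Illustrative output on an 8GB box with a warm cache:
#              total   used   free  shared  buffers  cached
# Mem:          7986   7612    374       0      210    5120
# -/+ buffers/cache:   2282   5704
# Swap:         2047      0   2047
#
# "cached" is the kernel page cache. Right after an OS reboot it is
# close to zero; it grows as MySQL's data files are read from disk.
```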
To "fix" your problem you will have to force the OS to read the data from disk so that it caches it again. You can use the Unix `dd` tool to do this.
Tip: you can verify how much of a given file is in the disk cache with the `vmtouch` command. It can also purge a file from the filesystem cache, or force a file into it. See http://hoytech.com/vmtouch/
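A minimal cache-warming sketch along those lines, assuming the datadir from the question's my.cnf (`/var/lib/mysql`, overridable via `DATADIR`) and the usual InnoDB/MyISAM file names:

```shell
# Read every MySQL data file once so the kernel pulls it into the
# page cache; the data copied to /dev/null is simply discarded.
DATADIR="${DATADIR:-/var/lib/mysql}"
find "$DATADIR" -type f \
     \( -name '*.ibd' -o -name 'ibdata*' -o -name '*.MYD' -o -name '*.MYI' \) \
     2>/dev/null |
while IFS= read -r f; do
    dd if="$f" of=/dev/null bs=1M 2>/dev/null
done
```

Run it as a user that can read the data files (e.g. root or `mysql`); subsequent queries should then hit the warm cache instead of the disk.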