mysql_select_db() very slow on a busy server

MySQL, performance, PHP

I have a MySQL server with about 30 databases and 1,500 tables. The overall size of the data is 2 GB. The server has 16 GB of RAM and an AMD FX-6100 CPU. Most tables are MyISAM.
On average the server answers 675 queries per second, but in the afternoon it easily reaches 20k queries per second. The load of the machine barely gets over 1, so it looks like the server is just running along, singing and full of joy.

But in my PHP/Nginx stack I see over 50k cases per day where PHP needed longer than 1 second to do its work. The other 2 million PHP requests of that day finish in 40 to 150 milliseconds. I started to profile my PHP code and I can see that a single call to mysql_select_db() sometimes takes over 100 milliseconds. Because mysql_select_db() gets called multiple times, this can stack up to 15 seconds of wall time or more 🙁
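For reference, this is roughly how the timing can be captured; a minimal sketch using the same legacy mysql_* API, where the 100 ms threshold and the error_log() destination are placeholder choices:

<?php
// Minimal sketch: time each mysql_select_db() call and log the slow ones.
// The 100 ms threshold and error_log() target are placeholder choices.
function timed_select_db($database, $link = null)
{
    $start = microtime(true);
    $ok = ($link === null)
        ? mysql_select_db($database)
        : mysql_select_db($database, $link);
    $elapsed_ms = (microtime(true) - $start) * 1000;

    if ($elapsed_ms > 100) {
        error_log(sprintf('slow mysql_select_db("%s"): %.1f ms', $database, $elapsed_ms));
    }
    return $ok;
}

Replacing the direct calls with timed_select_db() in the hot path makes it easy to correlate the slow calls with the time of day and with the status counters further down.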

PHP processes run on multiple machines, but not on the MySQL server itself. I have up to 1,000 active, persistent connections. The machines are connected by 1 Gbit Ethernet; traffic between them is 60 Mbit/s at most.

So I suspect I'm doing something wrong in my MySQL config. It feels like I'm driving a race car with the handbrake on.

This is my config:

# MyISAM #
key_buffer_size                = 1G
myisam_recover                 = FORCE,BACKUP

# SAFETY #
max_allowed_packet             = 16M
max_connect_errors             = 1000000
skip_name_resolve
sql_mode                        = NO_UNSIGNED_SUBTRACTION
innodb                         = FORCE


# CACHES AND LIMITS #
tmp_table_size                 = 32M
max_heap_table_size            = 32M
query_cache_type               = 0
query_cache_size               = 0
max_connections                = 1700
thread_cache_size              = 100
open_files_limit               = 65535
table_definition_cache         = 4096
table_open_cache               = 10000

# INNODB #
innodb_flush_method            = O_DIRECT
innodb_log_files_in_group      = 2
innodb_log_file_size           = 256M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table          = 1
innodb_buffer_pool_size        = 2G

# LOGGING #
log_error                      = /var/log/mysql/mysql-error.log
log_queries_not_using_indexes  = 1
slow_query_log                 = 1
slow_query_log_file            = /var/log/mysql/mysql-slow.log

Here are the status variables for open tables and table locks:

Open_tables             5,523
Opened_tables           14 k
Table_locks_immediate   6,792 M
Table_locks_waited      6,171 k
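
For completeness, these counters can be read straight from the server; a minimal sketch, assuming an already-open mysql_* connection:

<?php
// Minimal sketch: read the table and lock counters shown above.
// Assumes a mysql_* connection has already been opened elsewhere.
$result = mysql_query(
    "SHOW GLOBAL STATUS WHERE Variable_name IN
     ('Open_tables', 'Opened_tables', 'Table_locks_immediate', 'Table_locks_waited')"
);
while ($row = mysql_fetch_assoc($result)) {
    printf("%-22s %s\n", $row['Variable_name'], $row['Value']);
}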

Best Answer

Since you say that most of the tables are MyISAM, I would check I/O and locking.

You can verify your disk throughput with a tool like iotop:

http://www.manpagez.com/man/1/iotop/

MyISAM relies on the file system cache to store data blocks, so MySQL has to compete for that cache with every other process running on your OS. The key buffer (key_buffer_size) holds index blocks only.

http://dev.mysql.com/doc/refman/5.0/en/myisam-key-cache.html
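
If the key buffer itself is a bottleneck, the miss rate gives a quick indication; a minimal sketch, where treating anything above roughly 1% as suspicious is only a rule of thumb on my part:

<?php
// Minimal sketch: estimate the MyISAM key cache miss rate.
// Key_read_requests = total index block read requests,
// Key_reads         = requests that had to go to disk.
$result = mysql_query(
    "SHOW GLOBAL STATUS WHERE Variable_name IN ('Key_reads', 'Key_read_requests')"
);
$status = array();
while ($row = mysql_fetch_assoc($result)) {
    $status[$row['Variable_name']] = (float) $row['Value'];
}
if ($status['Key_read_requests'] > 0) {
    printf("key cache miss rate: %.4f%%\n",
        100 * $status['Key_reads'] / $status['Key_read_requests']);
}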

If you notice that MySQL is generating a lot of I/O activity, that may mean your OS is starved for RAM in spite of the 16 GB you say is available.

I ran into a similar problem lately with a WordPress install that relied heavily on two 2 GB MyISAM tables.

If your SELECT statements read from tables that are being modified at the same time by DML (INSERT, UPDATE or DELETE) statements, that might explain why Table_locks_waited is so high on your system: MyISAM only supports table-level locks, so every writer blocks every concurrent reader of that table. Those DML queries could be coming from a website, a cron job or some other process. If locking contention is the problem, you may want to consider converting the tables that cause the most locking to InnoDB, which locks rows instead of whole tables. Tools like Jet Profiler and Innotop can help you identify exactly which tables those are.
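
As a starting point for such a conversion, here is a minimal sketch that lists the MyISAM tables of one schema, biggest first, and prints the matching ALTER statements for a manual migration. It assumes the same legacy mysql_* API as the question, 'mydb' is a placeholder schema name, and since the conversion rewrites each table you would run the generated statements one by one during a quiet period:

<?php
// Minimal sketch: list MyISAM tables in one schema, biggest first,
// and print ALTER statements for a manual InnoDB conversion.
$schema = 'mydb';  // placeholder schema name

$result = mysql_query(sprintf(
    "SELECT TABLE_NAME, (DATA_LENGTH + INDEX_LENGTH) AS total_bytes
       FROM information_schema.TABLES
      WHERE TABLE_SCHEMA = '%s' AND ENGINE = 'MyISAM'
      ORDER BY total_bytes DESC",
    mysql_real_escape_string($schema)
));

while ($row = mysql_fetch_assoc($result)) {
    printf("-- %s (%.1f MB)\nALTER TABLE `%s`.`%s` ENGINE=InnoDB;\n",
        $row['TABLE_NAME'], $row['total_bytes'] / (1024 * 1024),
        $schema, $row['TABLE_NAME']);
}

Converting only the handful of tables that Jet Profiler or Innotop flag as lock hotspots is usually enough; there is no need to migrate all 1,500 tables in one go.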