MySQL hangs with nothing in error log

MySQL

I'm having an issue where my MySQL server (part of a LAMP stack running WordPress) hangs every few hours (giving WordPress a "database connection error").

The error log shows nothing except the startup messages after a restart and a "normal shutdown" when I restart it. The only thing of note is that during a restart it says it is "purging the queue", and then prints ~300 lines like this:

151026 19:04:52 [Warning] /usr/sbin/mysqld: Forcing close of thread 31577  user: 'root'

The only other weird behaviour I've noticed is that running mysqladmin status shows "Questions" increasing rapidly, presumably until it hangs. I've looked online and can't find much discussion of what a normal range or rate for "Questions" is. E.g., this is after ~10 minutes of uptime:

Uptime: 521  Threads: 3  Questions: 16667  Slow queries: 0  Opens: 107  Flush tables: 1  Open tables: 100  Queries per second avg: 31.990
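(To watch how fast counters like Questions grow rather than eyeballing the absolute totals, mysqladmin's --relative/--sleep flags can print per-interval deltas — a sketch, assuming default credentials:)

```shell
# Print status every 10 seconds; -r (relative) shows the change since
# the previous sample instead of the cumulative counter value.
mysqladmin -u root -p -r -i 10 extended-status | grep -E 'Questions|Threads_connected'
```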

I have query_cache_size set to 50M; otherwise my.cnf is the Ubuntu 14.04 default. This is the performance analysis from the mysqltuner Perl script:

-------- Performance Metrics -------------------------------------------------  
[--] Up for: 15m 52s (29K q [31.475 qps], 1K conn, TX: 73M, RX: 4M)  
[--] Reads / Writes: 88% / 12%  
[--] Binary logging is disabled  
[--] Total buffers: 226.0M global + 2.7M per thread (151 max threads)  
[OK] Maximum reached memory usage: 290.5M (1.81% of installed RAM)  
[OK] Maximum possible memory usage: 631.8M (3.94% of installed RAM)  
[OK] Slow queries: 0% (0/29K)  
[OK] Highest usage of available connections: 15% (24/151)  
[OK] Aborted connections: 0.00%  (0/1515)  
[OK] Query cache efficiency: 71.8% (17K cached / 24K selects)  
[OK] Query cache prunes per day: 0  
[OK] Sorts requiring temporary tables: 0% (10 temp sorts / 1K sorts)  
[!!] Temporary tables created on disk: 59% (1K on disk / 1K total)  
[OK] Thread cache hit rate: 97% (39 created / 1K connections)  
[OK] Table cache hit rate: 93% (103 open / 110 opened)  
[OK] Open file limit used: 4% (51/1K)  
[OK] Table locks acquired immediately: 100% (8K immediate / 8K locks)  

-------- MyISAM Metrics -----------------------------------------------------  
[!!] Key buffer used: 18.2% (3M used / 16M cache)  
[OK] Key buffer size / total MyISAM indexes: 16.0M/2.0M  
[OK] Read Key buffer hit rate: 100.0% (6 cached / 0 reads)  
[OK] Write Key buffer hit rate: 100.0% (8 cached / 0 writes)  

-------- InnoDB Metrics -----------------------------------------------------  
[--] InnoDB is enabled.  
[!!] InnoDB buffer pool / data size: 128.0M/292.9M  
[OK] InnoDB buffer pool instances: 1  
[!!] InnoDB Used buffer: 75.48% (6183 used/ 8192 total)  
[OK] InnoDB Read buffer efficiency: 99.70% (1960793 hits/ 1966769 total)  
[!!] InnoDB Write buffer efficiency: 0.00% (0 hits/ 1 total)  
[OK] InnoDB log waits: 0.00% (0 waits / 894 writes)  

The hanging has become more frequent since an increase in logged-in users (people accessing the database) and since changes to my Apache config that fixed earlier Apache crashes. My Apache prefork MPM is set as follows:

StartServers              20
MinSpareServers           10  
MaxSpareServers           20  
ServerLimit               1000  
MaxRequestWorkers         500  
MaxConnectionsPerChild    10000  

The server is a DigitalOcean droplet with 8 CPU cores and 16GB RAM.

Best Answer

  • Do not have WP connecting as root; create a separate user for it.

  • Change innodb_buffer_pool_size to 1G.

  • Turn on the slow query log and set long_query_time = 1; you have some naughty queries that need to be investigated.

  • What is MaxClients in Apache? Not more than 20, I hope.
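The first point might look like this (the database name, user name, and password are placeholders — substitute your own, then update DB_USER/DB_PASSWORD in wp-config.php):

```sql
-- Hypothetical names: replace wordpress_db / wp_user / the password.
CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON wordpress_db.* TO 'wp_user'@'localhost';
FLUSH PRIVILEGES;
```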
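The buffer pool and slow-log suggestions translate to something like the following under [mysqld] in my.cnf (a sketch — the log file path is an assumption, and on the MySQL 5.5 that ships with Ubuntu 14.04, changing innodb_buffer_pool_size requires a restart):

```ini
[mysqld]
# InnoDB data is ~293M, so a 1G pool holds the whole working set.
innodb_buffer_pool_size = 1G

# Log any query that takes longer than 1 second.
slow_query_log          = 1
slow_query_log_file     = /var/log/mysql/mysql-slow.log
long_query_time         = 1
```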