Coming at this from the DB point of view, rather than WordPress.
Executive Summary
For each recommendation your optimiser script has generated, check the appropriate MySQL documentation to see how it will impact your setup. Then tweak your my.cnf little by little.
Your overall objective is for MySQL to load as much as it can into memory and avoid hitting the disk, so increasing the various caches will probably help, as long as you don't exceed the physical memory in your server (you need to take into account the other processes running on it). Then, make sure your tables are indexed appropriately.
Your Specific Issues
But to address each of the items you've listed:
Query cache is supported but not enabled; Perhaps you should set the query_cache_size
The query cache is like a big hash table mapping each SELECT statement to its result table. MySQL checks whether an identical query exists in the cache and, if so, returns the cached result without re-running the query. If you enable it, run SHOW GLOBAL STATUS LIKE 'Qcache%' to see how well utilised it is (a huge cache which isn't used is just a waste of memory). My experience of the query cache is that it seems really good in theory, but didn't provide much help in practice.
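A minimal sketch of trying this out (the sizes here are only illustrative, not recommendations; note that in MySQL 5.5+ the cache cannot be enabled at runtime if the server was started with it disabled, in which case these settings belong in my.cnf followed by a restart):

```sql
-- Enable the query cache and give it a modest, illustrative size.
SET GLOBAL query_cache_type = 1;
SET GLOBAL query_cache_size = 32 * 1024 * 1024;  -- 32M

-- Later, check utilisation: a high Qcache_hits relative to Com_select
-- means the cache is earning its memory; Qcache_lowmem_prunes climbing
-- means it is too small (or too churned to be useful).
SHOW GLOBAL STATUS LIKE 'Qcache%';
```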
You have had 11025 queries where a join could not use an index properly; You should enable "log-queries-not-using-indexes" then look for non indexed joins in the slow query log.
Indexes are what make or break a database. Enable the slow query log and use EXPLAIN to see which queries are not using indexes. Fix the slowest ones first.
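As a sketch, you can enable the relevant logging at runtime and then inspect a suspect query's plan (the WordPress-style table and column names below are hypothetical, just to show the shape of the check):

```sql
-- Log slow queries, including those that join without a usable index.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL log_queries_not_using_indexes = 'ON';

-- For any query that appears in the log, inspect its execution plan:
EXPLAIN SELECT p.post_title
FROM wp_posts p
JOIN wp_postmeta m ON m.post_id = p.ID
WHERE m.meta_key = 'some_key';
-- In the output, type = ALL (full table scan) and key = NULL are the
-- red flags that an index is missing.
```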
You have a total of 834 tables; You have 528 open tables; Current table_cache hit rate is 8%, while 132% of your table cache is in use; You should probably increase your table_cache
MyISAM tables exist as several files on disk (a .frm definition file plus .MYD data and .MYI index files), but MySQL doesn't keep them all open all the time, only the ones used recently. The table_cache setting controls how many it will keep open. This interacts with the number of connections, so be careful about setting it too high, but it seems you have stacks of memory available, so increase it until all your tables are cached.
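To judge whether the cache is big enough, compare the counters before raising it (the value below is illustrative; on MySQL 5.6+ the variable is dynamic and named table_open_cache, otherwise set it in my.cnf and restart):

```sql
-- Open_tables is how many are open now; Opened_tables is how many have
-- ever been opened. If Opened_tables climbs steadily, the cache is
-- churning and is probably too small.
SHOW GLOBAL STATUS LIKE 'Open%tables';

-- Illustrative bump (5.6+ only at runtime):
SET GLOBAL table_open_cache = 1024;
```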
Current Lock Wait ratio = 1 : 529; You may benefit from selective use of InnoDB.
MyISAM tables are great for reading, but whenever you UPDATE, INSERT or DELETE from them, MySQL locks the whole table. So if, for example, you have a Page table with a HitCount field which is incremented whenever the page is loaded, the entire Page table is locked and no other connections can read from it. I've seen some particularly nasty combinations of read/write queries which would lock tables for minutes or even hours, effectively killing the site.
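To make the example concrete (the Page table and its columns are hypothetical, as above):

```sql
-- Under MyISAM, this single-row UPDATE takes a write lock on the WHOLE
-- Page table; every concurrent SELECT against Page blocks until it
-- finishes. Under InnoDB, only the one row is locked.
UPDATE Page
SET HitCount = HitCount + 1
WHERE PageId = 42;
```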
InnoDB isn't as fast at reading, but it supports more granular write operations (locking only the one record being updated), so it is a better fit for tables which are written to more frequently. Converting tables with large numbers of UPDATE, INSERT and DELETE operations to InnoDB may decrease locking and increase performance. Many apps which use MySQL default to InnoDB across the board for this very reason.
I think you need to drop the old MyISAM tables and re-create them as InnoDB, so there will be downtime involved.
Apparently an ALTER TABLE statement is all you require to change the engine type, although a full table lock is taken while it runs, so there will be some time during which you can't run queries against that table.
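A sketch of the conversion (the database name is illustrative; the wp_posts table is the hypothetical WordPress example again):

```sql
-- First, see which tables are still MyISAM:
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'wordpress'
  AND ENGINE = 'MyISAM';

-- Then convert one table at a time; each ALTER takes a full table lock
-- for its duration, so do this in a quiet period.
ALTER TABLE wp_posts ENGINE = InnoDB;
```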
I don't know if WordPress assumes InnoDB or MyISAM tables. Please check WordPress before altering the tables' engine.
Current max_heap_table_size = 16 M; Current tmp_table_size = 16 M; Of 107071 temp tables, 25% were created on disk; Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables
MySQL requires memory for sorting (ORDER BY), JOINs and other operations involving large chunks of data, but only up to a certain limit; beyond that limit, the operation spills out onto disk (so that one giant ORDER BY doesn't use up gigabytes of memory which could be better spent elsewhere). Increasing max_heap_table_size and tmp_table_size means fewer operations run on the slow disk and more in fast memory (there is a discussion of these variables in the MySQL documentation). 64M should be large enough for most cases; making these too big just wastes memory.
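A sketch of adjusting and then monitoring these (the 64M figure matches the suggestion above; raise both together, because an in-memory temp table is limited by the smaller of the two):

```sql
SET GLOBAL tmp_table_size      = 64 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 64 * 1024 * 1024;

-- Afterwards, watch the ratio of disk-based temp tables to the total;
-- it should fall well below the 25% reported by the tuning script.
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
```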
You have 1290 out of 1145245 that take longer than 2.000000 sec. to complete
Use the slow query log to figure out which queries these are. Although the ratio (0.11%) is pretty low, a few really slow queries may be causing bigger problems.
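A sketch of capturing them (the 2-second threshold matches the figure reported above):

```sql
-- Capture anything slower than 2 seconds into the slow query log.
SET GLOBAL slow_query_log   = 'ON';
SET GLOBAL long_query_time  = 2;

-- Find out where the log is being written:
SHOW VARIABLES LIKE 'slow_query_log_file';
```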
Current max_connections = 500; Current threads_connected = 501; You should raise max_connections
Once you have more threads_connected than max_connections, new connections are rejected. Increase max_connections (as you already have done).
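To check whether the limit is still being hit and to raise it at runtime (the 600 is only illustrative; persist whatever value you choose in my.cnf so it survives a restart):

```sql
-- High-water mark of simultaneous connections since the server started:
SHOW GLOBAL STATUS LIKE 'Max_used_connections';

-- Raise the ceiling at runtime:
SET GLOBAL max_connections = 600;
```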
As you can see, it's not always obvious what to do without knowing what sorts of queries WordPress is generating.
...even surpassing its theoretical maximum possible allocation.
[OK] Maximum possible memory usage: 7.3G (46% of installed RAM)
There is not actually a way to calculate maximum possible memory usage for MySQL, because there is no cap on the memory it can request from the system.
The calculation done by mysqltuner.pl is only an estimate, based on a formula that doesn't take into account all possible variables, because if all possible variables were taken into account, the answer would always be "infinite." It's unfortunate that it's labeled this way.
Here is my theory on what's contributing to your excessive memory usage:
thread_cache_size = 128
Given that your max_connections is set to 200, the value of 128 for thread_cache_size seems far too high. Here's what makes me think this might be contributing to your problem:
When a thread is no longer needed, the memory allocated to it is released and returned to the system unless the thread goes back into the thread cache. In that case, the memory remains allocated.
http://dev.mysql.com/doc/refman/5.6/en/memory-use.html
If your workload causes even an occasional client thread to require a large amount of memory, those threads may be holding onto that memory, then going back to the pool and sitting around, continuing to hold on to memory they don't technically "need" any more, on the premise that holding on to the memory is less costly than releasing it if you're likely to need it again.
I think it's worth a try to do the following, after first making a note of how much memory MySQL is using at the moment.
Note how many threads are currently cached:
mysql> show status like 'Threads_cached';
+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| Threads_cached | 9     |
+----------------+-------+
1 row in set (0.00 sec)
Next, disable the thread cache.
mysql> SET GLOBAL thread_cache_size = 0;
This disables the thread cache, but the cached threads will stay in the pool until they're used one more time. Disconnect from the server, then reconnect and repeat.
mysql> show status like 'Threads_cached';
Continue disconnecting, reconnecting, and checking until the counter reaches 0.
Then, see how much memory MySQL is holding.
You may see a decrease, possibly significant, and then again you may not. I tested this on one of my systems, which had 9 threads in the cache. Once those threads had all been cleared out of the cache, the total memory held by MySQL did decrease... not by much, but it does illustrate that threads in the cache do release at least some memory when they are destroyed.
If you see a significant decrease, you may have found your problem. If you don't, then there's one more thing that needs to happen, and how quickly it can happen depends on your environment.
If the theory holds that the other threads (the ones currently servicing active client connections) have significant memory allocated to them, either from recent work in their current client session or from memory-hungry work done by another connection before they went back into the pool, then you won't see the full potential reduction in memory consumption until those threads are allowed to die and be destroyed. Presumably your application doesn't hold connections forever, but how long it takes to know for sure whether there's a difference will depend on whether you have the option of cycling your application (dropping and reconnecting the client threads) or whether you'll have to wait for them to be dropped and reconnected over time on their own.
But... it seems like a worthwhile test. You should not see a substantial performance penalty from setting thread_cache_size to 0. Fortunately, thread_cache_size is a dynamic variable, so you can freely change it with the server running.
Best Answer
It makes little sense to set this lower than table_open_cache. See the default value described in https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_open_files_limit
You probably don't need to set this so high considering you only have < 900 tables open.
Note that table_cache is an obsolete synonym for table_open_cache. The last one in the my.cnf file takes priority. So you have set your table_open_cache to 60000, overwriting the earlier setting. I suggest deciding on the value you want and use one line to set it, to avoid confusion.
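A sketch of the cleaned-up my.cnf fragment (the value is illustrative; the point is to have exactly one line for this setting, under [mysqld]):

```ini
[mysqld]
# table_cache is the obsolete name for this variable; set the modern
# name once, and only once, so later lines can't silently override it.
table_open_cache = 2000
```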
This is fine.
The key buffer is only for MyISAM. If you have no MyISAM tables, the default of 8M is fine.
If you have lots of MyISAM tables, you should convert them to InnoDB. ;-)
Since you upgraded to 64G, this seems low. On the other hand, you have only about 2G of InnoDB data and indexes, which is not unusual for a simple Wordpress site. There's no benefit in increasing the buffer pool any higher.
The default number of buffer pool instances is 8 as of MySQL 5.6.6. Since you have 24 cores, you might want to split the buffer pool into at least 8 instances.
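A sketch of the config (sizes illustrative; note that MySQL uses a single instance when innodb_buffer_pool_size is under 1G, and each instance works best when it gets at least 1G, so size the pool accordingly):

```ini
[mysqld]
# Split the buffer pool to reduce mutex contention on a many-core box.
innodb_buffer_pool_size      = 8G
innodb_buffer_pool_instances = 8
```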
16x the default of 256K seems too high. Ideally, you shouldn't need to rely on the join buffer because joins should be handled by indexes.
In MySQL 5.6, the join buffer is also used for Batched Key Access optimization, but this doesn't kick in very often.
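A sketch of dialling it back (this buffer is allocated per joined table per connection, so a large value multiplies quickly; SET GLOBAL only affects connections made after the change):

```sql
-- Return the join buffer to the MySQL 5.6 default of 256K.
SET GLOBAL join_buffer_size = 256 * 1024;
```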
Also 16x the default of 256K. Useful for avoiding sort merge passes when you do filesorts of large tables. Monitor SHOW GLOBAL STATUS LIKE 'Sort_merge_passes', but I bet you can decrease this.

Also 16x the default of 256K. Used for MyISAM tables, or for Multi-Range Read optimization for any storage engine. Very rare to need this.
These are probably excessive. I start out small, and increase tmp_table_size if I see a steady increase in SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables'.

Probably unnecessary to have a thread cache at all on your system. The default would be 33. On a modern version of Linux, creating a new thread is quick anyway, so the thread cache is not as important as it is on Solaris, FreeBSD, etc.