Coming at this from the DB point of view, rather than WordPress.
Executive Summary
For each recommendation your optimiser script has generated, check the appropriate MySQL documentation to see whether it applies to your setup. Then tweak your `my.cnf` little by little.
Your overall objective is for MySQL to load as much as it can into memory and avoid hitting the disk, so increasing the various caches will probably help, as long as you don't exceed the physical memory in your server (you need to account for the other processes running on it). Then make sure your tables are indexed appropriately.
Your Specific Issues
But to address each of the items you've listed:
> Query cache is supported but not enabled; Perhaps you should set the query_cache_size
The query cache is essentially a big hash table mapping select statements to result sets: MySQL checks whether an identical query is already in the cache and, if so, returns the cached result without re-running the query. If you enable it, run `SHOW STATUS LIKE 'Qcache%'` periodically to see how well utilised it is (a huge cache which isn't used is just a waste of memory). My experience of the query cache is that it seems really good in theory, but didn't provide much help in practice.
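As a rough sketch of what that check looks like, assuming a pre-8.0 MySQL (the query cache was removed in 8.0) and an example size of 32M:

```sql
-- Give the cache some memory (example size; persist the setting in my.cnf too).
-- query_cache_type defaults to ON in MySQL 5.x; a size of 0 is what disables it.
SET GLOBAL query_cache_size = 32 * 1024 * 1024;

-- After the site has run for a while, inspect the counters:
SHOW STATUS LIKE 'Qcache%';
-- Rough hit rate: Qcache_hits / (Qcache_hits + Com_select).
-- Lots of Qcache_lowmem_prunes means the cache is too small or churning.
```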
> You have had 11025 queries where a join could not use an index properly; You should enable "log-queries-not-using-indexes" then look for non indexed joins in the slow query log.
Indexes are what make or break a database. Enable the slow query log and use `EXPLAIN` to see which queries are not using indexes. Fix the slowest ones first.
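A minimal sketch of that workflow, assuming MySQL 5.0/5.1-era option names and a made-up WordPress-style query purely for illustration:

```sql
-- my.cnf, [mysqld] section (option names from the MySQL 5.0/5.1 era):
--   log-slow-queries = /var/log/mysql/mysql-slow.log
--   long_query_time  = 2
--   log-queries-not-using-indexes

-- Take a query from the slow log and look at its plan:
EXPLAIN SELECT p.ID, p.post_title
FROM wp_posts p
JOIN wp_postmeta m ON m.post_id = p.ID
WHERE m.meta_key = '_thumbnail_id';
-- Rows showing "type: ALL" / "key: NULL" are full table scans;
-- those are the joins the tuner is complaining about.
```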
> You have a total of 834 tables; You have 528 open tables; Current table_cache hit rate is 8%, while 132% of your table cache is in use; You should probably increase your table_cache
MyISAM tables exist as sets of files on disk (a `.frm`, `.MYD` and `.MYI` file each), but MySQL doesn't keep them all open all the time, only the ones used recently. The `table_cache` setting controls how many it will keep open. This is related to the number of connections, so be careful about setting it too high, but it sounds like you have stacks of memory available, so increase it until all tables are cached.
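A quick way to sanity-check this (the variable is named `table_open_cache` on MySQL 5.1+; 1024 is just an example value):

```sql
-- Open_tables = currently open, Opened_tables = total opened since startup.
SHOW GLOBAL STATUS LIKE 'Open%tables';

-- If Opened_tables keeps climbing under steady traffic, the cache is too small:
SET GLOBAL table_cache = 1024;   -- persist it in my.cnf as well
```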
> Current Lock Wait ratio = 1 : 529; You may benefit from selective use of InnoDB.
MyISAM tables are great for reading, but whenever you `UPDATE`, `INSERT` or `DELETE` from them, MySQL locks the whole table. So if, for example, you have a `Page` table with a `HitCount` field which is incremented whenever the page is loaded, the entire `Page` table is locked and no other connections can read from it. I've seen some particularly nasty combinations of read/write queries which would lock tables for minutes or even hours, effectively killing the site.
InnoDB isn't as fast at reading, but it supports more granular write operations (locking only the row being updated), so it is a better fit for tables which are written to frequently. Converting tables with large numbers of `UPDATE`, `INSERT` and `DELETE` operations to InnoDB may decrease locking and increase performance. Many apps which use MySQL default to InnoDB across the board for this very reason.
I originally thought you'd need to drop the old MyISAM tables and re-create them as InnoDB, but apparently an `ALTER TABLE` statement is all you require to change the engine type. It does take a full table lock while the table is rebuilt, though, so there will be some time when you can't run queries against it.
I don't know if WordPress assumes InnoDB or MyISAM tables. Please check WordPress before altering the tables' engine.
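If you do go ahead, the conversion itself is a one-liner per table (`wp_comments` here is just an example; repeat for each table you want to convert):

```sql
-- Rebuilds the table as InnoDB; holds a full table lock while it runs.
ALTER TABLE wp_comments ENGINE = InnoDB;

-- Confirm which engine each table now uses (backslash escapes the _ wildcard):
SHOW TABLE STATUS LIKE 'wp\_%';
```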
> Current max_heap_table_size = 16 M; Current tmp_table_size = 16 M; Of 107071 temp tables, 25% were created on disk; Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables
MySQL requires memory for sorting (`ORDER BY`), `JOIN`s and other operations involving large chunks of data, but only up to a certain limit; beyond that limit the operation spills out onto disk (so that one giant `ORDER BY` doesn't use up gigabytes of memory which could be better spent elsewhere). Increasing `max_heap_table_size` and `tmp_table_size` means fewer operations run on the slow disk and more in fast memory (a discussion about these variables). 64M should be large enough for most cases; making these too big just wastes memory.
> You have 1290 out of 1145245 that take longer than 2.000000 sec. to complete
Use the slow query log to figure out what these slow queries are. That ratio (about 0.11%) is pretty low, but a few really slow queries may still be causing bigger problems elsewhere.
> Current max_connections = 500; Current threads_connected = 501; You should raise max_connections
Once `threads_connected` reaches `max_connections`, new connections are rejected, so increase `max_connections` (as you already have done).
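To watch your headroom and bump the limit without a restart (the 600 value is illustrative; persist whatever you settle on in my.cnf):

```sql
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';  -- high-water mark since startup
SET GLOBAL max_connections = 600;
```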
As you can see, it's not always obvious what to do without knowing what sorts of queries WordPress is generating.
I would avoid placing MySQL and PostgreSQL on the same server. They compete for the same resources. If you can, port everything to one RDBMS. My obvious choice would be PostgreSQL.
Then you can set `shared_buffers` to something like 500 MB and `effective_cache_size` to something like 1.5 GB. Be sure to read the hints in the manual.
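In config terms that would look something like the sketch below, using the values from above (a starting point, not a tuned setup; changing `shared_buffers` requires a restart):

```sql
-- postgresql.conf:
--   shared_buffers       = 500MB
--   effective_cache_size = 1500MB

-- Verify from a psql session after restarting:
SHOW shared_buffers;
SHOW effective_cache_size;
```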
But I would also recommend adding more physical RAM. 2 GB is not much, hardly enough for good performance with millions of rows. A few more GB of RAM shouldn't cost much.
If you have to stick with your setup, 250 MB for Postgres seems reasonable. If MySQL has three times as much traffic, less might be better overall, say 128 MB.
See the basics for performance optimization in the Postgres Wiki.
Best Answer
There's no formula here. You should limit your connections according to what you think is reasonable for your application's needs.
Typically, servers whose applications use a connection pool shouldn't need more than a few hundred concurrent connections. Small to medium-sized websites may get by with 100-200.
I usually set up a new server with a `max_connections` value of around 500-800 and see how it goes. You can always change it dynamically via `SET GLOBAL max_connections`.

Make sure, though, that you set up a proper `open_files_limit`. On Linux, your process is limited to 1024 open files by default. This is very low, since every thread, every connection and, of course, every table file uses a file handle on Linux. So set `open_files_limit` to some generous number (say 8192) so the operating system can keep up with your many connections.

I should note I have worked with MySQL servers with thousands of open connections - it works fine. But most of the time, the vast majority of these connections just sit there doing nothing (idle).
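Concretely, something like this (the numbers are just starting points; note that `open_files_limit` is read-only at runtime, so it has to go in my.cnf, and the OS ulimit must allow it):

```sql
-- my.cnf, [mysqld] section:
--   max_connections  = 800
--   open_files_limit = 8192

-- max_connections can also be raised on a running server
-- (open_files_limit needs a restart to take effect):
SET GLOBAL max_connections = 800;

-- Check what the server actually got from the operating system:
SHOW VARIABLES LIKE 'open_files_limit';
```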
To sum up, I would use what appears to be normal application needs + some threshold for spike events.