MySQL – Poor MySQL Performance for Drupal on Shared Hosting

MySQL, optimization, performance, phpmyadmin

My Drupal site is on a shared host where I cannot modify the my.cnf configurations but the database seems to be performing poorly. Here are some of the flagged issues that I can see…

In phpMyAdmin, the following are flagged RED under the Status tab:

Slow_queries    613.2 k 
Innodb_buffer_pool_reads    15.9 M
Innodb_log_waits    10
Innodb_row_lock_time_avg    740
Innodb_row_lock_time_max    45.1 k
Innodb_row_lock_waits   296
Handler_read_rnd    14.7 G
Handler_read_rnd_next   1.2 T
Qcache_lowmem_prunes    142.2 M
Slow_launch_threads 17
Created_tmp_disk_tables 31.7 M
Select_full_join    1.2 M
Select_range_check  212.8 k
Sort_merge_passes   174.8 k
Opened_tables   36.5 M  
Table_locks_waited  512.5 k

Running http://day32.com/MySQL/tuning-primer.sh also flagged these issues:

QUERY CACHE
Query cache is enabled
Current query_cache_size = 512 M
Current query_cache_used = 488 M
Current query_cache_limit = 32 M
Current Query cache Memory fill ratio = 95.38 %
Current query_cache_min_res_unit = 4 K
However, 136119288 queries have been removed from the query cache due to lack of memory
Perhaps you should raise query_cache_size

JOINS
Current join_buffer_size = 4.00 M
You have had 1117666 queries where a join could not use an index properly
You have had 207168 joins without keys that check for key usage after each row
join_buffer_size >= 4 M
This is not advised

TABLE CACHE
Current table_open_cache = 2048 tables
Current table_definition_cache = 256 tables
You have a total of 200 tables
You have 2048 open tables.
Current table_cache hit rate is 0%, while 100% of your table cache is in use
You should probably increase your table_cache

TEMP TABLES
Current max_heap_table_size = 256 M
Current tmp_table_size = 256 M
Of 53194631 temp tables, 36% were created on disk
Perhaps you should increase your tmp_table_size and/or max_heap_table_size
to reduce the number of disk-based temporary tables

I'm not a DBA and have very little experience with database optimization. I would really appreciate some help. What can I do to improve database performance if I can't modify the my.cnf configuration?

EDIT:
The tables have indexes, but perhaps they aren't optimal? My database knowledge is limited, so it's hard for me to say. Drupal generates TONS of queries to the database. I did check the slow query log, and for one of the queries here is the result of EXPLAIN:

EXPLAIN SELECT menu_links.link_path AS link_path, menu_links.mlid AS mlid, menu_links.router_path AS router_path, menu_links.updated AS updated
FROM
dr_menu_links menu_links
WHERE ( (updated = '1') OR( (router_path NOT IN  ('rss.xml', 'node', 'checkout', 'import', 'image_captcha', 'my-favourites', 'my-orders', 'print', 'search')) AND (external = '0') AND (customized = '1') ))

+----+-------------+------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+------------+------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | menu_links | ALL  | router_path   | NULL | NULL    | NULL |  757 | Using where |
+----+-------------+------------+------+---------------+------+---------+------+------+-------------+

Other queries in the slow query log include commands like SHOW DATABASES!

Best Answer

Before you try to change the configuration file, the first thing to check is index use. This part:

JOINS
Current join_buffer_size = 4.00 M
You have had 1117666 queries where a join could not use an index properly
You have had 207168 joins without keys that check for key usage after each row

suggests that many queries cannot use indexes, probably because there are no indexes to use.
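
You can read the raw counters behind that report yourself, from the phpMyAdmin Status tab or directly in SQL, for example:

SHOW GLOBAL STATUS LIKE 'Select_full_join';       -- joins that did a full table scan because no index was usable
SHOW GLOBAL STATUS LIKE 'Select_range_check';     -- joins without keys that re-check key usage for each row
SHOW GLOBAL STATUS LIKE 'Handler_read_rnd_next';  -- row reads done while scanning tables sequentially

High and steadily growing values here usually point to missing or unusable indexes rather than a configuration problem.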

Get the slow query log, check which queries are slow, and test whether indexes can be added to improve their performance.
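
On shared hosting you usually can't change these settings, but you can normally still read them, to confirm that a slow query log is actually being written and what its threshold is. A minimal check, assuming your MySQL user is allowed to read global variables and status:

SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';   -- enabled or not, and which file it is written to
SHOW GLOBAL VARIABLES LIKE 'long_query_time';   -- queries slower than this many seconds are logged
SHOW GLOBAL STATUS LIKE 'Slow_queries';         -- the counter phpMyAdmin flags as Slow_queries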

General advice: first, have indexes on all columns that are used in JOINs; second, add indexes on columns that are used in WHERE, GROUP BY and ORDER BY clauses. For this you may need compound indexes, and of course you can't create an index for every column combination: that would take too much space and would make INSERT, DELETE and UPDATE statements much slower. So you'll have to find the most common queries and optimize those first.
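
To make that concrete, here is an illustrative sketch only; dr_example and its columns below are made-up names, not part of your Drupal schema. The pattern is: look at the columns a slow query filters and sorts on, check which indexes already exist, and add a compound index that covers them:

-- see which indexes a table already has (dr_menu_links is from your question)
SHOW INDEX FROM dr_menu_links;

-- hypothetical slow query: filters on status and type, sorts on created
EXPLAIN SELECT nid, title
FROM dr_example
WHERE status = 1 AND type = 'page'
ORDER BY created DESC;

-- a compound index covering the WHERE columns plus the ORDER BY column
-- lets MySQL filter and sort without scanning the whole table
ALTER TABLE dr_example
  ADD INDEX idx_status_type_created (status, type, created);

-- re-run the EXPLAIN: key should now show idx_status_type_created
-- instead of NULL, and type should no longer be ALL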

There are some tools/services that can help you identify the bottlenecks faster and more precisely. One such tool is pt-query-digest from Percona Toolkit (the successor of mk-query-digest from Maatkit).

A slow query that needs 10 seconds to complete is not good, but it doesn't really hurt overall performance if it is executed once per hour. A query that needs 60 ms can be really bad if it is a simple UPDATE executed several times per second. These tools help you identify such cases because they analyze the logs and report total running time, the number of times each query was run, and various other figures.


For your example query:

SELECT menu_links.link_path AS link_path
     , menu_links.mlid AS mlid
     , menu_links.router_path AS router_path
     , menu_links.updated AS updated
FROM
    dr_menu_links menu_links
WHERE (  (updated = '1') 
      OR (  (router_path NOT IN  ( 'rss.xml'  , 'node'         , 'checkout'
                                 , 'import'   , 'image_captcha', 'my-favourites'
                                 , 'my-orders', 'print'        , 'search' 
                                 ) ) 
        AND (external = '0') 
        AND (customized = '1') 
         )
      )

Possible good indexes are one on (updated) and one on (external, customized, router_path).
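
A minimal sketch of how those could be created and verified; the index names are arbitrary, and depending on the column types (a long VARCHAR for router_path, for instance) you may need an index prefix length, so check which indexes already exist on the table first:

ALTER TABLE dr_menu_links
  ADD INDEX idx_updated (updated),
  ADD INDEX idx_ext_cust_router (external, customized, router_path);

-- then re-run the EXPLAIN from the question: key may now show one of the
-- new indexes (or an index_merge of both) instead of NULL, and the rows
-- estimate should drop; on a table of only ~757 rows the optimizer may
-- still prefer a full scan, which is fine at that size
EXPLAIN SELECT menu_links.link_path, menu_links.mlid
     , menu_links.router_path, menu_links.updated
FROM dr_menu_links menu_links
WHERE (updated = '1')
   OR (    router_path NOT IN ('rss.xml', 'node', 'checkout', 'import'
                              , 'image_captcha', 'my-favourites'
                              , 'my-orders', 'print', 'search')
       AND external = '0'
       AND customized = '1');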