You should just disable the query cache with
[mysqld]
query_cache_size = 0
and then restart mysql. Why would I suggest that?
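Once the server is back up, you can confirm the cache is actually off by checking the relevant variables and counters (a quick sanity check; the Qcache_* counters should stop moving once the cache is disabled):

```sql
-- query_cache_size should report 0 after the restart
SHOW GLOBAL VARIABLES LIKE 'query_cache%';

-- These counters should no longer increase once the cache is off
SHOW GLOBAL STATUS LIKE 'Qcache%';
```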
The Query Cache will always butt heads with InnoDB. It would be nice if InnoDB's MVCC would let queries be served from the query cache if modifications do not affect repeatable reads for other transactions. Unfortunately, InnoDB just does not do that. Apparently, you have a lot of queries that get invalidated rather quickly and are probably not being reused.
For InnoDB under MySQL 4.0, the query cache was disabled for transactions. For MySQL 4.1+, InnoDB plays traffic cop when allowing access to the query cache on a per-table basis.
From the perspective of your question, I would say that the justification of removing the query cache is not so much the overhead, but how InnoDB manages it.
For more information on how InnoDB interacts with the query cache, please read pages 213-215 of the book "High Performance MySQL (Second Edition)".
If all or the majority of your data is MyISAM, you could go with your original idea of using SQL_NO_CACHE.
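For reference, SQL_NO_CACHE goes right after the SELECT keyword and keeps just that one query's result out of the cache (the table and column names below are only placeholders):

```sql
-- Bypass the query cache for this single query;
-- the rest of the workload can still use the cache.
SELECT SQL_NO_CACHE id, name
FROM mytable
WHERE last_updated > NOW() - INTERVAL 1 DAY;
```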
If you have a mix of InnoDB and MyISAM, you will have to find the right balance for your application based on how high your cache misses are. In fact, pages 209-210 of the same book point out reasons for cache misses :
- The query is not cacheable, either because it contains a nondeterministic construct (such as CURRENT_DATE) or because its result set is too large to store. Both types of uncacheable queries increment the Qcache_not_cached status variable.
- The server has never seen the query before, so it never had the chance to cache its result.
- The query's result was previously cached, but the server removed it. This can happen because there wasn't enough memory to keep it, because someone instructed the server to remove it, or because it was invalidated.
and the root causes of high cache misses with few uncacheable queries may be :
- The query cache is not warm yet. That is, the server hasn't had a chance to fill the cache with result sets.
- The server is seeing queries it hasn't seen before. If you do not have a lot of repeated queries, this can happen even after the cache is warmed up.
- There are a lot of cache invalidations.
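One rough way to gauge the misses is to compare Qcache_hits against Com_select (a sketch; Qcache_hits counts SELECTs answered from the cache, while Com_select counts SELECTs that actually executed, i.e. cache misses):

```sql
SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SHOW GLOBAL STATUS LIKE 'Com_select';

-- Approximate hit rate: Qcache_hits / (Qcache_hits + Com_select)
```

A hit rate near zero means the cache is doing nothing for you but still taxing every write.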
UPDATE 2012-09-06 10:10 EDT
Looking at your latest updated info, you have query_cache_limit
set to 1048576 (1M). This limits any single result set to 1M. If you retrieve anything bigger, it will simply not be cached. And while you have query_cache_size
set to 104857600 (100M), that only allows for 100 such cached result sets in a perfect world. If you perform hundreds of queries, fragmentation will set in rather quickly. You also have 4096 (4K) as the minimum allocation unit for a result set (query_cache_min_res_unit). Unfortunately, mysql has no internal mechanism for defragmenting the query cache.
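You can get a feel for how fragmented the cache is from the block counters (a sketch; the thresholds are judgment calls, not hard rules):

```sql
-- Many free blocks relative to total blocks suggests fragmentation:
-- free memory exists, but it is scattered in pieces too small to reuse.
SHOW GLOBAL STATUS LIKE 'Qcache_free_blocks';
SHOW GLOBAL STATUS LIKE 'Qcache_total_blocks';
SHOW GLOBAL STATUS LIKE 'Qcache_free_memory';
```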
If you must have the query cache and you have so much RAM, you could execute the following:
SET GLOBAL query_cache_size = 0;
SELECT SLEEP(60);
SET GLOBAL query_cache_size = 1024 * 1024 * 1024;
in order to purge the query cache. You lose all cached results, so run these lines during off-peak hours.
I would also assign the following:
- query_cache_size = 1G
- query_cache_limit = 8M
That leaves 23G of RAM. I would raise the following:
- innodb_buffer_pool_size = 12G
- key_buffer_size = 4G
That leaves 7G. This should be adequate for OS and DB Connections.
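Put together, the my.cnf changes recommended above would look something like this (values taken from the numbers in this answer; adjust to your own hardware):

```ini
[mysqld]
# Query cache: 1G total, 8M cap per result set
query_cache_size        = 1G
query_cache_limit       = 8M
# InnoDB Buffer Pool caches both data and index pages
innodb_buffer_pool_size = 12G
# Key buffer caches MyISAM index pages only
key_buffer_size         = 4G
```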
Keep in mind that the key buffer caches only MyISAM index pages, while the InnoDB Buffer Pool caches data and indexes.
One more recommendation: upgrade to MySQL 5.5 so you can configure InnoDB to use multiple CPUs and multiple threads for read/write I/O.
See my earlier posts on using MySQL 5.5 in conjunction with accessing multiple CPUs for InnoDB.
UPDATE 2012-09-06 14:56 EDT
My method for clearing the query cache is rather extreme because it discards all cached data and allocates a completely different segment of RAM. As you pointed out in your comment, FLUSH QUERY CACHE
or even RESET QUERY CACHE
would be better. For clarification, when I said "no internal mechanism," I meant exactly that: defragmentation is needed and has to be done manually. It would need to be crontab'd.
If you do DML (INSERTs, UPDATEs, DELETEs) on InnoDB more often than on MyISAM, I would say remove the query cache altogether, as I said at the beginning.
I hate the checking permissions issue.
You may have to disable key checks before the DROP DATABASE:
SET unique_checks = 0;
SET foreign_key_checks = 0;
SET GLOBAL innodb_stats_on_metadata = 0;
DROP DATABASE db_madeintouch;
SET GLOBAL innodb_stats_on_metadata = 1;
SET foreign_key_checks = 1;
SET unique_checks = 1;
UPDATE 2013-04-15 18:04 EDT
I just noticed you have innodb_file_per_table OFF. What gives?
- You currently have all the InnoDB data and the corresponding index sitting in a single file.
- Any CREATE TABLE statement must make data dictionary updates and look for space (small but annoying in this instance)
- Internal Fragmentation of ibdata1
- Dropping a table means scanning the table and its indexes to lock the pages being freed. With data and index pages possibly fragmented, this costs spindle movement, seek time, and latency.
- See Pictorial Representation of ibdata1 to see everything that goes into ibdata1
Recommendation : Remove all Data and Index Pages from ibdata1
This will give ibdata1 a breather to handle just data dictionary and MVCC management. In addition, ibdata1 will stay rather lean and mean and can be read more quickly.
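Enabling it is one line in my.cnf, but keep in mind the setting only affects tables created (or rebuilt) after the change; existing tables stay inside ibdata1 until rebuilt, and ibdata1 itself never shrinks without the full dump-and-reload cleanup:

```ini
[mysqld]
# New InnoDB tables get their own .ibd file instead of landing in ibdata1
innodb_file_per_table = 1
```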
You will need to perform the InnoDB Infrastructure Cleanup. I wrote out all the steps back on October 29, 2010 in StackOverflow.
UPDATE 2013-04-22 08:10 EDT
Three suggestions
SUGGESTION 1 : I just noticed something else. You are using an ancient version of MySQL (5.0.45). You should think about upgrading to MySQL 5.6.11, as it performs significantly faster than MySQL 5.5 and way faster than MySQL 5.0.
SUGGESTION 2 : You should also go ahead and implement the InnoDB Infrastructure Cleanup.
SUGGESTION 3 : You should also check the disk itself. If the data is sitting on a RAID10 set, one of the disks may have an issue. Check the disk controller's battery as well, because a failing battery can degrade disk caching and hurt read performance.
Please look back at my answer again. Issue #1 might be your one and only problem. Why ?
What can be said here?
There are 2269 queries in the Query Cache using either 2 or 3 blocks each (not accounting for fragmentation). The Query Cache is 88.0362% (1 - (622/5199)) used. The largest result set that can live in the Query Cache is 256M, and the smallest is 4K. So, queries are definitely piling up in it as SELECT queries retrieve result sets that were not cached before.
If any SELECT query returns more than 256M or less than 4K, none of the data retrieved will be in the query cache. Consequently, running that same SELECT query (the one bigger than 256M or smaller than 4K) requires reading the data from disk all over again.
All you could really do is lower query_cache_min_res_unit to its minimum value. That's 512, which is 0.5K. You can do that dynamically by running
SET GLOBAL query_cache_min_res_unit = 512;
Then, go monitor Qcache_hits. If this solves it for you, go add this to
my.cnf
under the [mysqld] heading:
query_cache_min_res_unit = 512
You don't have to restart mysql since you changed it dynamically.
If this does not change anything, then you need to examine the nature of your app. If your app retrieves data by some SELECT query and you never run that same SELECT query again, Qcache_hits cannot be anything other than zero. If Qcache_hits is always going to be 0, that's why I had to bring up Issue #2. There is no sense in having the query cache on if you never cache anything; it will just nominally slow down InnoDB as it checks for changed data. You can be sure the data retrieved by your SELECTs will be sitting in the InnoDB Buffer Pool anyway.
You may need to spend some time examining the size of all your result sets and tuning the query cache to behave and dwell in peace with InnoDB (see Why query_cache_type is disabled by default start from MySQL 5.6?).