I've been having some performance issues on my production server. The main problem, I think, is the table pricelist_entries, which has approximately 6 million rows; it holds all of the products available. The query below usually takes about a second to execute, but ever since pricelist_entries grew past 3 million rows the execution time has started to increase. The weird thing is that sometimes the same query takes 10 seconds, other times 1 second. At first I thought it might be the query cache, but I disabled that and I still get the same varying results. What I can see from the two EXPLAINs is that when the query is fast it only accesses about 5000 rows, while when it is slow it seems to do an almost full table scan. I can't figure out why this happens. Any help is appreciated.
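For reference, this is roughly how I compared runs with the cache out of the picture (a simplified sketch — my real query is longer; `SQL_NO_CACHE` is a standard MySQL hint that bypasses the query cache, and the join condition is the one from my schema):

```sql
-- Bypass the query cache explicitly, then compare plans across runs.
EXPLAIN SELECT SQL_NO_CACHE *
FROM pricelist_entries
JOIN materials ON materials.article_nr_lev = pricelist_entries.peArtnr;
```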
MySQL Explain gives different result for same query, query performance changes
mysql, performance
Best Answer
Thanks for confirming that `innodb_buffer_pool_size` is OK. Here are some ideas; I don't know which will be best:

Plan A
Plan B
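For anyone verifying the same buffer-pool point, these standard commands show the configured size and whether reads are being served from memory or disk:

```sql
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Innodb_buffer_pool_reads counts reads that missed the pool and hit disk;
-- Innodb_buffer_pool_read_requests counts total logical reads.
-- A steadily rising Innodb_buffer_pool_reads means the working set
-- does not fit in the pool.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
```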
Please provide `SHOW CREATE TABLE` for each table; there could be some more clues there.

More:
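(For the two tables in the join, that request would be:)

```sql
SHOW CREATE TABLE pricelist_entries;
SHOW CREATE TABLE materials;
```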
`materials.article_nr_lev = pricelist_entries.peArtnr` -- Whenever JOINing, the datatypes must be the same (or 'close'). Those two values are probably close enough -- same collation (mandatory) and both `VARCHAR`. But they are of different lengths; I think that is 'close enough'. However, do you sometimes need 255 characters, and sometimes only 50? For various subtle reasons (having to do with the execution of queries), it is better to keep `VARCHAR` sizes smaller (yet large enough to be safe). I suspect the 255 could be changed to 50 without harm.

Also, you have a lot of 4-byte `INT` fields that could probably be turned into smaller fields (eg, 2-byte `SMALLINT UNSIGNED`). Shrinking table sizes often leads to improved performance, albeit minor.

`category int(11) NOT NULL COMMENT '0 - Inget val 1 - Alkov 2 - ...` -- Consider using `ENUM`. That would avoid remembering the mapping. (And take 1 byte instead of 4.)

Be careful about `CHARACTER SET` -- some columns are `dec8`, some are `utf8`. There may be issues with comparisons, conversions, etc. (I haven't noticed any problems with the query in question.)
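A sketch of those schema changes, assuming it is `materials.article_nr_lev` that is the `VARCHAR(255)` (it may be on either side of the join -- check your `SHOW CREATE TABLE` output) and that the `ENUM` values come from the column comment. Note that an `ALTER` on a 6-million-row table rebuilds it, so expect it to take a while:

```sql
-- Confirm nothing is longer than 50 characters before shrinking.
SELECT MAX(CHAR_LENGTH(article_nr_lev)) FROM materials;

-- Shrink the VARCHAR (hypothetical: adjust table/column to your schema).
ALTER TABLE materials
  MODIFY article_nr_lev VARCHAR(50) NOT NULL;

-- Replace the commented INT with an ENUM; fill in the rest of the
-- values from the column comment.
ALTER TABLE pricelist_entries
  MODIFY category ENUM('Inget val', 'Alkov' /* ... remaining values */) NOT NULL;
```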