YOUR QUERY
SELECT post.postid, post.attach FROM newbb_innopost AS post WHERE post.threadid = 51506;
At first glance, that query should touch only 1.1597% (62510 out of 5390146 rows) of the table. Given the key distribution for threadid 51506, it should be fast.
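The selectivity figure can be checked directly from the row counts quoted above:

```python
# Selectivity of threadid = 51506, using the row counts given above.
matching_rows = 62_510
total_rows = 5_390_146

selectivity_pct = matching_rows / total_rows * 100
print(f"{selectivity_pct:.4f}% of the table")  # ~1.1597%
```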
REALITY CHECK
No matter which version of MySQL you use (Oracle, Percona, MariaDB), none of them can escape the one enemy they all have in common: the InnoDB architecture.
CLUSTERED INDEX
Please keep in mind that each threadid entry has a primary key attached. This means that when you read from a secondary index, InnoDB must do a primary key lookup within the ClusteredIndex (internally named gen_clust_index). In the ClusteredIndex, each InnoDB page contains both data and PRIMARY KEY index info. See my post Best of MyISAM and InnoDB for more info.
REDUNDANT INDEXES
You have a lot of clutter in the table because some indexes share the same leading columns. MySQL and InnoDB have to navigate through that index clutter to reach the needed BTREE nodes. You should reduce the clutter by running the following:
ALTER TABLE newbb_innopost
DROP INDEX threadid,
DROP INDEX threadid_2,
DROP INDEX threadid_visible_dateline,
ADD INDEX threadid_visible_dateline_index (`threadid`,`visible`,`dateline`,`userid`)
;
Why strip down these indexes?
- The first three indexes all start with threadid.
- threadid_2 and threadid_visible_dateline start with the same three columns.
- threadid_visible_dateline does not need postid, since it's the PRIMARY KEY and is already embedded in the index.
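The rule applied above (an index is redundant if another index starts with the same columns) can be checked mechanically. A minimal sketch; the column lists below are assumptions reconstructed from the description above, and in practice you would fetch them from SHOW INDEX or information_schema.statistics:

```python
# Flag indexes whose column list is a leading prefix of another index.
# Column lists are assumed from the discussion above, not fetched live.
indexes = {
    "threadid": ["threadid"],
    "threadid_2": ["threadid", "visible", "dateline"],
    "threadid_visible_dateline": ["threadid", "visible", "dateline", "postid"],
}

def redundant(indexes):
    """Return the set of index names that are leading prefixes of another index."""
    out = set()
    for a in indexes:
        for b in indexes:
            if a != b and indexes[b][: len(indexes[a])] == indexes[a]:
                out.add(a)  # 'a' is a leading prefix of 'b' -> redundant
    return out

print(redundant(indexes))  # the two indexes worth dropping
```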
BUFFER CACHING
The InnoDB Buffer Pool caches data and index pages. MyISAM only caches index pages.
Just in this area alone, MyISAM does not waste time caching data. That's because it's not designed to cache data. InnoDB caches every data page and index page (and its grandmother) it touches. If your InnoDB Buffer Pool is too small, you could be caching pages, invalidating pages, and removing pages all in one query.
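The churn described above can be sketched with a toy LRU pool: when one query's working set exceeds the pool, pages are evicted as fast as they are loaded. (Illustrative only; the real InnoDB buffer pool uses a more sophisticated midpoint-insertion LRU.)

```python
from collections import OrderedDict

class TinyBufferPool:
    """Toy LRU cache standing in for the InnoDB buffer pool."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.evictions = 0

    def touch(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)  # hit: mark most-recently used
            return
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)   # miss + full pool: evict LRU page
            self.evictions += 1
        self.pages[page_id] = True

# A "query" touching 1000 distinct pages against a 100-page pool
# caches, invalidates, and evicts continuously, all in one pass.
pool = TinyBufferPool(capacity=100)
for page in range(1000):
    pool.touch(page)
print(pool.evictions)  # 900 evictions in a single scan
```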
TABLE LAYOUT
You could shave some space off each row by revisiting importthreadid and importpostid. You have them as BIGINTs, so together they take up 16 bytes (8 bytes each) in the ClusteredIndex per row.
You should run this
SELECT importthreadid,importpostid FROM newbb_innopost PROCEDURE ANALYSE();
This will recommend what data types these columns should be for the given dataset.
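To see what shrinking those two columns could buy, here is the arithmetic, using the row count from the query above and assuming ANALYSE() says the values fit in a plain INT (4 bytes instead of 8):

```python
rows = 5_390_146      # table row count from the query above
bigint_pair = 2 * 8   # importthreadid + importpostid as BIGINT
int_pair = 2 * 4      # the same pair as INT, if the values fit

saved_bytes = rows * (bigint_pair - int_pair)
print(f"{saved_bytes / 1024**2:.1f} MB saved in the ClusteredIndex")
```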
CONCLUSION
MyISAM has a lot less to contend with than InnoDB, especially in the area of caching.
While you revealed the amount of RAM (32GB) and the version of MySQL (Server version: 10.0.12-MariaDB-1~trusty-wsrep-log mariadb.org binary distribution, wsrep_25.10.r4002), there are still other pieces to this puzzle you have not revealed:
- The InnoDB settings
- The Number of Cores
- Other settings from my.cnf
If you can add these things to the question, I can further elaborate.
UPDATE 2014-08-28 11:27 EDT
You should increase threading
innodb_read_io_threads = 64
innodb_write_io_threads = 16
innodb_log_buffer_size = 256M
I would consider disabling the query cache (See my recent post Why query_cache_type is disabled by default start from MySQL 5.6?)
query_cache_size = 0
I would preserve the Buffer Pool
innodb_buffer_pool_dump_at_shutdown=1
innodb_buffer_pool_load_at_startup=1
Increase purge threads (if you do DML on multiple tables)
innodb_purge_threads = 4
GIVE IT A TRY !!!
Observations:
Version: 5.5.41-0ubuntu0.14.04.1
64 GB of RAM
Uptime = 3d 03:40:44
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues
Increase innodb_buffer_pool_size to at least 1G (not more than 45G).
Sounds like most of your 64GB is unused? Or do you expect a huge growth in data?
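The 1G floor and ~45G ceiling come from a common rule of thumb, not an official formula: cover the data with some headroom, but leave roughly 30% of RAM for the OS and everything else. A sketch, using the data size reported by the tuner output further down (873M):

```python
ram_mb = 64 * 1024   # 64 GB of RAM, per the observations above
data_mb = 873        # current InnoDB data size from the tuner report

cap_mb = int(ram_mb * 0.70)  # ~45G ceiling: keep ~30% of RAM free
recommended_mb = min(max(1024, int(data_mb * 1.2)), cap_mb)
print(f"recommend ~{recommended_mb} MB (ceiling {cap_mb} MB)")
```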
Turn off the Query Cache; it does not seem to be useful:
query_cache_size = 0
query_cache_type = OFF
Set long_query_time = 2 and turn on the slowlog.
After a day, run pt-query-digest to find the worst queries.
Then work on optimizing them. (Composite indexes, reformulating queries, etc. Ask for help if needed.)
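What pt-query-digest does, in miniature: normalize each statement to a fingerprint (literals replaced by placeholders), then rank fingerprints by total time. A toy sketch on made-up slowlog entries; the real tool parses the slowlog format itself and reports far more detail:

```python
import re
from collections import defaultdict

def fingerprint(sql):
    """Crude normalization: lowercase, collapse string and numeric literals to '?'."""
    sql = sql.lower()
    sql = re.sub(r"'[^']*'", "?", sql)
    sql = re.sub(r"\b\d+\b", "?", sql)
    return re.sub(r"\s+", " ", sql).strip()

# (statement, seconds) pairs, as they might come out of a parsed slowlog
entries = [
    ("SELECT * FROM newbb_innopost WHERE threadid = 51506", 4.2),
    ("SELECT * FROM newbb_innopost WHERE threadid = 99", 3.9),
    ("SELECT COUNT(*) FROM users", 0.1),
]

totals = defaultdict(float)
for sql, secs in entries:
    totals[fingerprint(sql)] += secs

worst = max(totals, key=totals.get)
print(worst, totals[worst])  # the fingerprint costing the most total time
```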
Supporting details and other suggestions
( innodb_buffer_pool_size / _ram ) = 128M / 65536M = 0.20% -- % of RAM used for InnoDB buffer_pool
( (key_buffer_size / 0.20 + innodb_buffer_pool_size / 0.70) / _ram ) = (16M / 0.20 + 128M / 0.70) / 65536M = 0.40% -- Most of available ram should be made available for caching.
-- http://mysql.rjweb.org/doc.php/memory
[!!] InnoDB buffer pool / data size: 128.0M/873.0M
So innodb_buffer_pool_size should be at least 1G.
[!!] InnoDB Write buffer efficiency: 0.00% (0 hits/ 1 total)
0 out of 1 -- bogus to mark it "!!"
( Innodb_pages_written / Innodb_buffer_pool_write_requests ) = 2,068,247 / 5340758 = 38.7% -- Write requests that had to hit disk
-- Check innodb_buffer_pool_size
( innodb_file_per_table ) = OFF -- Put each file in its own tablespace
-- (Mildly recommended, especially for large tables)
( Qcache_hits / Qcache_inserts ) = 1,765,289 / 2,509,742 = 0.703 -- Hit to insert ratio -- high is good
-- Consider turning off the query cache.
( (query_cache_size - Qcache_free_memory) / Qcache_queries_in_cache / query_alloc_block_size ) = (16M - 14960424) / 1477 / 8192 = 0.15 -- query_alloc_block_size vs formula
-- Adjust query_alloc_block_size
[!!] Query cache prunes per day: 15154
( Created_tmp_disk_tables / (Created_tmp_disk_tables + Created_tmp_tables) ) = 3,090 / (3090 + 4930) = 38.5% -- Percent of temp tables that spilled to disk
-- maybe increase tmp_table_size and max_heap_table_size; avoid blobs, etc.
( Select_scan ) = 459,980 / 272444 = 1.7 /sec -- full table scans
-- Add indexes / optimize queries (unless they are tiny tables)
( Select_scan / Com_select ) = 459,980 / 2518409 = 18.3% -- % of selects doing full table scan. (May be fooled by Stored Routines.)
-- Add indexes / optimize queries
( binlog_format ) = STATEMENT -- STATEMENT/ROW/MIXED. ROW is preferred; it may become the default.
( log_slow_queries ) = OFF -- Whether to log slow queries. (Before 5.1.29, 5.6.1)
( slow_query_log ) = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10.000000 = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
( Connections ) = 2,305,560 / 272,444 = 8.5 /sec -- Connections
-- Increase wait_timeout; use pooling?
( Max_used_connections ) = 66 -- How many simultaneous connections you had (highwater mark).
[!!] Highest connection usage: 100% (152/151)
Those disagree; perhaps you restarted?
( open_files_limit ) = 1,024 -- ulimit -n
-- To allow more files, change ulimit or /etc/security/limits.conf or in sysctl.conf (kern.maxfiles & kern.maxfilesperproc) or something else (OS dependent)
-- If you get more tables, raising this will be important.
( Opened_tables ) = 43/hour
[!!] Table cache hit rate: 6% (400 open / 5K opened)
These disagree; perhaps there is no problem here.
[!!] Key buffer used: 18.3% (3M used / 16M cache)
[!!] Read Key buffer hit rate: 91.7% (743 cached / 62 reads)
Not a problem since there is virtually no MyISAM usage and key_buffer_size is only a tiny percentage of RAM.
19 issues flagged, out of 133 computed Variables/Status/Expressions
My take on Tuner's recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance -- waste of time!
Enable the slow query log to troubleshoot bad queries -- yes
Reduce or eliminate persistent connections to reduce connection usage -- ok
Adjust your join queries to always utilize indexes -- yes (use slowlog to find them)
When making adjustments, make tmp_table_size/max_heap_table_size equal -- ok
Reduce your SELECT DISTINCT queries which have no LIMIT clause -- yawn
Increase table_open_cache gradually to avoid file descriptor limits -- ok, but may not be critical
Read this before increasing table_open_cache over 64: http://bit.ly/1mi7c4C
Beware that the open_files_limit (1024) variable should be greater than table_open_cache (400) -- noted; it is
max_connections (> 151) -- No; figure out why there are so many connections
wait_timeout ( 16M) -- probably not important
join_buffer_size (> 128.0K, or always use indexes with joins) -- might help, might not
tmp_table_size (> 16M) -- first look for other ways to improve queries
max_heap_table_size (> 16M)
table_open_cache (> 400)
innodb_buffer_pool_size (>= 872M) if possible. -- yes
Best Answer
Analysis of STATUS and VARIABLES
Observations:
The More Important Issues:
- Increase innodb_io_capacity to 500.
- Lower innodb_lru_scan_depth to 64.
- Why innodb_change_buffering = none? Normally it is a good feature.
- Lots of active threads; was the server melting down as you ran SHOW GLOBAL STATUS?
- Com_admin_commands is huge! What admin commands are you doing? Possibly something right before or right after each INSERT? (About 300/sec for each)
- 80 queries per minute are taking more than 10 seconds. Turn on the slowlog, etc.
Are you using Galera?
Details and other observations:
( Key_reads + Key_writes + Innodb_pages_read + Innodb_pages_written + Innodb_dblwr_writes + Innodb_buffer_pool_pages_flushed ) = (0 + 0 + 273050 + 951182390 + 0 + 951175516) / 4547760 = 418 /sec
-- IOPs? -- If the hardware can handle it, set innodb_io_capacity (now 200) to about this value.
( ( Key_reads + Key_writes + Innodb_pages_read + Innodb_pages_written + Innodb_dblwr_writes + Innodb_buffer_pool_pages_flushed ) / innodb_io_capacity / Uptime ) = ( 0 + 0 + 273050 + 951182390 + 0 + 951175516 ) / 200 / 4547760 = 209.2%
-- This may be a metric indicating whether innodb_io_capacity is set reasonably. -- Increase innodb_io_capacity (now 200) if the hardware can handle it.
( innodb_buffer_pool_instances ) = 64
-- For large RAM, consider using 1-16 buffer pool instances, not allowing less than 1GB each. Also, not more than, say, twice the number of CPU cores. -- Recommend no more than 16.
( innodb_lru_scan_depth * innodb_buffer_pool_instances ) = 1,024 * 64 = 65,536
-- A metric of CPU usage. -- Lower either number.
( innodb_lru_scan_depth * innodb_page_cleaners ) = 1,024 * 32 = 32,768
-- Amount of work for page cleaners every second. -- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixable by lowering lru_scan_depth: consider 1000 / innodb_page_cleaners (now 32). Also check for swapping.
( innodb_page_cleaners / innodb_buffer_pool_instances ) = 32 / 64 = 0.5
-- Recommend setting innodb_page_cleaners (now 32) to innodb_buffer_pool_instances (now 64).
( innodb_lru_scan_depth ) = 1,024
-- "InnoDB: page_cleaner: 1000ms intended loop took ..." may be fixed by lowering lru_scan_depth.
( innodb_io_capacity_max / innodb_io_capacity ) = 2,000 / 200 = 10
-- Capacity: max/plain -- Recommend 2. Max should be about equal to the IOPs your I/O subsystem can handle. (If the drive type is unknown, 2000/200 may be a reasonable pair.)
( Innodb_buffer_pool_reads + Innodb_buffer_pool_pages_flushed ) = (242249 + 951175516) / 4547760 = 209 /sec
-- InnoDB I/O -- Increase innodb_buffer_pool_size (now 257698037760)?
( Innodb_buffer_pool_pages_flushed ) = 951,175,516 / 4547760 = 209 /sec
-- Writes (flushes) -- Increase innodb_buffer_pool_size (now 257698037760)?
( innodb_change_buffering ) = none
-- Pre-5.6.11 / 5.5.31, there was a bug that made ="changes" a safer option.
( innodb_doublewrite ) = OFF
-- Extra I/O, but extra safety in a crash. -- OFF is OK for FusionIO, Galera, Slaves, ZFS.
( Innodb_os_log_written ) = 1,867,005,211,648 / 4547760 = 410532 /sec
-- This is an indicator of how busy InnoDB is. -- Very idle or very busy InnoDB.
( Innodb_log_writes ) = 1,029,874,951 / 4547760 = 226 /sec
( innodb_flush_method ) = fsync
-- How InnoDB should ask the OS to write blocks. Suggest O_DIRECT or O_ALL_DIRECT (Percona) to avoid double buffering. (At least for Unix.) See chrischandler for the caveat about O_ALL_DIRECT.
( default_tmp_storage_engine ) =
( innodb_flush_neighbors ) = 1
-- A minor optimization when writing blocks to disk. -- Use 0 for SSD drives; 1 for HDD.
( ( Innodb_pages_read + Innodb_pages_written ) / Uptime / innodb_io_capacity ) = ( 273050 + 951182390 ) / 4547760 / 200 = 104.6%
-- If > 100%, need more io_capacity. -- Increase innodb_io_capacity (now 200) if the drives can handle it.
( innodb_io_capacity ) = 200
-- I/O ops per second the disk can handle. 100 for slow drives; 200 for spinning drives; 1000-2000 for SSDs; multiply by RAID factor.
( sync_binlog ) = 0
-- Use 1 for added security, at some cost of I/O. =1 may lead to lots of "query end"; =0 may lead to "binlog at impossible position" and lose transactions in a crash, but is faster.
( innodb_print_all_deadlocks ) = OFF
-- Whether to log all deadlocks. -- If you are plagued with deadlocks, turn this on. Caution: if you have lots of deadlocks, this may write a lot to disk.
( max_connections ) = 15,934
-- Maximum number of connections (threads). Impacts various allocations. -- If max_connections (now 15934) is too high and various memory settings are high, you could run out of RAM.
( character_set_server ) = latin1
-- Charset problems may be helped by setting character_set_server (now latin1) to utf8mb4. That is the future default.
( local_infile ) = ON
-- local_infile (now ON) = ON is a potential security issue.
( tmp_table_size ) = 128M
-- Limit on size of MEMORY temp tables used to support a SELECT -- Decrease tmp_table_size (now 134217728) to avoid running out of RAM. Perhaps no more than 64M.
( (Com_insert + Com_update + Com_delete + Com_replace) / Com_commit ) = (1371247297 + 4324897 + 1 + 0) / 1 = 1.38e+9
-- Statements per commit (assuming all InnoDB) -- Low: might help to group queries together in transactions; High: long transactions strain various things.
( Select_full_join / Com_select ) = 3,469 / 49002 = 7.1%
-- % of selects that are indexless joins -- Add suitable index(es) to tables used in JOINs.
( Select_scan / Com_select ) = 83,957 / 49002 = 171.3%
-- % of selects doing a full table scan. (May be fooled by Stored Routines.) -- Add indexes / optimize queries.
( Com_insert + Com_delete + Com_delete_multi + Com_replace + Com_update + Com_update_multi ) = (1371247297 + 1 + 0 + 0 + 4324897 + 0) / 4547760 = 302 /sec
-- writes/sec -- 50 writes/sec + log flushes will probably max out the I/O write capacity of normal drives.
( Com_admin_commands ) = 1,350,052,920 / 4547760 = 296 /sec
-- Why so many admin statements?
( Com_admin_commands / Queries ) = 1,350,052,920 / 1387441904 = 97.3%
-- Percent of queries that are "admin" commands. -- What's going on?
( Com_set_option / Com_select ) = 3,505,412 / 49002 = 7153.6%
-- It seems 'wrong' to do more SETs than SELECTs.
( Com__biggest ) = Com_insert
-- Which of the "Com_" metrics is biggest. -- Normally it is Com_select (now 49002). If it is something else, it may be a sloppy platform, or may be something else.
( binlog_format ) = MIXED
-- STATEMENT/ROW/MIXED. -- ROW is preferred by 5.7 (10.3).
( slow_query_log ) = OFF
-- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10
-- Cutoff (seconds) for defining a "slow" query. -- Suggest 2.
( Slow_queries ) = 5,897,189 / 4547760 = 1.3 /sec
-- Frequency (slow queries per sec) -- Rework the slow ones; improve indexes; watch disk space for the slow log file.
( Max_used_connections ) = 597
-- High-water mark for connections -- Lots of inactive connections is OK; over 100 active connections is likely to be a problem. Max_used_connections (now 597) does not distinguish them; Threads_running (now 299) is instantaneous.
( Threads_running - 1 ) = 299 - 1 = 298
-- Active threads (concurrency when data collected) -- Optimize queries and/or schema.
( thread_pool_size ) = 40
-- Number of 'thread groups'. Limits how many threads can be executing at once. Probably should not be much bigger than the number of CPUs. -- Don't set much higher than the number of CPU cores.
You have the Query Cache half-off. You should set both query_cache_type = OFF and query_cache_size = 0. There is (according to a rumor) a 'bug' in the QC code that leaves some code on unless you turn off both of those settings.
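The headline IOPs figure in the details above is just the sum of the read/write counters divided by uptime; reproduced here with the numbers from the report:

```python
uptime = 4_547_760  # seconds, from the report above

counters = {
    "Key_reads": 0,
    "Key_writes": 0,
    "Innodb_pages_read": 273_050,
    "Innodb_pages_written": 951_182_390,
    "Innodb_dblwr_writes": 0,
    "Innodb_buffer_pool_pages_flushed": 951_175_516,
}

iops = sum(counters.values()) / uptime
print(f"{iops:.0f} IOPs")  # ~418/sec, versus innodb_io_capacity = 200
```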
Abnormally small:
Abnormally large:
Abnormal strings: