YOUR QUERY
SELECT post.postid, post.attach FROM newbb_innopost AS post WHERE post.threadid = 51506;
At first glance, that query should only touch 1.1597% (62,510 out of 5,390,146 rows) of the table. It should be fast given the key distribution of threadid 51506.
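As a sanity check (using the table and thread id from the question; your output will differ), EXPLAIN shows which index the optimizer actually picks and how many rows it estimates, and a plain COUNT(*) lets you compare that estimate against reality:

```sql
-- See which index the optimizer chooses and its row estimate
EXPLAIN
SELECT post.postid, post.attach
FROM newbb_innopost AS post
WHERE post.threadid = 51506;

-- Compare the optimizer's estimate against the actual row count
SELECT COUNT(*) FROM newbb_innopost WHERE threadid = 51506;
```

If the estimated rows are wildly off from the actual count, the index statistics may be stale (ANALYZE TABLE can refresh them).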
REALITY CHECK
No matter which build of MySQL you use (Oracle, Percona, MariaDB), none of them can escape the one enemy they all have in common: the InnoDB architecture.
CLUSTERED INDEX
Please keep in mind that each threadid entry has a primary key attached. This means that when you read from the index, InnoDB must do a primary key lookup within the ClusteredIndex (internally named gen_clust_index). In the ClusteredIndex, each InnoDB page contains both data and PRIMARY KEY index info. See my post Best of MyISAM and InnoDB for more info.
REDUNDANT INDEXES
You have a lot of clutter in the table because some indexes have the same leading columns. MySQL and InnoDB have to navigate through the index clutter to get to the needed BTREE nodes. You should reduce that clutter by running the following:
ALTER TABLE newbb_innopost
DROP INDEX threadid,
DROP INDEX threadid_2,
DROP INDEX threadid_visible_dateline,
ADD INDEX threadid_visible_dateline_index (`threadid`,`visible`,`dateline`,`userid`)
;
Why strip down these indexes?
- The first three indexes start with threadid
- threadid_2 and threadid_visible_dateline start with the same three columns
- threadid_visible_dateline does not need postid since it's the PRIMARY KEY and it's embedded
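Before dropping anything, you can confirm the overlap yourself with a read-only query against information_schema, which lists every index on the table with its columns in order:

```sql
-- Indexes sharing the same leading columns are candidates for removal
SELECT INDEX_NAME,
       GROUP_CONCAT(COLUMN_NAME ORDER BY SEQ_IN_INDEX) AS columns_in_index
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME   = 'newbb_innopost'
GROUP BY INDEX_NAME;
```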
BUFFER CACHING
The InnoDB Buffer Pool caches data and index pages. MyISAM only caches index pages.
Just in this area alone, MyISAM does not waste time caching data. That's because it's not designed to cache data. InnoDB caches every data page and index page (and its grandmother) it touches. If your InnoDB Buffer Pool is too small, you could be caching pages, invalidating pages, and removing pages all in one query.
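To get a rough sense of whether your Buffer Pool can hold the working set, you can compare the total InnoDB data-plus-index footprint against the configured pool size (a ballpark heuristic, not a hard rule):

```sql
-- Total InnoDB data + index size, in GB
SELECT ROUND(SUM(data_length + index_length) / POWER(1024,3), 2) AS innodb_gb
FROM information_schema.TABLES
WHERE engine = 'InnoDB';

-- Compare against the configured Buffer Pool size (in bytes)
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
```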
TABLE LAYOUT
You could shave off some space from each row by reconsidering importthreadid and importpostid. You have them as BIGINTs; together they take up 16 bytes in the ClusteredIndex per row.
You should run this
SELECT importthreadid,importpostid FROM newbb_innopost PROCEDURE ANALYSE();
This will recommend what data types these columns should be for the given dataset.
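If, for example, ANALYSE() reports that both columns fit comfortably in INT UNSIGNED (4 bytes instead of 8), a hypothetical follow-up would be:

```sql
-- Hypothetical: only run this if ANALYSE() confirms the values fit
ALTER TABLE newbb_innopost
    MODIFY importthreadid INT UNSIGNED,
    MODIFY importpostid   INT UNSIGNED;
```

That alone would shave 8 bytes per row out of the ClusteredIndex.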
CONCLUSION
MyISAM has a lot less to contend with than InnoDB, especially in the area of caching.
While you revealed the amount of RAM (32GB) and the version of MySQL (Server version: 10.0.12-MariaDB-1~trusty-wsrep-log mariadb.org binary distribution, wsrep_25.10.r4002), there are still other pieces to this puzzle you have not revealed:
- The InnoDB settings
- The number of cores
- Other settings from my.cnf
If you can add these things to the question, I can further elaborate.
UPDATE 2014-08-28 11:27 EDT
You should increase threading
innodb_read_io_threads = 64
innodb_write_io_threads = 16
innodb_log_buffer_size = 256M
I would consider disabling the query cache (See my recent post Why query_cache_type is disabled by default start from MySQL 5.6?)
query_cache_size = 0
I would preserve the Buffer Pool
innodb_buffer_pool_dump_at_shutdown=1
innodb_buffer_pool_load_at_startup=1
Increase purge threads (if you do DML on multiple tables)
innodb_purge_threads = 4
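Put together, the suggested settings would land in the [mysqld] section of my.cnf (values from above; tune them for your hardware, and note a restart is required for the io_threads settings):

```ini
[mysqld]
innodb_read_io_threads              = 64
innodb_write_io_threads             = 16
innodb_log_buffer_size              = 256M
query_cache_size                    = 0
innodb_buffer_pool_dump_at_shutdown = 1
innodb_buffer_pool_load_at_startup  = 1
innodb_purge_threads                = 4
```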
GIVE IT A TRY !!!
As long as you are dealing with two GA releases in the same series (5.6.y to 5.6.x), you should be okay, because that is not an unsupported operation.
To downgrade between General Availability (GA) status versions within the same release series, typically you just install the new binaries on top of the old ones and do not make any changes to the databases.
https://dev.mysql.com/doc/refman/5.6/en/downgrading.html
Or, better (I think) than just copying the new (old version) binaries over the old (new version) ones, use the mechanism I use, where /usr/local/mysql is a symlink to the directory where the live version's binaries are extracted, and data, inside that directory, is a symlink to the datadir.
This is a perfectly valid configuration and it allows both sets of binaries coexist on the same machine. I use the "Linux generic" binary tarball, which extracts the files into a directory named after the release and OS... as a DBA, I don't let the distro's package manager anywhere near my MySQL. Nobody decides when we upgrade or what version we run, other than me, so upgrades occur when planned and after testing.
With this setup, it's a quick operation to stop the service, move one symlink, and start the service, and I'm running the new version... and similarly quick to switch it back, if you have to. I still do it exactly this way, though I am a lot less edgy about the possibility of needing to roll back.
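The switch itself is just a symlink move. A sketch, with hypothetical version-directory names (adjust paths and the service command to your own layout):

```shell
# Hypothetical directory names from extracted "Linux generic" tarballs
cd /usr/local
service mysql stop
# Point the symlink at the other version's directory (-n avoids
# descending into the old symlink target)
ln -sfn mysql-5.6.20-linux-glibc2.5-x86_64 mysql
service mysql start
# Rolling back is the same operation, pointing back at the old directory
```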
You also need to run mysql_upgrade to finalize the upgrade process, but you don't typically have to do it immediately during a GA-to-GA release upgrade. You can do it after the server is back up, and, in fact, the server has to be running before you can do that part.
Of course, during any upgrade, it's critical to have a mysqldump of your data ready to reload in an emergency... but within the same release series, with GA versions, you should be able to pull this off... but also, you shouldn't need to. The biggest jump I recall doing in one shot was 5.1.34 to 5.1.69, with no ill effects, and of the dozens of machines I manage, I don't recall a 5.5 or 5.6 upgrade that I had any reason to roll back.
Another suggestion for before you upgrade is to sweep the entire server for cobwebs with mysqlcheck --check-upgrade --all-databases, which confirms that the tables are fully structurally compatible with the current version you're running... and then mysqlcheck --optimize --all-databases. The latter is recommended (by me) not because of what it does, but because it's likely to find sketchy structural issues that could cause problems that appear to be upgrade-related but were in fact latent corruption just waiting to surface. Optimize is time-consuming and locks each table while running, then unlocks it and moves to the next, so it's potentially disruptive; but with InnoDB, it also defragments each table, so you might reclaim some disk space in the process. Skip the optimize if you can't tolerate the potentially long locks it needs while it's running.
Yes, you are correct. I wrote about this before: How can I optimize this mySQL table that will need to hold 5+ million rows?
When it comes to bulk loading InnoDB tables, one of the most frequently ignored settings to disable is innodb_doublewrite.
InnoDB Architecture
Please note the doublewrite buffer inside the system tablespace file ibdata1. Every time a 16K page is written to an InnoDB table, the page is written to the doublewrite buffer first. All you need to do is disable it, load your data, and re-enable it.

STEP 01 : Disable Fast Shutdown
STEP 02 : Restart mysql without the double write buffer
STEP 03 : Load the InnoDB data
STEP 04 : Restart mysql
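The four steps above, sketched as a shell session (service names, my.cnf location, and the dump file are illustrative and vary by distro; this assumes root access and an init script that passes extra options through to mysqld):

```shell
# STEP 01 : Disable fast shutdown so InnoDB fully flushes on the way down
mysql -u root -p -e "SET GLOBAL innodb_fast_shutdown = 0;"

# STEP 02 : Restart mysql with the doublewrite buffer disabled
service mysql restart --innodb-doublewrite=0

# STEP 03 : Load the InnoDB data (hypothetical dump file)
mysql -u root -p mydb < mydata.sql

# STEP 04 : Restart mysql normally (doublewrite is re-enabled by default)
service mysql restart
```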
CAVEAT
You can disable the foreign key checks and the unique checks as well. Any standard mysqldump will do that for you already.
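You can see this in the header of any standard mysqldump output, which saves and disables both checks for the duration of the reload:

```sql
-- Emitted by mysqldump itself; the saved values are restored
-- at the end of the dump
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
```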