Run the CHECKSUM TABLE command against the table.
There is a caveat for this. According to the MySQL Documentation on CHECKSUM TABLE:
If the checksums for two tables are different, then it is almost
certain that the tables are different in some way. However, because
the hashing function used by CHECKSUM TABLE is not guaranteed to be
collision-free, there is a slight chance that two tables which are not
identical can produce the same checksum.
In other words, CHECKSUM TABLE is 99.99% certain, not 100%.
How can you exhaustively check every row of a table for matches and mismatches? You can run a great tool from Percona Toolkit called pt-table-sync. It checks for differences in tables between a Master and a Slave, as long as the table structure is identical.
For example, to sync the table db.tb1 on a Slave in relation to its Master, run pt-table-sync as follows:
pt-table-sync --print --sync-to-master h=SlaveIP,D=db,t=tb1 > SQLtoFixDifferences.sql
When you run this, the table db.tb1 on the Slave is compared to that of its Master using the Primary Key (or a Unique Key, if there is no Primary Key) of the Slave's db.tb1. The output will be a series of SQL commands, usually REPLACE INTO and DELETE FROM. The script SQLtoFixDifferences.sql is to be executed on the Slave. When done, the data in db.tb1 should be identical to its counterpart on the Master.
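To apply the generated statements on the Slave, one common approach is to feed the file to the mysql client (the host and credentials below are placeholders; review the generated SQL before running it):

```bash
# Apply the fix script produced by pt-table-sync --print
mysql -h SlaveIP -u root -p db < SQLtoFixDifferences.sql
```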
To verify, run these commands on both the Master and the Slave:
CHECKSUM TABLE db.tb1;
SELECT COUNT(1) FROM db.tb1;
The results should be identical on Master and Slave.
CAVEAT
I have never used pt-table-sync myself. I have been using mk-table-sync (the Maatkit tool from Percona) for years, and it has always worked for me. I fully trust that Percona's pt-table-sync is at least of the same, if not superior, quality.
Are you using InnoDB? If so, Percona XtraBackup is a great choice for automating backups while keeping the server operational and responsive. It also supports incremental backups to shorten the backup duration.
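As a sketch of a full-plus-incremental cycle, the commands look roughly like this (paths are placeholders; exact invocation depends on your XtraBackup version, and older releases use the innobackupex wrapper instead):

```bash
# Full (base) backup, taken while the server stays online
xtrabackup --backup --target-dir=/backups/base

# Later: incremental backup containing only pages changed since the base
xtrabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/base
```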
Another option is to have a second server running as a replication Slave and to back up that Slave. You can stop replication, take the backup, and resume, never impacting the production data.
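The stop/backup/resume cycle on such a Slave can be sketched like this (credentials and the dump options are placeholders; any backup method works in the middle step):

```bash
# On the backup Slave: pause replication so the data stays quiescent
mysql -e "STOP SLAVE;"

# Take the backup while the Slave is frozen
mysqldump --all-databases > /backups/slave.sql

# Resume replication; the Slave catches up from the Master
mysql -e "START SLAVE;"
```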
I personally use a combination of the two solutions: we have a backup Slave, and we use XtraBackup to back up that server, giving us consistent backups on demand without impacting the production load.
Best Answer
Don't even try.
InnoDB has years of optimization behind it. It persists data to disk and survives even unplanned power failures, thanks to its integrated logs.
Furthermore, because MEMORY uses table locks (as opposed to InnoDB's row locks), there are benchmarks showing InnoDB to be faster than MEMORY. There may be cases where MEMORY is faster than InnoDB, but between your moderately high write rate and the JOINs, InnoDB is very likely to have less interference between the two tasks.
If you want full persistence, you need to write to disk 500 times per second. Are you using SSDs? RAID? An ordinary HDD cannot sustain 500 flushes per second. If you have a fast enough drive, you may as well let InnoDB take care of everything for you.
InnoDB's automatic saving and restoring across power failures is very likely to be faster, and much less programming effort for you.
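If you do let InnoDB handle durability, the main knob is innodb_flush_log_at_trx_commit. A minimal my.cnf sketch (the values shown are the documented ones; pick based on how much data loss you can tolerate):

```ini
[mysqld]
# 1 = flush the redo log to disk at every commit (full durability, default)
# 2 = write at commit, flush about once per second
#     (a power failure can lose roughly the last second of transactions)
# 0 = write and flush about once per second (fastest, least durable)
innodb_flush_log_at_trx_commit = 1
```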