If you have three or more Slaves, I have a nice suggestion: Use one of the Slaves as a Volunteer to be cloned.
Here is a Topology:

    MAS --+--> SLV1
          |
          +--> SLV2
          |
          +--> SLV0
Say you want to Spawn SLV3. You could use SLV0 as the Clone source.

- STEP01) Install MySQL Binaries on SLV3 (same version as all other Slaves)
- STEP02) On SLV0, run STOP SLAVE;
- STEP03) On SLV0, run service mysql stop
- STEP04) scp -r SLV0:/etc/my.cnf SLV3:/etc/my.cnf
- STEP05) scp -r SLV0:/var/lib/mysql SLV3:/var/lib/mysql
- STEP06) On SLV0, run service mysql start
- STEP07) On SLV3, run chown -R mysql:mysql /var/lib/mysql
- STEP08) On SLV3, edit /etc/my.cnf so that server_id is unique among the Master and all Slaves
- STEP09) On SLV3, run service mysql start
That's it.
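The nine steps above can be sketched as a short shell script. This is a minimal sketch, not a hardened procedure: the hostnames SLV0/SLV3 and the use of ssh/scp from a third box are assumptions, and the script only prints each command (a dry run) so the sequence can be reviewed before anything is executed.

```shell
#!/bin/sh
# Dry-run sketch of cloning slave SLV0 onto the new slave SLV3.
# Hostnames, paths, and service commands are assumptions; adjust for your setup.
DONOR=SLV0
NEWSLV=SLV3

run() {
    # Print instead of execute, so the sequence can be reviewed first.
    echo "$@"
}

run ssh "$DONOR"  "mysql -e 'STOP SLAVE;'"                      # STEP02
run ssh "$DONOR"  "service mysql stop"                          # STEP03
run scp -r "$DONOR:/etc/my.cnf"    "$NEWSLV:/etc/my.cnf"        # STEP04
run scp -r "$DONOR:/var/lib/mysql" "$NEWSLV:/var/lib/mysql"     # STEP05
run ssh "$DONOR"  "service mysql start"                         # STEP06
run ssh "$NEWSLV" "chown -R mysql:mysql /var/lib/mysql"         # STEP07
# STEP08: edit server_id in /etc/my.cnf on SLV3 by hand before starting it
run ssh "$NEWSLV" "service mysql start"                         # STEP09
```

Once you are satisfied with the printed sequence, change run() to execute its arguments instead of echoing them.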
This is the same paradigm followed by Percona XtraDB Cluster (PXC). With PXC, introducing a new Slave is as simple as adding the cluster address to my.cnf and starting MySQL. All of the above steps are executed internally by PXC, using Quorum Selection to choose which Slave becomes the Donor (a.k.a. the Volunteer to be Cloned) as well as one of three methods for copying data (xtrabackup, rsync, mysqldump). This copying method is known as SST (State Snapshot Transfer).
If all of the Application-Level Data is stored in InnoDB only, you should look into using PXC. If you have a mix of InnoDB/MyISAM, or all MyISAM, you will have to script the above 9 steps yourself.
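For reference, a PXC node's my.cnf typically carries a handful of wsrep settings along the following lines. This is an illustrative fragment only: the provider path, cluster name, node name, and IP addresses are placeholder values, and you should verify the exact option set against the Percona documentation for your version.

```ini
# my.cnf fragment for a Percona XtraDB Cluster node (illustrative values)
[mysqld]
wsrep_provider           = /usr/lib64/libgalera_smm.so
wsrep_cluster_name       = my_pxc_cluster
wsrep_cluster_address    = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_name          = node3
wsrep_sst_method         = xtrabackup
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
```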
ASPECT #1 : Replication
I don't think that

    replicate-wild-do-table = db_backup.%
    replicate-rewrite-db = db->db_backup

belong together. Other people have wondered about this as well.
The problem stems from the order replication rules are processed. According to the MySQL Documentation on Replication Rules:
If any --replicate-rewrite-db options were specified, they are applied before the --replicate-* filtering rules are tested.
Even the MySQL Documentation on replicate-rewrite-db says:
The database name translation is done before the --replicate-* rules are tested.
The replicate-wild-do-table rule is therefore enforced after the rewrite. It would not be surprising if this ordering somehow caused an INSERT into a table that already has data.
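To make the ordering concrete, here is a hypothetical Slave my.cnf fragment annotated with the order of application described in the documentation quoted above; the database names match the example in question.

```ini
# Slave my.cnf fragment (hypothetical example)
[mysqld]
# Applied FIRST: statements whose default database is `db` are rewritten
# to use `db_backup` instead.
replicate-rewrite-db = db->db_backup
# Tested SECOND, against the ALREADY-REWRITTEN name: db_backup.% now
# matches, so statements originally issued against `db` on the Master
# replicate into `db_backup` on the Slave.
replicate-wild-do-table = db_backup.%
```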
You are probably asking: how did the data get there?
ASPECT #2 : mysqldump
Doing mysqldump --single-transaction would seem to be the best way to make point-in-time dumps of data. Unfortunately, mysqldump --single-transaction has an Achilles' Heel: DDL. If a table is subject to any DDL commands during the dump, such as an ALTER TABLE, DROP TABLE, or CREATE TABLE, that can break the integrity of the transaction mysqldump was trying to do the dump in. Truncating a table (which is DDL in the MySQL Universe) and dropping and adding indexes can be just as disruptive.

You can find more information on that in the MySQL Performance Blog's Best Kept MySQLDump Secret. I actually addressed this point in a past question describing 12 commands that can break the integrity of a mysqldump's transaction: MySQL backup InnoDB
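As a concrete illustration of the flags involved, a point-in-time dump might look like the following sketch. The host, user, and database names are placeholders; the command is built into a variable and echoed rather than executed, so it can be reviewed first.

```shell
#!/bin/sh
# Sketch of a point-in-time dump; host/user/db names are placeholders.
# --single-transaction gives a consistent InnoDB snapshot, but any DDL
# (ALTER/DROP/CREATE/TRUNCATE) against the dumped tables during the dump
# can silently break that consistency.
DUMP_CMD="mysqldump --single-transaction --routines --triggers \
  -h slave-host -u backup_user -p db_backup"
echo "$DUMP_CMD"   # review, then run it redirected to a dump file
```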
EPILOGUE
One or both of these aspects may have allowed a row that should not have existed to slip in during the mysqldump, due either to the rewrite rules or to the isolation of the mysqldump being broken.
SUGGESTIONS
I would do a mysqlbinlog dump of all the relay logs since the start of the mysqldump to see all the INSERTs the Slave will process, and check whether those rows already exist on the Slave. If they do, you could probably do one of two things:
1 : Skip all the Duplicate Key errors
Simply add this to my.cnf on the Slave
[mysqld]
slave-skip-errors=1062
skip-slave-start
and restart mysql. Then, run START SLAVE; and all the duplicate-key errors will be bypassed. When Seconds_Behind_Master gets to 0, remove those lines and restart mysql.
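If you want to watch for Seconds_Behind_Master reaching 0 before removing those lines, you can parse the output of SHOW SLAVE STATUS\G. A minimal sketch follows; the here-document stands in for the real output of mysql -e 'SHOW SLAVE STATUS\G', and the sample values are fabricated for illustration.

```shell
#!/bin/sh
# Extract Seconds_Behind_Master from SHOW SLAVE STATUS\G output.
# The here-doc below is a stand-in for: mysql -e 'SHOW SLAVE STATUS\G'
status="$(cat <<'EOF'
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
EOF
)"
# Split each line on "name: value" and print the value for the lag field.
lag=$(printf '%s\n' "$status" | awk -F': *' '/Seconds_Behind_Master/ {print $2}')
echo "lag=$lag"
```

In practice you would loop on the real mysql command and sleep between checks until the lag hits 0.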
2 : Download Percona Toolkit

The tools you need are pt-table-checksum and pt-table-sync.

Use these to find the differences on the Slave, and then correct them.
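For example, pt-table-checksum prints one summary line per table, and tables where the DIFFS column is non-zero are the ones pt-table-sync needs to fix. Here is a sketch of filtering that output; the sample lines below are fabricated, and the column layout reflects pt-table-checksum 2.x, so verify it against your version.

```shell
#!/bin/sh
# Filter pt-table-checksum-style output for tables with differences (DIFFS > 0).
# The here-doc is a fabricated sample; verify the column layout for your version.
checksum_out="$(cat <<'EOF'
            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
10-20T08:36:50      0      0   200000       1       0   0.005 db.t1
10-20T08:36:52      0      2   200000       1       0   0.010 db.t2
EOF
)"
# Skip the header, then print the TABLE (last field) wherever DIFFS (3rd field) > 0.
printf '%s\n' "$checksum_out" | awk 'NR > 1 && $3 > 0 {print $NF}'
```

Each table name this prints is a candidate for pt-table-sync.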
Best Answer
MySQL Replication would probably victimize you in terms of Replication Lag.
If you run SHOW SLAVE STATUS\G on a Slave whose Master is write-heavy, the data you need to read may not be ready on the Slave until the Replication Lag (Seconds_Behind_Master) is 0.

SUGGESTION
If you want reads available on a Slave as well as on a Master, consider using Percona XtraDB Cluster. It does synchronous replication, so that the same data exists at any point in time on both Master and Slave. It does not require binary logging or MySQL Replication. In addition, you can use a load-balanced DBVIP against Master and Slave to distribute reads.
The only drawback is that this might produce slower writes.
Provided all data is InnoDB, mysqldump backups can be done against the Slave using --single-transaction.
Here are my past posts on Percona XtraDB Cluster:

- Aug 20, 2012 : Best solution for cross-datacenter MySQL master-slave replication
- Jul 25, 2012 : Scaling Percona datacenters: setup and replication
- Aug 26, 2011 : MySQL Master/Slave Master/Master setup?