To answer your question about how MySQL replicates all databases, here is how it works...
- Master has Binary Logging enabled
- Master records completed SQL Statements
- Master will inject 'USE dbname;' among the SQL Statements it records
- Slave connects to Master (IO Thread)
- Slave IO Thread requests next SQL statement from the Binary Logs
- Slave IO Thread copies it to its Relay Logs
- Slave SQL Thread Processes its Relay Log Entries FIFO (Queue)
Once `USE dbname;` is read out of the relay logs, the SQL statements that follow are processed chronologically until another `USE dbname;` (for a different database) comes out of the relay logs.
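Decoded with mysqlbinlog, that event stream looks something like this (database, table, and position numbers are purely illustrative):

```
# at 4
USE mydb1;
INSERT INTO customers VALUES (101, 'alice');
UPDATE customers SET name = 'bob' WHERE id = 101;
# at 386
USE mydb2;
DELETE FROM sessions WHERE expired = 1;
```

Every statement executes against whatever database the most recent `USE dbname;` selected.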
To see this, pick any Binary Log (except the Current One) on the Master, and run this:
```
mysqlbinlog binarylogname > SQLStatements.sql
```
To see all the binary logs and the current binary log on the Master, run these:
```
SHOW BINARY LOGS;
SHOW MASTER STATUS;
```
Long story short: by default, all DBs are replicated, thanks to the `USE dbname;` commands within the binary logs on the master and the relay logs on the slave.
Abdul's answer shows how to filter in or filter out databases from being processed by the slave.
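For reference, that filtering is done with options like these in the slave's my.cnf (database names here are illustrative):

```
[mysqld]
replicate-do-db=app_db        # replicate only this database
replicate-ignore-db=scratch   # or: skip this one
```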
OBSERVATION #1
You mentioned:

> Ask developers put all the temporary tables into a separated database
If your developers are using `CREATE TEMPORARY TABLE` commands to create temporary tables, they need to use `CREATE TABLE` instead. Here is why:
With MySQL Replication processing a temporary table, this is what occurs:

- 1) Master runs `CREATE TEMPORARY TABLE`
- 2) The command is written into the binary log
- 3) Replication copies it over to the Slave's relay logs via the I/O thread
- 4) Slave SQL thread runs `CREATE TEMPORARY TABLE`
- 5) Slave processes data with that temp table
Once in a while, someone may run `STOP SLAVE;` to run a backup. If `STOP SLAVE;` is issued just after step 4, the temp table just created disappears, and so does its data. When you run `START SLAVE;`, replication breaks instantly, complaining that the table does not exist. This is normal: when a DB connection terminates, deliberately or accidentally, all temp tables opened with `CREATE TEMPORARY TABLE` in that session are dropped. Running `STOP SLAVE;` kills the SQL thread that was holding the temp table open.
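From the Slave's side, the failure sequence sketches out like this (the error text is paraphrased, not verbatim):

```
STOP SLAVE;          -- issued between steps 4 and 5; SQL thread dies, temp table is dropped
START SLAVE;         -- replication resumes mid-stream
SHOW SLAVE STATUS\G  -- Last_SQL_Error complains the temp table does not exist
```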
The only workaround for this is to create the table using `CREATE TABLE` instead of `CREATE TEMPORARY TABLE`. When you run `STOP SLAVE;`, a table created normally does not disappear.
I have seen this happen maybe 10 times in my DBA career. Using the binary logs to find the names of the temp tables, creating those tables with `CREATE TABLE`, and then starting replication back up was the only fix possible short of brute-force copying the master.
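A sketch of that fix, with illustrative relay log, database, table, and column names:

```
# Find the CREATE TEMPORARY TABLE statements replication lost
mysqlbinlog relay-bin.000007 | grep -i -A 5 'CREATE TEMPORARY TABLE'

# Recreate each one as a permanent table with matching columns
mysql -e "CREATE TABLE mydb.tmp_report (id INT, total DECIMAL(10,2))"

mysql -e "START SLAVE;"
```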
OBSERVATION #2
`mk-table-sync` only works on tables with primary keys and/or unique keys. It works maybe 99% of the time. I have seen instances where the checksums of a table on the master and slave were different, I would run `mk-table-sync`, and there were still differences. (Of course, I was running `mk-table-sync` in circular replication with 3 masters, which can be a little dangerous. Using it in Master/Slave is far more stable.)
OBSERVATION #3
You mentioned:

> There is some unsafe queries. Does it get some problems with MIXED based replication?
It depends. The most popular unsafe query is any UPDATE or DELETE that uses `ORDER BY ... LIMIT`. With SBR, this could possibly cause MySQL to UPDATE or DELETE rows from a table on the Slave in a different order than that of the Master. With RBR, I believe the exact changes in a row are more identifiable for the UPDATE or DELETE on the Slave.
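As an illustration (table and column names made up): if the `ORDER BY` column has ties, the Master and Slave may pick different rows. Breaking ties with the primary key makes the row choice deterministic:

```
-- Unsafe under SBR: which 10 rows get deleted depends on row order
DELETE FROM jobs ORDER BY created_at LIMIT 10;

-- Deterministic: break ties with the primary key
DELETE FROM jobs
WHERE id IN (SELECT id FROM (SELECT id FROM jobs
                             ORDER BY created_at, id LIMIT 10) AS picked);
```

(The extra derived-table wrapper is needed because MySQL does not let a DELETE select from its own target table directly.)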
SOLUTION : Avoid using unsafe queries. Then, you will not worry !!!
OBSERVATION #4
I just read your second link. ROFL !!! I am familiar with the poster of the answer.
ASPECT #1 : Replication
I don't think that `replicate-rewrite-db` and `replicate-wild-do-table` belong together.
Other people have wondered about this as well:

- Jun 14, 2012 : Configured MySQL replication but its not working
- Jun 13, 2012 : What is the difference between 'replicate-rewrite-db' and 'replicate-do-db ' during mysql replication ?
- May 31, 2012 : Mysql replicate-rewrite-db not working

The problem stems from the order in which replication rules are processed. According to the MySQL Documentation on Replication Rules, and even the MySQL Documentation on replicate-rewrite-db itself, `replicate-wild-do-table` is enforced after the rewrite. It would not be surprising if this ordering somehow allowed an INSERT into a table that already has data. You are probably asking: how did the data get there?
ASPECT #2 : mysqldump
Doing `mysqldump --single-transaction` would seem to be the greatest way to get point-in-time dumps of data. Unfortunately, `mysqldump --single-transaction` has an Achilles' heel: `ALTER TABLE`. If a table is subject to any `ALTER TABLE` commands, such as a `DROP TABLE` followed by a `CREATE TABLE`, that can break the integrity of the transaction the mysqldump was trying to do the dump in. Truncating a table (which is DDL in the MySQL universe) and dropping and adding indexes can be just as disruptive. You can find more information on that in the MySQL Performance Blog's "Best kept MySQLDump Secret". I actually addressed this point in a past question describing 12 commands that can break the integrity of a mysqldump's transaction : MySQL backup InnoDB
EPILOGUE
One or both of these aspects may have contributed to letting a row slip in during the mysqldump that should not have existed, due either to the rewrite rules or to the isolation of the mysqldump being overridden.
SUGGESTIONS
I would do a mysqlbinlog dump of all the relay logs since the start of the mysqldump to see all INSERTs that the Slave will process and see if those rows already exist on the Slave. If they do, you could probably do two things:
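A sketch of that relay-log inspection (relay log names are illustrative):

```
# Decode every relay log since the dump started
mysqlbinlog relay-bin.0000* > relay.sql

# List the INSERTs the Slave will try to apply
grep -i 'INSERT INTO' relay.sql | less
```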
1 : Skip all the Duplicate Key errors

Simply add `slave-skip-errors=1062` (1062 is MySQL's duplicate-key error code) to my.cnf on the Slave and restart mysql. Then, run `START SLAVE;` and all the duplicate-key errors will get bypassed. When `Seconds_Behind_Master` gets to 0, remove that line and restart mysql.

2 : Download percona tools
The tools you need are `pt-table-checksum` and `pt-table-sync`. Use these to find the differences on the Slave, and then correct them.