Surprisingly, that's not gibberish.
That indeed appears at the top of the output whenever you run mysqlbinlog against a binary log generated by MySQL 5.1 or MySQL 5.5. You will not see that gibberish in binary logs from MySQL 5.0 and back.
This is why the start point for replication from an empty binary log is
- 107 for MySQL 5.5
- 106 for MySQL 5.1
- 98 for MySQL 5.0 and back
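As a rough sketch of where those numbers come from: a binary log opens with a fixed 4-byte magic number, followed by a version-specific description event, so the first real event lands at a different offset per version. The per-version header sizes below are simply derived from the offsets quoted above; the function name is mine, not anything from MySQL itself.

```python
# A binary log starts with a 4-byte magic number (0xFE 'b' 'i' 'n'),
# followed by a description event whose size grew between versions.
BINLOG_MAGIC_LEN = 4

# Description-event sizes per major version, derived from the start
# positions quoted above (98, 106, 107) minus the 4-byte magic number.
DESC_EVENT_LEN = {
    "5.0": 94,
    "5.1": 102,
    "5.5": 103,
}

def first_event_position(version: str) -> int:
    """Return where real events begin in an empty binary log."""
    return BINLOG_MAGIC_LEN + DESC_EVENT_LEN[version]
```

So `first_event_position("5.5")` gives 107, matching the start point listed above.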
This is good to remember if you run MySQL Replication where the Master is MySQL 5.1 and the Slave is MySQL 5.0. This could present a really big headache.
Replication from a Master using 5.0 to a Slave using 5.1 works fine, but not the other way around. (According to the MySQL Documentation, replicating from a newer Master to an older Slave is generally not supported for three reasons: 1) binary log format, 2) row-based replication, 3) SQL incompatibility.)
Anyway, run mysqlbinlog on the offending binary log on the Master. If the resulting dump produces gibberish in the middle (which I have seen a couple of times in my DBA career), you may have to skip to position 98 (MySQL 5.0), 106 (MySQL 5.1), or 107 (MySQL 5.5) of the Master's next binary log and start replicating from there (SOB :( ). You may need to use the MAATKIT tools mk-table-checksum and mk-table-sync to reload Master changes missing on the Slave (if you want to be a hero); even worse, mysqldump the Master, reload the Slave, and start replication totally over (if you don't want to be a hero).
If the mysqlbinlog of the master is completely readable after the top gibberish you saw, it is possible the master's binary log is fine but the relay log on the slave is corrupt (due to transmission/CRC errors). If that's the case, just reload the relay logs by issuing the CHANGE MASTER TO command as follows:
STOP SLAVE;
CHANGE MASTER TO
MASTER_HOST='< master-host ip or DNS >',
MASTER_PORT=3306,
MASTER_USER='< username >',
MASTER_PASSWORD='< password >',
MASTER_LOG_FILE='< MMMM >',
MASTER_LOG_POS=< PPPP >;
START SLAVE;
Where
- MMMM is the Master binary log file that was last processed on the Slave
- PPPP is the position in that file that was last executed on the Slave
You can get MMMM and PPPP by doing SHOW SLAVE STATUS\G
and using
- Relay_Master_Log_File for MMMM
- Exec_Master_Log_Pos for PPPP
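If you script this, pulling MMMM and PPPP out of the raw SHOW SLAVE STATUS\G output is just a matter of splitting each `Field: value` line. The sample text and values below are made up for illustration; the field names are the real ones.

```python
def parse_slave_status(text: str) -> dict:
    """Parse the `Field: value` lines of SHOW SLAVE STATUS output."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

# Illustrative output fragment; real output has many more fields.
sample = """\
*************************** 1. row ***************************
         Relay_Master_Log_File: mysql-bin.000042
           Exec_Master_Log_Pos: 107
"""
status = parse_slave_status(sample)
mmmm = status["Relay_Master_Log_File"]    # goes into MASTER_LOG_FILE
pppp = int(status["Exec_Master_Log_Pos"]) # goes into MASTER_LOG_POS
```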
Try it out and let me know !!!
BTW, running the CHANGE MASTER TO command erases the slave's current relay logs and starts fresh.
Question 1
Do the DML operations committed by db2 during the replication process get included in its own binlog?
Answer to Question 1
Yes, they will, provided you have this in /etc/my.cnf on both db1 and db2:
[mysqld]
log-slave-updates
If you do not have this, add it and restart mysql
Question 2
Would the resulting binlog in db2 be exactly the same as the binlog of db1, to the letter?
Answer to Question 2
Yes. Make sure the clocks on both DB servers are synchronized
Question 3
What happens to the entries in db2's relay log once they are committed to the database during the replication process; are they discarded? What role does the relay-log info log have in this?
Answer to Question 3
In MySQL Replication, the IO thread of a Slave reads its Master's binlog entries and stores them in a FIFO queue of relay logs. Once every entry in the relay log currently being processed has been executed by the SQL thread, that relay log is rotated out and discarded. If relay logs are piling up, this usually indicates that the SQL thread died, typically because of an SQL error. Just do SHOW SLAVE STATUS\G
to find out what stopped the SQL thread. The IO thread would continue collecting completed SQL statements from its Master.
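As a toy model of that FIFO behavior (class and method names are mine, purely illustrative): the IO thread appends whole relay logs at one end, the SQL thread executes entries from the other, and a fully executed relay log is rotated out and discarded.

```python
from collections import deque

class RelayLogQueue:
    """Toy model of a slave's relay-log queue, oldest log first."""

    def __init__(self):
        self.logs = deque()

    def io_thread_append(self, relay_log_entries):
        """IO thread: store a new relay log fetched from the master."""
        self.logs.append(list(relay_log_entries))

    def sql_thread_execute_one(self):
        """SQL thread: execute the next entry; discard a finished log."""
        if not self.logs:
            return None
        entry = self.logs[0].pop(0)
        if not self.logs[0]:   # every entry executed -> rotate out
            self.logs.popleft()
        return entry

q = RelayLogQueue()
q.io_thread_append(["stmt-1", "stmt-2"])
first = q.sql_thread_execute_one()   # executes "stmt-1"
second = q.sql_thread_execute_one()  # executes "stmt-2", log discarded
```

If the SQL thread stops (dies on an error), nothing is ever popped, and the queue of relay logs just keeps growing: exactly the pile-up symptom described above.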
Question 4
How does db1 know where in the bin-log of db2 (somehow dependent on the answer of Question 2), it will start the replication process?
Answer to Question 4
When you do SHOW SLAVE STATUS\G
, look for the following lines:
- Master_Log_File : The latest Master binary log from which the most recent statement was copied to the Slave
- Read_Master_Log_Pos : The position in that binary log up to which statements have been copied to the Slave
- Relay_Master_Log_File : The latest Master binary log whose most recent statement was executed on the Slave
- Exec_Master_Log_Pos : The position in that binary log up to which statements have been executed on the Slave
- Relay_Log_Space : The sum total (in bytes) of all relay logs. By default, each relay log is the default size of a binary log (1G). If Relay_Log_Space starts to significantly exceed 1G, this indicates one of two things:
- SQL thread died due to SQL Error
- SQL thread is busy with a long-running query
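A monitoring check for that symptom can be a one-liner; the 1G threshold below mirrors the default binary log size mentioned above, and the function name is my own.

```python
# Default binary log (and thus relay log) size mentioned above: 1G.
ONE_GIB = 1 << 30

def relay_logs_piling_up(relay_log_space_bytes: int,
                         threshold: int = ONE_GIB) -> bool:
    """True when Relay_Log_Space significantly exceeds one relay log."""
    return relay_log_space_bytes > threshold
```

When this fires, check Slave_SQL_Running and Last_SQL_Error in SHOW SLAVE STATUS\G to tell a dead SQL thread apart from a long-running query.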
Question 4.1
If you enable log-slave-updates on both databases, i.e. dB1 & dB2, then that would mean all items from the binary log of dB1 which were successfully replicated by dB2 will be written into dB2's binary log, and vice versa. Would this not result in some sort of infinite circular replication or duplication of entries on both databases, if it's possible at all, considering the possible key-collision issues that would arise? What I'm trying to say is: how would dB1 know, once it checks the binary log of dB2, that "I should not replicate those entries in there because they all just came from me"?
Answer to Question 4.1
You must have log-slave-updates enabled on both DB servers in order to have an audit trail showing that the SQL executed on one DB server made it to the other. If you don't, you would have to do your due diligence and compare the data explicitly. Such checks would include:
- Running CHECKSUM TABLE on every table you have in both DB servers to compare their contents.
- Using pt-table-checksum, which is an automated version of running CHECKSUM TABLE between Master and one or more Slaves
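Conceptually, both of those checks reduce each table's rows to a single digest and compare the digests between the two servers. Here is a hedged sketch of that idea (the rows and hashing scheme are illustrative; CHECKSUM TABLE and pt-table-checksum use their own algorithms):

```python
import hashlib

def table_checksum(rows):
    """Hash every row, in a stable order, into one digest."""
    digest = hashlib.sha256()
    for row in sorted(rows):
        digest.update(repr(row).encode("utf-8"))
    return digest.hexdigest()

# Pretend these came from SELECTs on the two servers.
master_rows = [(1, "alice"), (2, "bob")]
slave_rows = [(1, "alice"), (2, "bob")]
in_sync = table_checksum(master_rows) == table_checksum(slave_rows)
```

If the digests differ, you then drill down (as pt-table-sync / mk-table-sync do) to find which rows diverged.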
You need not worry about infinite circular replication unless you are dealing with more than two masters. There have been rare times when someone with, let's say, four Masters removes one of the four servers from the circular replication cluster. Suppose that server's server_id is 13. It is a remote possibility, but a real one, that binary log entries whose server_id belongs to the removed server are still inside the relay logs of the other servers. Only in such a scenario would you worry about infinite circular replication.
To circumvent such situations, MySQL 5.5 introduced a new option for the CHANGE MASTER TO command called IGNORE_SERVER_IDS. You would do the following to repair things on all the remaining servers:
STOP SLAVE;
CHANGE MASTER TO IGNORE_SERVER_IDS = (13);
START SLAVE;
In fact, here is what the MySQL Documentation says on this:
IGNORE_SERVER_IDS was added in MySQL 5.5. This option takes a comma-separated list of 0 or more server IDs. Events originating from
the corresponding servers are ignored, with the exception of log
rotation and deletion events, which are still recorded in the relay
log.
In circular replication, the originating server normally acts as the terminator of its own events, so that they are not applied more
than once. Thus, this option is useful in circular replication when
one of the servers in the circle is removed. Suppose that you have a
circular replication setup with 4 servers, having server IDs 1, 2, 3,
and 4, and server 3 fails. When bridging the gap by starting
replication from server 2 to server 4, you can include
IGNORE_SERVER_IDS = (3) in the CHANGE MASTER TO statement that you
issue on server 4 to tell it to use server 2 as its master instead of
server 3. Doing so causes it to ignore and not to propagate any
statements that originated with the server that is no longer in use.
Question 5
On INSERT queries on the master, what form of the query is written into the binary log? Is it the 'raw' form of the query, or the one which already has the auto-generated value of the auto-increment key?
Answer to Question 5
Whichever form is presented. Here is what I mean: the raw form would usually not include the auto_increment column expressed explicitly. On the other hand, if you import a mysqldump into a DB server with binary logging, the rows being inserted would have their values given explicitly. Either version of INSERT is allowed to execute in mysqld, and in like fashion, either version of INSERT is recorded AS IS...
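Both forms are easy to demonstrate. The sketch below uses SQLite (standing in for MySQL; the auto-increment semantics are analogous, not identical) to show the raw form omitting the auto-increment column alongside the mysqldump-style form supplying it explicitly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
)

# "Raw" form: the auto-increment column is not mentioned;
# the server generates the id.
conn.execute("INSERT INTO t (name) VALUES ('alice')")

# mysqldump-style form: the id value is supplied explicitly.
conn.execute("INSERT INTO t (id, name) VALUES (2, 'bob')")

rows = conn.execute("SELECT id, name FROM t ORDER BY id").fetchall()
```

On a real MySQL master with statement-based logging, each statement would be written to the binary log exactly as it was issued.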
Best Answer
What's the exact version of 5.5 you're running? If you're running 5.5.24, you might be running into this bug: https://bugs.launchpad.net/percona-server/+bug/1008278.
If you are running 5.5.24, ensure userstat=OFF. Really, though, this is annoying, so you might look at just upgrading beyond that version.