Something is not right about your process.
Usually, when I see error 1236, it looks like the following (which I used in my old post How can you monitor if MySQL binlog files get corrupted?):
[ERROR] Error reading packet from server: Client requested master to start replication from impossible position ( server_errno=1236).
[ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log: 'Client requested master to start replication from impossible position', Error_code: 1236
111014 20:25:48 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.001067', position 183468345.
Here was the situation: when doing MySQL Replication without GTID, the IO Thread tracks its position in the latest Master binlog. If Read_Master_Log_Pos is bigger than the actual file size of that binlog, you get error 1236.
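You can check those two numbers yourself. This is just a diagnostic sketch; the commands are standard MySQL, and nothing in it is specific to your setup:

-- On the Slave: which binlog and position has the IO Thread read up to?
SHOW SLAVE STATUS\G
-- note Master_Log_File and Read_Master_Log_Pos

-- On the Master: how big is that binlog really?
SHOW BINARY LOGS;
-- if Read_Master_Log_Pos exceeds the File_size listed for that file, error 1236 follows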
When doing MySQL Replication with GTID, the situation is somewhat similar. The IO Thread expects to pick up cleanly from the last GTID it retrieved. When you restarted MySQL on the Master, you closed the last binlog on the Master and opened a new one upon startup, while the IO Thread on the Slave was still active. Thus, the same error number comes up.
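You can see where each side stands in the GTID case with a couple of queries (again just a diagnostic sketch, not specific to your servers):

-- On the Master
SHOW MASTER STATUS;           -- Executed_Gtid_Set
SELECT @@GLOBAL.gtid_purged;  -- GTIDs no longer available in the Master's binlogs

-- On the Slave
SHOW SLAVE STATUS\G           -- compare Retrieved_Gtid_Set and Executed_Gtid_Set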
The next time you restart a Master, remember the Slaves are active.
The Slave should have reconnected within a minute (MASTER_CONNECT_RETRY defaults to 60 seconds), but that is not happening for you.
To play it safe, you should do the following (with a quick check afterwards):
- On the Slave,
STOP SLAVE;
- On the Master,
service mysql restart
- On the Slave,
START SLAVE;
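Afterwards, confirm that both replication threads came back up:

-- On the Slave
SHOW SLAVE STATUS\G
-- Slave_IO_Running: Yes and Slave_SQL_Running: Yes mean the error is cleared;
-- Last_IO_Errno should be back to 0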
You should not have to do this. As an alternative, try setting up replication with heartbeat set at one tenth of a second:
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 0.1;
This should make the IO Thread on the Slave a little more sensitive to losing its connection to the Master.
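Since CHANGE MASTER TO needs the slave threads stopped, the full sequence on the Slave would look like this (a sketch; the value is in seconds, and 0.1 is simply the one tenth of a second suggested above):

STOP SLAVE;
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 0.1;
START SLAVE;
-- optional: confirm the setting and watch heartbeats arrive
SHOW GLOBAL STATUS LIKE 'Slave_heartbeat_period';
SHOW GLOBAL STATUS LIKE 'Slave_received_heartbeats';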
Best Answer
Found it:
If I restart using
instead of
It restarts fine, and I can then log in to MySQL.