Here is the topology you just described:
+-------------+
|             ^
|             |
V             |
M1 --> M2 --> M3
|
|
+----> S1
You would like to slip S1 as a Master into the Replication Ring so that it looks like this:
+---------------------+
|                     ^
|                     |
V                     |
M1 --> S1 --> M2 --> M3
Essentially, you only have to prep S1 to be a Master and point M2 to receive binary log entries from S1.
OK, here we go:
STEP 01) Point your application at M3
STEP 02) Prep S1 to be a Master
- set the server_id to a different number from M1, M2, M3
- set log-slave-updates in my.cnf like you did for M1, M2, M3
- enable binary logging on S1 the same way you enabled it on M1, M2, M3
- restart mysql on S1
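The prep above amounts to a my.cnf fragment on S1 roughly like the following sketch (the server-id value of 4 and the log-bin basename are illustrative; match whatever convention you used on M1, M2, M3):

```ini
[mysqld]
# must be unique across M1, M2, M3, and S1
server-id          = 4
# enable binary logging, as on the other masters
log-bin            = mysql-bin
# write replicated events into S1's own binary log
# so downstream servers (M2) can pick them up
log-slave-updates
```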
STEP 03) run STOP SLAVE; on M1
STEP 04) run SHOW SLAVE STATUS\G on S1 and M2
Make sure Seconds_Behind_Master is 0 on both S1 and M2
STEP 05) run SHOW MASTER STATUS; on S1 (record the binary log file and position)
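For reference, the output of SHOW MASTER STATUS looks roughly like this (the file name and position shown are illustrative only; yours will differ):

```
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 |      107 |              |                  |
+------------------+----------+--------------+------------------+
```

The File value is what STEP 06 calls XXXX and the Position value is YYYY.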
STEP 06) run this on M2
STOP SLAVE;
CHANGE MASTER TO master_host='IP Address of S1',
master_port=3306,
master_user='repluser',
master_password='replpass',
master_log_file='XXXX',
master_log_pos=YYYY;
START SLAVE;
SHOW SLAVE STATUS\G
where XXXX is the binary log file from STEP 05 and YYYY is the position from STEP 05
If SHOW SLAVE STATUS\G says Yes for both Slave_IO_Running and Slave_SQL_Running, then you have achieved this:
+---------------------+
                      ^
                      |
                      |
M1 --> S1 --> M2 --> M3
STEP 07) run START SLAVE; on M1
STEP 08) run SHOW SLAVE STATUS\G on M1, S1, M2, M3
Make sure Seconds_Behind_Master is 0 on all the servers
STEP 09) Point your application to other servers as desired
Any questions? If none, give it a try!
Your mission, should you decide to accept it, is to practice this in a Dev/Staging environment and make sure you trust this algorithm before doing it in Production.
In the event your data is caught or killed, the DBA StackExchange and I will disavow any knowledge of your actions.
I wrote up an interesting layout last year which features DRBD pairs in two data centers (DC1, DC2), set up as follows:
- DRBD Pair in DC1 (db1 and db2)
- DBVIP for Primary of DRBD Pair1 is 10.1.2.30
- DRBD Pair in DC2 (db3 and db4)
- DBVIP for Primary of DRBD Pair2 is 10.1.2.40
- Have MySQL Circular Replication Between DRBD Primaries
- Have 10.1.2.40 as the Master_Host for DRBD Pair 1
- Have 10.1.2.30 as the Master_Host for DRBD Pair 2
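The cross-data-center link between the two DRBD Primaries could be wired up roughly like this sketch (repluser/replpass are placeholder credentials, and XXXX/YYYY stand for the binary log file and position read from the other Primary's SHOW MASTER STATUS):

```sql
-- On the DRBD Pair 1 Primary (behind DBVIP 10.1.2.30),
-- replicate from Pair 2's DBVIP:
CHANGE MASTER TO master_host='10.1.2.40', master_port=3306,
    master_user='repluser', master_password='replpass',
    master_log_file='XXXX', master_log_pos=YYYY;
START SLAVE;

-- On the DRBD Pair 2 Primary (behind DBVIP 10.1.2.40),
-- replicate from Pair 1's DBVIP:
CHANGE MASTER TO master_host='10.1.2.30', master_port=3306,
    master_user='repluser', master_password='replpass',
    master_log_file='XXXX', master_log_pos=YYYY;
START SLAVE;
```

Pointing each side at the DBVIP rather than a physical host means replication follows whichever node is the current DRBD Primary after a local failover.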
MySQL high availability, failover and replication with Latency
MySQL Replication : 1 Slave / Multiple Masters
Here is why I suggested this: using two data centers, you set up automatic failover for the DRBD Pair in one data center. Let the other DRBD Pair in the second data center serve as the DB disaster-recovery site with its own local redundancy and failover. Should you ever lose one data center, the other data center is fully ready with its own local failover setup. Your app would just have to use the DBVIP of the other data center in such a catastrophic case.
Please keep in mind that using DRBD in conjunction with MySQL is only beneficial if all of your data uses the InnoDB Storage Engine. Hard failovers in DRBD could easily result in crashed MyISAM tables.
Here is another setup to consider:
As with all DRBD setups, a DRBD Secondary provides just a disk-level copy of your MySQL folder; it is available as a warm standby, and MySQL is not running on the DRBD Secondary. If you want the third DB server to be a hot standby, ditch DRBD altogether and use pure MySQL Replication. With three DB servers, using db3 at a remote site, simply set up the following:
+--> db1 --> db2 --> db3 -->+
^                           |
|                           V
+<--------------------------+
Using your rudimentary failover, you now have two hot standby servers. You just have to make sure each DB server has a unique server_id value. I also recommend MySQL 5.5 because it offers Semisynchronous Replication, which is more sensitive to communication failures and stops replication more gracefully. You will have to set up the appropriate heartbeats and timeouts.
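The three-server ring above can be sketched with one CHANGE MASTER per server. Everything here is a placeholder sketch: hostnames, credentials, and the log file/position values must come from each upstream server's SHOW MASTER STATUS, and every server needs its own unique server_id plus log-bin and log-slave-updates:

```sql
-- On db1 (closes the ring by reading from db3):
CHANGE MASTER TO master_host='db3', master_port=3306,
    master_user='repluser', master_password='replpass',
    master_log_file='XXXX', master_log_pos=YYYY;
START SLAVE;

-- On db2 (reads from db1):
CHANGE MASTER TO master_host='db1', master_port=3306,
    master_user='repluser', master_password='replpass',
    master_log_file='XXXX', master_log_pos=YYYY;
START SLAVE;

-- On db3, the remote site (reads from db2):
CHANGE MASTER TO master_host='db2', master_port=3306,
    master_user='repluser', master_password='replpass',
    master_log_file='XXXX', master_log_pos=YYYY;
START SLAVE;
```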
Best Answer
There are four (4) options you need to combine:
Option 1: MySQL in Star Topology
So as not to reinvent the wheel, please read my past posts about this subject
Option 2: Use DRBD
Option 3: Use Semisynchronous Replication
Network latency can affect the collection of binary logs, and shipping entries over a network can become a bottleneck. MySQL 5.5's Semisynchronous Replication lets you tune the MySQL heartbeat characteristics so as to minimize replication losing its place due to significant network latency.
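Enabling Semisynchronous Replication on MySQL 5.5 looks roughly like the sketch below (plugin and variable names per the MySQL documentation; the 1000 ms timeout is an illustrative value you should tune to your actual inter-site latency):

```sql
-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
-- milliseconds to wait for a slave ACK before falling back to async
SET GLOBAL rpl_semi_sync_master_timeout = 1000;

-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
-- restart the IO thread so it reconnects semisynchronously
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;
```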
Option 4: Circular Replication
If you have multiple masters, circular replication is a must. There are many products out there for providing circular replication, such as mysql-mmm (mentioned in the comments), which is great within the confines of a single data center. This is only one piece of your puzzle, in that four (4) things must be accommodated beyond circular replication alone:
CAVEAT
This may not be your full answer but these options can provide ideas you can try out in different scenarios. For example, when in the star topology, you could do the following:
SCENARIO #1
SCENARIO #2
SCENARIO #3
As far as the possibilities go, it's up to your imagination and department budget.
Links for MySQL and DRBD
DRBD for Disk Level Redundancy and ucarp for DBVIP Automatic Failover