For the sake of simplicity, I recommend MySQL Circular Replication only. Here is why:
There are many technologies and topologies that are far superior to MySQL Circular Replication. My favorite, hands down, is DRBD (Distributed Replicated Block Device). However, DRBD works best when the Server Pair is in the same building, data center, and rack. It's even better when using a crossover cable on a private 192.168.x.x subnet between the DRBD Primary and DRBD Secondary. Unfortunately, DRBD performs poorly over a long distance between two locations, although it can still work. No network topology I know of will give you satisfactory DRBD performance between two data centers.
Once you set up MySQL Circular Replication between the two DB servers in the two data centers, the only tuning needed is for the network. In essence, replication performance is a function of network settings (speed/latency of binary log transmission in a MySQL Replication setup) and disk I/O (DRBD).
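For example, the network side of replication can be tuned with a couple of my.cnf settings on each slave. This is only a sketch; the values are illustrative and should be adjusted for your WAN:

```ini
[mysqld]
# Reconnect quickly if the link to the master goes quiet
# (the default of 3600 seconds is far too long for a WAN link)
slave_net_timeout=60
# Compress the binlog stream between master and slave; helps on slow links
slave_compressed_protocol=1
```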
For better redundancy, you may want the following alternative (for the sake of example):
Setup a DRBD Pair in both locations
DRBD Pair in Site #1 with VIP 111.111.111.111
DRBD Pair in Site #2 with VIP 222.222.222.222
Setup MySQL Circular Replication between the DRBD Primary Servers under these conditions:
For site #1, use 222.222.222.222 as the Master_Host in MySQL
For site #2, use 111.111.111.111 as the Master_Host in MySQL
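For illustration, the CHANGE MASTER TO statements on each DRBD Primary might look like the following. The replication account, log file, and position are placeholders; only the VIPs come from the example above:

```sql
-- On the DRBD Primary at Site #1: replicate from Site #2's VIP
CHANGE MASTER TO
    MASTER_HOST='222.222.222.222',
    MASTER_USER='repluser',          -- placeholder replication account
    MASTER_PASSWORD='replpass',      -- placeholder password
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;

-- On the DRBD Primary at Site #2: replicate from Site #1's VIP
CHANGE MASTER TO
    MASTER_HOST='111.111.111.111',
    MASTER_USER='repluser',
    MASTER_PASSWORD='replpass',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;
```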
Although this introduces a level of complexity, you now have two levels of redundancy: DRBD within each site and MySQL Circular Replication between sites. You also get the additional benefit of running backups via mysqldump against the DRBD Primary at the hot standby site.
As for failover, DRBD provides automatic failover at any one site.
Only in the event that a data center is totally unavailable would you use the DB VIP at the hot standby site.
UPDATE
I just did a double take and noticed that you are using Drupal 6. I am glad you will be converting all the Drupal tables to InnoDB. This removes any chance of MyISAM table updates causing table locks that freeze DB connections that are simply reading MyISAM tables. Any DML statement (INSERT, UPDATE, DELETE) against a MyISAM table WILL ALWAYS TAKE A FULL TABLE LOCK!!! InnoDB uses row-level locking, which eliminates full table locks.
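One way to do the conversion is to have MySQL generate the ALTER statements for you. This is a sketch; `drupal` is a placeholder for your actual Drupal schema name:

```sql
-- Generate an ALTER TABLE statement for every MyISAM table in the schema.
-- Run the SELECT, then execute the statements it emits.
SELECT CONCAT('ALTER TABLE ', table_schema, '.', table_name, ' ENGINE=InnoDB;')
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema = 'drupal';
```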
In addition, DRBD becomes your friend when everything is InnoDB because crash recovery will be consistent between the DRBD Pair. Contrariwise, DRBD with MyISAM buys you nothing because a crashed MyISAM table on the DRBD Primary is simply duplicated to the DRBD Secondary as, you guessed it, a crashed MyISAM table.
UPDATE #2
You should use two levels of redundancy:
Level 1 : At each database center, use DRBD.
http://dev.mysql.com/doc/refman/5.1/en/ha-drbd.html
Set up a pair of DB Servers
Startup DRBD
Startup MySQL on the DRBD Primary
This creates redundant data at the disk level.
Level 2 : Set up MySQL Circular Replication between the DRBD Primary of DataCenter#1 and the DRBD Primary of DataCenter#2. Each DRBD Primary runs MySQL and acts as both Master and Slave to the other.
I have set up topologies like this for clients, and I consider them quite stable.
The direct answer to your question is Yes, but it depends on the version of MySQL you are running. Before MySQL 5.5, replication would operate as follows:
- Master Executes SQL
- Master Records SQL Event in its Binary Logs
- Slave Reads SQL Event from Master Binary Logs
- Slave Stores SQL Event in its Relay Logs via I/O Thread
- Slave Reads Next SQL Event From Relay Log via SQL Thread
- Slave Executes SQL
- Slave Acknowledges Master of the Complete Execution of the SQL Event
As of MySQL 5.5, using Semisynchronous Replication, now replication would operate as follows:
- Master Executes SQL
- Master Records SQL Event in its Binary Logs
- Slave Reads SQL Event from Master Binary Logs
- Slave Acknowledges Master of the Receipt of the SQL Event
- Slave Stores SQL Event in its Relay Logs via I/O Thread
- Slave Reads Next SQL Event From Relay Log via SQL Thread
- Slave Executes SQL
- Slave Acknowledges Master of the Complete Execution of the SQL Event
This new paradigm permits a Slave to stay more closely in sync with its Master.
Notwithstanding, network latency could hamper MySQL Semisync Replication to the point where it reverts to old-style asynchronous replication. Why? If a timeout occurs without any slave having acknowledged the transaction, the master reverts to asynchronous replication. When at least one semisynchronous slave catches up, the master returns to semisynchronous replication.
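You can watch for this fallback from the MySQL client on the master; these status variables are provided by the semisync master plugin:

```sql
-- ON means semisync is active; OFF means the master has fallen
-- back to asynchronous replication
SHOW STATUS LIKE 'Rpl_semi_sync_master_status';

-- How many times the master has timed out waiting for a slave ACK
-- and switched to asynchronous replication
SHOW STATUS LIKE 'Rpl_semi_sync_master_no_times';
```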
UPDATE 2011-08-08 14:22 EDT
The configuration of MySQL 5.5 Semisynchronous Replication is straightforward
Step 1) Add these four (4) lines to /etc/my.cnf
[mysqld]
plugin-dir=/usr/lib64/mysql/plugin
#rpl_semi_sync_master_enabled
#rpl_semi_sync_master_timeout=5000
#rpl_semi_sync_slave_enabled
Step 2) Restart MySQL
service mysql restart
Step 3) Run these commands in the MySQL client
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
Step 4) Uncomment the three rpl_semi_sync options after the plugin-dir option
[mysqld]
plugin-dir=/usr/lib64/mysql/plugin
rpl_semi_sync_master_enabled
rpl_semi_sync_master_timeout=5000
rpl_semi_sync_slave_enabled
Step 5) Restart MySQL
service mysql restart
All Done!!! Now just set up MySQL Replication as usual.
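To verify the plugins actually took effect after the restart, you can check from the MySQL client:

```sql
-- Both plugins should appear with Status ACTIVE
SHOW PLUGINS;

-- The rpl_semi_sync_* variables only exist once the plugins are loaded;
-- master_enabled / slave_enabled should read ON
SHOW VARIABLES LIKE 'rpl_semi_sync%';
```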
Best Answer
PXC (and other Galera solutions) require one network hop at COMMIT time. If your application can combine writes into a single transaction, that helps to some degree. Even so, each COMMIT will still take 300ms (or whatever your inter-site round trip is).
Regular MySQL can do Master-Master with both sides writable. That replication is asynchronous. However, there are a number of caveats. AUTO_INCREMENT columns should be configured with auto_increment_offset and auto_increment_increment. UNIQUE indexes are problematic; the app must figure out how to avoid 'simultaneous' insertion of the same UNIQUE value on both continents. Note that if such a collision does occur, replication simply stops until you manually intervene.
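As a sketch, the auto_increment settings in each master's my.cnf would look something like this, so that one master generates odd ids and the other even, and their AUTO_INCREMENT values can never collide:

```ini
# Master 1's my.cnf
[mysqld]
auto_increment_increment=2   # step by 2 on both masters
auto_increment_offset=1      # Master 1 generates 1, 3, 5, ...
```

```ini
# Master 2's my.cnf
[mysqld]
auto_increment_increment=2
auto_increment_offset=2      # Master 2 generates 2, 4, 6, ...
```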
Another possibility is NDB Cluster. However, that has a lot of differences in the syntax and functionality. The advantage is that it is "eventually consistent". You set up rules about what to do if, say, duplicate keys are inserted.
There's an old saying: you can have X, Y, or Z; pick 2. So, I ask, what are your 2 reasons for wanting multi-Master? And what 3rd item are you willing to sacrifice? (It sounds like latency is not #3.)