For the sake of simplicity, I recommend MySQL Circular Replication only. Here is why:
There are many technologies and topologies that are far superior to MySQL Circular Replication. My favorite, hands down, is DRBD (Distributed Replicated Block Device). However, DRBD works great when the server pair is in the same building, data center, and rack. It's even better when using a crossover cable on a private 192.168.x.x subnet between the DRBD Primary and DRBD Secondary. Unfortunately, DRBD performs poorly over long distances between two locations, although it can still work. No network topology currently available will give you satisfactory DRBD performance between two data centers.
Once you set up MySQL Circular Replication between the two DB servers in the two data centers, the only tuning needed is for the network. In essence, replication performance is a function of network settings (speed and latency of binary log transmission in a MySQL Replication setup) and disk I/O (DRBD).
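As a rough sketch, the network-facing my.cnf tuning on each master might look like the following (every value here is an illustrative assumption, not a recommendation for your workload):

```ini
# my.cnf fragment on each replicating master -- illustrative WAN tuning
[mysqld]
slave_net_timeout              = 60   # detect a dropped WAN link sooner than the default
slave_compressed_protocol      = 1    # compress binlog traffic between the sites
sync_binlog                    = 1    # flush the binlog per commit (durability over speed)
innodb_flush_log_at_trx_commit = 1    # full InnoDB durability, costs fsyncs
max_allowed_packet             = 32M  # large row events must fit in one packet
```

The durability settings (`sync_binlog`, `innodb_flush_log_at_trx_commit`) trade write speed for crash safety; on a write-heavy system you would benchmark before committing to them.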
As an example, here is an alternative that gives you better redundancy:
Setup a DRBD Pair in both locations
DRBD Pair in Site #1 with VIP 111.111.111.111
DRBD Pair in Site #2 with VIP 222.222.222.222
Setup MySQL Circular Replication between the DRBD Primary Servers under these conditions:
For site #1, use 222.222.222.222 as the Master_Host in MySQL
For site #2, use 111.111.111.111 as the Master_Host in MySQL
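A minimal sketch of wiring this up (the server IDs, replication account, and log coordinates are placeholders; the VIPs come from the example above):

```sql
-- On the Site #1 DRBD Primary, point at Site #2's VIP
CHANGE MASTER TO
    MASTER_HOST='222.222.222.222',
    MASTER_USER='repl',              -- placeholder replication account
    MASTER_PASSWORD='repl_pass',     -- placeholder password
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;

-- On the Site #2 DRBD Primary, point at Site #1's VIP
CHANGE MASTER TO
    MASTER_HOST='111.111.111.111',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_pass',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;
```

Since both sites can take writes, also set `auto_increment_increment=2` in each my.cnf, with `auto_increment_offset=1` on one site and `2` on the other, so AUTO_INCREMENT keys never collide between the two masters.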
Although this introduces a level of complexity, you now have two levels of redundancy: DRBD within each site and MySQL Circular Replication between sites. You also get the benefit of running backups via mysqldump against the DRBD Primary at the hot standby site.
As for failover, DRBD provides automatic failover at any one site.
Only in the event that a data center is totally unavailable would you use the DB VIP at the hot standby site.
UPDATE
I just did a double take and noticed that you are using Drupal 6. I am glad you will be converting all the Drupal tables to InnoDB. This removes any chance of MyISAM table updates causing table locks that freeze DB connections that are simply reading MyISAM tables. Any DML statement (INSERT, UPDATE, DELETE) against a MyISAM table WILL ALWAYS take a full table lock! Using InnoDB introduces row-level locking, which eliminates full table locks.
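A sketch of finding and converting any remaining MyISAM tables (the schema name `drupal` and the table name are assumptions for illustration):

```sql
-- List the Drupal tables still on MyISAM
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'drupal'          -- assumed schema name
  AND engine = 'MyISAM';

-- Then convert each table listed, for example:
ALTER TABLE drupal.node ENGINE=InnoDB; -- repeat per table from the query above
```

Each ALTER rebuilds the table, so run the conversions during a maintenance window on large tables.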
In addition, DRBD becomes your friend when everything is InnoDB because crash recovery will be consistent between the DRBD Pair. Contrariwise, DRBD with MyISAM buys you nothing because a crashed MyISAM table on the DRBD Primary is simply duplicated to the DRBD Secondary as, you guessed it, a crashed MyISAM table.
UPDATE #2
You should use two levels of redundancy
Level 1 : At each data center, use DRBD.
http://dev.mysql.com/doc/refman/5.1/en/ha-drbd.html
Set up a pair of DB Servers
Startup DRBD
Startup MySQL on the DRBD Primary
This creates redundant data at the disk level.
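For reference, a Level 1 DRBD resource might be sketched like this (the hostnames, backing disks, and addresses are all assumptions for illustration):

```ini
# /etc/drbd.d/mysql.res -- illustrative resource definition for the DB pair
resource mysql {
  protocol C;                    # synchronous replication within the site
  on db1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;         # assumed backing partition
    address   192.168.0.1:7788;  # crossover-cable subnet, as discussed above
    meta-disk internal;
  }
  on db2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}
```

MySQL's datadir would live on the filesystem mounted over /dev/drbd0, and mysqld runs only on whichever node is currently DRBD Primary.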
Level 2 : You should set up MySQL Circular Replication between the DRBD Primary of DataCenter#1 and the DRBD Primary of DataCenter#2. Each DRBD Primary will be running MySQL and will act as both Master and Slave to the other.
I have set up topologies like this for clients and I consider them quite stable.
I'm going to highlight one of the products listed in the wiki link provided by Jack Douglas.
Galera is a product I've been hearing a lot about recently, but I have not yet had occasion to implement it in a production environment.
Back in July, Percona wrote a brief blog post about implementing it, but mentioned there was no 'production' release at that time. They've since released the first major version, v1.0, and Vladimir has been shedding more light on it this month, along with some benchmarks.
Because of your write-heavy environment, I'll highlight this use case:
Distributing writes across the cluster will harness the CPU power in slave nodes for better use to process client write transactions. Due to the row based replication method, only changes made during a client transaction will be replicated and applying such a transaction in slave applier is much faster than the processing of the original transaction. Therefore the cluster can distribute the heavy client transaction processing across many master nodes and this yields in better write transaction throughput overall.
Disclaimer: I'm not affiliated with Galera or Percona, and have not used it personally... yet.
Best Answer
This sounds like a good use case for Galera. http://codership.com/content/using-galera-cluster
You could also use something like this: each server is a master for its copy of the user table (I assume the user table is sharded?). Then there could be a central server running MariaDB or MySQL + Tungsten Replicator that is a slave of all the other servers.
And then you could create another master (extra instance on one of the servers?) which hosts the master copy of the content tables.
CM = Content Master
U  = User Master
UC = User Combined

Server 1:    CM + UC (2 instances)
Servers 2-5: U

For the content tables (1 master, 4 slaves):
CM -> U1 -> U2 -> U3 -> U4

For the user tables (4 masters, 1 fan-in slave):
U1 -> UC
U2 -> UC
U3 -> UC
U4 -> UC
Galera and MySQL Cluster provide synchronous cluster where you can write to any server. MySQL Cluster needs 2 data nodes and 2 SQL nodes as a minimal setup (the mgmt node can be combined with a SQL node). MySQL Cluster does not use InnoDB or MyISAM. Galera needs 3 nodes and does support InnoDB.
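If you do investigate Galera, the per-node configuration can be sketched roughly as follows (the provider library path and node addresses are assumptions for illustration):

```ini
# my.cnf fragment on each Galera node -- illustrative three-node cluster
[mysqld]
binlog_format          = ROW            # Galera requires row-based events
default_storage_engine = InnoDB         # Galera replicates InnoDB only
wsrep_provider         = /usr/lib/galera/libgalera_smm.so  # assumed path
wsrep_cluster_address  = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_cluster_name     = my_cluster
wsrep_node_address     = 10.0.0.1       # this node's own address
```

The first node is bootstrapped with an empty `gcomm://` address; the others join by listing the existing members, which is why three nodes is the practical minimum for quorum.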
Investigate Galera, Tungsten (and MySQL Cluster aka NDB?).
Read MySQL High Availability (O'Reilly).