Please read my other answer to this question before actually using a MySQL proxy of any kind. If you have two master-master servers that a CMS writes to, and ten httpd servers that only read from them, you'll be fine, but (as pointed out in the other answer) that's not always the case. You've been warned.
MySQL Proxy is a simple program that sits between your client and MySQL server(s) and can monitor, analyze, or transform their communication. Its flexibility allows for unlimited uses; common ones include load balancing, failover, query analysis, query filtering and modification, and many more.
HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.
If you run it in TCP mode, it can be even better than Wackamole. If I had to choose between them, I would use HAProxy. HAProxy can also have many backends, while Wackamole can have only two. Note that HAProxy is "dumb": it connects sockets without inspecting what is inside the stream, whereas a dedicated MySQL proxy might, for example, have an option to route particular queries to specific servers.
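As a rough sketch of the TCP-mode setup described above (the addresses and the check user are placeholders, and it assumes an HAProxy version with `option mysql-check`):

```
# Hypothetical haproxy.cfg fragment: TCP-mode balancing for MySQL.
listen mysql
    bind *:3306
    mode tcp                         # raw TCP; HAProxy does not parse queries
    balance roundrobin
    option mysql-check user haproxy_check   # this user must exist in MySQL
    server db1 10.0.0.1:3306 check
    server db2 10.0.0.2:3306 check
```

This illustrates the "dumb" nature mentioned above: HAProxy only checks that the backend answers the MySQL handshake; it cannot route by query type.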
You really need to define quantitatively what level of "HA" you are looking for -- one man's "can-sleep-at-night" is another man's "this-thing-is-a-house-of-cards".
The minimum number of systems is two -- an active server and a standby replica (with something like Heartbeat or custom scripts to handle the failover).
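The failover-decision part of such a custom script can be sketched as follows. This is only a sketch of the standby-side monitoring logic; `FAIL_THRESHOLD` and the idea of counting consecutive failed health checks are assumptions, and actually checking the master and promoting the standby would require real commands not shown here:

```python
FAIL_THRESHOLD = 3  # assumed: consecutive failed checks before failing over

def monitor_tick(consecutive_failures, active_is_up):
    """One tick of the standby's monitor loop.

    Returns (updated_failure_count, promote_now). A single failed check
    does not trigger failover; only a run of failures does, which avoids
    flapping on transient network blips.
    """
    if active_is_up:
        return 0, False                     # healthy: reset the counter
    consecutive_failures += 1
    promote = consecutive_failures >= FAIL_THRESHOLD
    return consecutive_failures, promote
```

In a real script, `active_is_up` would come from something like a TCP connect or `mysqladmin ping`, and `promote` would trigger the actual promotion of the standby.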
With a MySQL Cluster this means at least two SQL nodes and two data nodes (to continue serving requests in the event of a failure of any one node). If you also need redundancy for the management server's functions, you need two of those as well.
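The minimal layout above can be sketched as a MySQL Cluster management config; all hostnames here are placeholders:

```
# Hypothetical config.ini for the management node (ndb_mgmd).
[ndbd default]
NoOfReplicas=2            # each fragment stored on two data nodes

[ndb_mgmd]
HostName=mgm1.example.com

[ndbd]
HostName=data1.example.com
[ndbd]
HostName=data2.example.com

[mysqld]
HostName=sql1.example.com
[mysqld]
HostName=sql2.example.com
```

With `NoOfReplicas=2`, losing any one data node leaves a full copy of the data available; a second `[ndb_mgmd]` section would be needed to make the management role redundant as well.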
The key part here is testing the failover in a development environment -- which means you need at least two more machines (or virtual machines). You also want to test upgrade and maintenance processes to ensure they won't trigger unintended consequences (ideally you should do nothing in production that hasn't been tested and proven in development).
If you fail to test properly, you may trigger an unintended failover, which means you incur the procedural (and possibly business) cost of a failover -- typically having to rebuild the former active server as the new standby.
This protects against hardware failures: power supply, NIC, disk, or switch (provided the servers are on separate switches).
Note that this doesn't just apply to your DB servers -- you need two of *everything*: web servers, DB servers, firewalls, DNS servers...
Redundancy of one component is meaningless if you still have a bunch of single points of failure elsewhere in your stack.
The next level of protection is against network failures ("What if my ISP goes down?") -- this requires replicating the entire redundant environment above in a remote datacenter.
What's important here is that you diversify network connections, power, etc. -- you don't want your standby datacenter across the street, fed by the same power and fiber as your main facility.
A company I consulted for required that any remote facility used for DR be "at least 15 degrees of longitude away" (i.e. in the next time zone). A common practice in the US is East Coast/West Coast, e.g. NY/Chicago or LA/Texas.
The next level above that is truly distributed resources (think Google), which requires a database system that supports replication and sharding (think MongoDB).
If implemented properly there's almost no chance of a true "outage", though service may occasionally be degraded and recovery can take a while.
Best Answer
It depends on how you use your database. If it is read-heavy with few writes (like a blog or a newspaper) you could run one MySQL server for writes and two for reads: set up the write server as a replication master and the two read servers as slaves.
All application servers need to know about both the write server and one of the read servers; that way, when you load-balance across application servers you automatically balance the reads between the MySQL servers. It's also easy to add another MySQL + application server pair once demand grows.
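The same write/read split can also be expressed inside one application. The sketch below is an assumption-heavy illustration, not a recommended library: the DSN strings are placeholders, and the query classification is deliberately naive (it treats anything starting with SELECT as a read):

```python
import itertools

class MySQLRouter:
    """Route writes to the master DSN and reads round-robin across replicas.

    All DSNs here are hypothetical placeholders; a real application would
    open connections from them with its MySQL driver of choice.
    """
    def __init__(self, master_dsn, replica_dsns):
        self.master_dsn = master_dsn
        self._replicas = itertools.cycle(replica_dsns)

    def dsn_for(self, query):
        # Naive classification: only plain SELECTs may go to a replica;
        # everything else (INSERT/UPDATE/DDL/...) must hit the master.
        if query.lstrip().upper().startswith("SELECT"):
            return next(self._replicas)
        return self.master_dsn
```

Note that replicas lag the master, so even a scheme like this must tolerate slightly stale reads.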
If, on the other hand, you have a write-heavy site (I can't even find an example), you need to do some research on sharding. It's normally not recommended unless you really need it.