In SQL Server 2012 you can have read-only secondaries: Availability Groups let you mirror a database or set of databases to another server and run read-only queries against the secondaries even while they are being synchronized.
The downside: this requires Enterprise Edition on both nodes, which in SQL Server 2012 can only be licensed per core (not Server+CAL), and the nodes must be members of a Windows Server Failover Cluster (WSFC), so the OS needs to be Enterprise Edition as well (this is less of a big deal, since the additional cost of Enterprise at the OS level is laughable compared to the jump in SQL Server license costs).
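As a sketch of how clients reach those readable secondaries: with read-only routing configured, a client adds `ApplicationIntent=ReadOnly` to its connection string and the Availability Group listener redirects it to a readable secondary. The listener, port, and database names below are placeholders:

```text
Server=tcp:AGListener,1433;Database=SalesDB;
Integrated Security=SSPI;ApplicationIntent=ReadOnly;
```

Without `ApplicationIntent=ReadOnly` (or with `ReadWrite`, the default), the same listener sends the connection to the primary replica.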
In previous versions you could do this with log shipping (subject to the annoying limitations @datagod already mentioned) or with mirroring + snapshots (which also needs Enterprise Edition). I haven't seen any of our customers using replication for this specific requirement but I suppose that is possible as well.
You really need to define what level of "HA" you are looking for quantitatively -- one man's "can-sleep-at-night" is another man's "this-thing-is-a-house-of-cards".
The minimum number of systems is two (2) -- an active and a standby replica (with something like Heartbeat or home-grown scripts to handle the failover).
With a MySQL cluster this means at least two SQL nodes and two data nodes (to continue serving requests in the event of a failure of any one node). If you also need redundancy for the management node's functions, you need two of those as well.
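A minimal MySQL Cluster layout along those lines can be sketched in the management node's `config.ini`; the hostnames here are placeholders:

```ini
# config.ini read by ndb_mgmd -- hostnames are hypothetical
[ndbd default]
NoOfReplicas=2              # each table fragment is stored on two data nodes

[ndb_mgmd]
HostName=mgmt1.example.com  # add a second [ndb_mgmd] for management redundancy

[ndbd]
HostName=data1.example.com
[ndbd]
HostName=data2.example.com

[mysqld]
HostName=sql1.example.com
[mysqld]
HostName=sql2.example.com
```

With `NoOfReplicas=2`, the cluster keeps serving requests if either data node fails; the two `[mysqld]` SQL nodes give clients a second entry point.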
The key part here is testing the failover in a development environment -- which means you need at least two more machines (or a virtual machine). You also want to test upgrade and maintenance processes to ensure they won't trigger unintended consequences (Ideally you should do nothing to production that hasn't been tested and proven in Development).
If you fail to test properly you may trigger an unintended failover, which means you incur the procedural (and possibly business) cost of a failover -- typically having to rebuild the former active server as the new standby.
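The "home-grown scripts" approach to failover usually boils down to a heartbeat loop with a miss threshold, so one transient network blip doesn't trigger the expensive rebuild described above. This is a hypothetical sketch of that decision logic only (the actual promotion step is out of scope):

```python
# Sketch of a home-grown failover monitor (illustrative, not a real tool):
# promote the standby only after the active node misses several
# consecutive heartbeats, to avoid flapping on transient failures.

class FailoverMonitor:
    def __init__(self, threshold=3):
        self.threshold = threshold  # consecutive misses before failover
        self.misses = 0
        self.failed_over = False

    def record_heartbeat(self, ok):
        """Feed one heartbeat result; return True when failover should fire."""
        if ok:
            self.misses = 0          # any success resets the counter
        else:
            self.misses += 1
        if not self.failed_over and self.misses >= self.threshold:
            self.failed_over = True  # fire at most once
            return True
        return False

monitor = FailoverMonitor(threshold=3)
results = [monitor.record_heartbeat(ok) for ok in (True, False, False, False)]
print(results)  # failover fires only on the third consecutive miss
```

Tools like Heartbeat/Pacemaker implement this (plus fencing and quorum) far more robustly, which is exactly why the failover path must be exercised in development first.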
This protects against hardware failures (power supply, NIC, disk, or switch, provided the nodes are on separate switches).
Note that this doesn't just apply to your DB servers -- you need two of *everything*: web servers, DB servers, firewalls, DNS servers...
Redundancy of one component is meaningless if you still have a bunch of single-points-of-failure in your stack.
The next level of protection is network failures ("What if my ISP goes down?") - this requires replicating your whole redundant environment above to a remote datacenter.
What's important here is that you diversify network connections, power, etc -- You don't want your standby datacenter across the street where it's fed by the same power and fiber as your main facility.
A company I consulted for had a requirement that any remote facility used for DR be "at least 15 degrees of longitude away" (i.e. "in the next time zone"). A common practice in the US is East Coast/West Coast pairings such as NY/Chicago or LA/Texas.
The next level above that is truly distributed resources (think Google) which requires a database system that supports replication and sharding (think MongoDB).
If implemented properly there's almost no chance of a true "outage", though service may occasionally be degraded and recovery can take a while.
PostgreSQL works differently. As soon as the data are committed on the single primary database, the write is effective. If you use synchronous replication, that commit waits until the required standby servers (configured by `synchronous_standby_names`) have received the information.
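A minimal `postgresql.conf` sketch for the primary, with placeholder standby names (the `FIRST n` syntax requires PostgreSQL 9.6 or later; older releases take a plain comma-separated list):

```ini
# postgresql.conf on the primary -- standby names are hypothetical
synchronous_commit = on
# COMMIT does not return until one of the two named standbys
# has confirmed receipt of the WAL
synchronous_standby_names = 'FIRST 1 (standby1, standby2)'
```

Each standby identifies itself to the primary via the `application_name` in its replication connection, which is what must match the names listed here.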