You could use a sharded cluster to distribute the collections more evenly across your available hosts. In general I would recommend not going beyond 5 shards, with each shard backed by at least 2 data bearing nodes for redundancy plus an arbiter to break ties.
Once you have your 5 shards (or however many you end up with), you can then "pin" each collection to a particular shard with tagging. I could explain that process in detail, but as usual, Kristina Chodorow has beaten me to it:
http://www.kchodorow.com/blog/2012/07/25/controlling-collection-distribution/
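To give a flavor of it, here is a minimal sketch of that tagging approach, run from a mongos shell; the database, collection, shard, and tag names are all hypothetical:

    // Shard the collection (tag-based pinning only applies to sharded collections).
    sh.enableSharding("logs")
    sh.shardCollection("logs.events", { _id: 1 })

    // Tag the target shard, then tag the collection's entire key range
    // so the balancer keeps all of its chunks on that shard.
    sh.addShardTag("shard0002", "events-home")
    sh.addTagRange("logs.events", { _id: MinKey }, { _id: MaxKey }, "events-home")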
Although it is not required, and you can have a single database if you wish, I would suggest having more than one database in order to benefit as much as possible. Locking is at the database level, so if two collections on the same shard live in the same database, concurrent writes to them will still compete for the same lock.
In fact, there is a simpler solution than tagging (though tags are the most scalable solution). If you create separate databases for all of your collections (or at least a subset of them), you can have each database reside on a different shard and not shard the collections at all.
To explain: unless you enable sharding for a collection, all data for a given database resides on that database's "primary" shard (primary shards are designated in a round-robin fashion). Hence if you create 5 databases on a 5 shard cluster (for example), each will have a different shard as its primary, and you will have achieved a crude (but effective) distribution of load by using separate databases. I added more detail regarding this in a previous answer.
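As a sketch of what that looks like in practice, assuming default shard names and some hypothetical database names, you can also place each database's primary shard explicitly instead of relying on the round-robin assignment:

    // Run against a mongos. An unsharded database lives entirely on its
    // primary shard, so spreading the primaries spreads the write load.
    db.adminCommand({ movePrimary: "app1", to: "shard0000" })
    db.adminCommand({ movePrimary: "app2", to: "shard0001" })
    db.adminCommand({ movePrimary: "app3", to: "shard0002" })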
So, assuming that node(n) is a physical box and host(n) is a mongod process... If the link between the nodes fails, nothing good will happen :).
node1 will continue operating as normal, and node2 will just sit there wondering what happened. The regular secondary will have only 2 of 4 votes, which is not the strict majority required to elect a primary.
If node1 fails, then you will have to manually reconfigure the set so that the secondary on node2 becomes the primary.
When node1 comes back online, the previous primary will rejoin as a secondary; it will detect any writes that were never replicated to the new primary and roll them back, and applying that rolled-back data will require manual intervention.
Keeping in mind that you need a strict majority of votes to elect a new primary, I would try putting the arbiters on their own system, independent of the two nodes.
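As a sketch of that layout, with a single arbiter and hypothetical hostnames, the set would be initiated like this from the mongo shell; with the arbiter on a third machine, either data bearing node can still see 2 of 3 votes if the other fails:

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "node1.example.com:27017" },
        { _id: 1, host: "node2.example.com:27017" },
        // Arbiter on its own box: votes in elections but holds no data.
        { _id: 2, host: "arbiter.example.com:27017", arbiterOnly: true }
      ]
    })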
The first part is the easy answer: don't use master-slave replication. It has been deprecated for a couple of years at this point and will be removed in the future (once a few long-standing limitations on replica sets are removed).
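The migration itself is straightforward; a minimal sketch, assuming a hypothetical set name and hostname, after restarting the former master with --replSet rs0:

    rs.initiate()                            // turn this node into a one-member set
    rs.add("former-slave.example.com:27017") // add the former slave as a secondary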
The second part, how to run with 2 nodes in a replica set, is a bit more complicated. The simplest answer is: don't. Running an arbiter alongside the data bearing node on one of the hosts, to get to 3 members, is actually preferable. Consider these scenarios for a plain 2 node set (one voting, data bearing node per host):

- Host 2 fails: the node on Host 1 can see only 1 of 2 votes, which is not a strict majority, so it steps down; the set has no primary and is effectively read-only.
- Host 1 fails: the same thing in reverse; the node on Host 2 cannot elect itself with 1 of 2 votes.
- The network link between the hosts fails: neither side can see a majority, so neither side can have a primary.

In other words, with 2 voting nodes, any single failure leaves the set without a primary.
Now, let's add an arbiter on Host 1 and see how that impacts things:

- Host 2 fails: the data bearing node on Host 1 still sees 2 of 3 votes (itself plus the arbiter), so it remains (or becomes) primary.
- Host 1 fails: the data bearing node and the arbiter are lost together, so the node on Host 2 has only 1 of 3 votes and cannot become primary.
- The link between the hosts fails: Host 1 retains a majority and keeps a primary; the node on Host 2 simply waits.

The set can now survive the loss of Host 2 (or a network split) with a working primary, though losing Host 1 still takes the set down.
In fact, if you make the second node non-voting, that is actually better than leaving it as a regular voting member, and it gives you the same list of scenarios as having an arbiter on Host 1: the node on Host 1 holds the only vote, so it stays primary when Host 2 fails, and no primary is possible when Host 1 fails.
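A minimal sketch of the non-voting option, assuming the second node is members[1] in the replica set config (a non-voting member must also have its priority set to 0):

    cfg = rs.conf()
    cfg.members[1].votes = 0     // no longer counts toward the voting majority
    cfg.members[1].priority = 0  // a non-voting member cannot become primary
    rs.reconfig(cfg)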
Strictly speaking, the answer to your question is that best practice is not to run with two nodes at all, since doing so actually makes the set more fragile from an operational perspective (though at least you do have a copy of your data on another host for disaster recovery).
If you have to stick with 2 data bearing nodes, the best practice recommendation is to run an arbiter on one of the hosts, with the non-voting option as your alternative.
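For completeness, once an arbiter mongod is running on Host 1 (on its own port, with its own small dbpath), adding it is a one-liner from the mongo shell; the hostname and port here are hypothetical:

    rs.addArb("host1.example.com:27018")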