The only time that moving the journal is an absolute recommendation is if you have to use a direct NFS mount - NFS is not recommended for MongoDB in general but in particular it does not play well with the journal.
In general, the journal has quite a different access pattern from the rest of your data (sequential versus random access), so it is often a good idea to separate the journal and the data from a performance perspective. Note that this is a very broad generalization and will vary depending on your usage of MongoDB, but it will generally hold true.
Since your questions are largely about EC2 and EBS, NFS will not play a part here, but I thought it worth mentioning for the broader context. Now, on to the specifics.
For EC2/EBS, you will generally only need to separate the journal out if you are seeing write IO contention (or overall IO contention given the nature of EBS) - it will move IO to a different disk and free up some capacity on the data disk. Of course, with EBS your IO is also dependent on the network IO available on the instances and that is dependent on your instance size (and whether you have opted for P-IOPS), hence a lot of variables.
If you are seeing high IOWait times and your disk looks write bound (see iostat), this is something you should consider, but only as a temporary measure, because it will be a small tweak compared to increasing the available IO. There are plenty of options for that depending on your starting point, like adding more EBS drives to the RAID or taking advantage of P-IOPS provisioning. You can also occasionally get better performance by changing instance type so that you have less network contention. Each of these may be more effective than moving the journal, with fewer headaches.
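As a quick illustration of the kind of check meant here, the snippet below flags a device as write-heavy from a captured iostat -x line. The device name, sample numbers, column position, and threshold are all illustrative assumptions — the column layout of iostat -x varies between sysstat versions, so check the header of your own output:

```shell
# In practice you would watch live output, e.g.:  iostat -x 5
# Here we parse one captured sample line instead. Field 5 is assumed to be
# w/s (writes per second) for this sysstat version; 100 w/s is an arbitrary
# threshold for the example.
sample='xvdf  0.00 12.00  1.00 250.00  0.01 12.50 98.00'
echo "$sample" | awk '{ if ($5 > 100) print $1 " looks write-bound (" $5 " w/s)" }'
```

If a data volume consistently trips a check like this while the journal shares it, that is the contention scenario where splitting the journal out can buy some headroom.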
To explain, there are other considerations here - snapshots for one. Thanks to the journal you no longer have to fsync and lock the database to get a consistent snapshot (be it EBS or LVM or other). However the journal has to be included in the snapshot for that to be the case. Hence whatever node you use for backing up, if you intend to snapshot without taking downtime for that node, then you need to make sure the journal is included.
Finally, one of the uses of the journal is to facilitate recovery from an unclean shutdown, such as an OS level crash/reboot. If the journal is on the ephemeral disk in EC2, it will be blown away by such a reboot and hence not be useful in that context. Any such crash/reboot with such a configuration would therefore require a resync from scratch or a restore from backup/snapshot should it occur.
Overall, like most configuration decisions, you have to weigh the pros and cons and pick the solution most relevant for your use case. Hopefully this will give you enough information to make an informed choice.
Best Answer
You can combine it all on one disk if you wish; you are not obligated to split.
Journal
The journal will take 3GB (or less than 400MB if you use the --smallfiles option).
Journal + Pre-Allocation
Be aware: if you don't use --smallfiles, then at least 8GB (journal and oplog included) will be pre-allocated on your disk. This is not lost space, just space reserved to improve the speed of mongo. Using --smallfiles, only 1.4GB will be preallocated.
For discovery and testing purposes, start with --smallfiles.
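For reference, a sketch of how that looks when starting mongod — the flag name is per the MongoDB 2.x documentation, and the dbpath is only an example:

```shell
# Start mongod with reduced preallocation for testing (dbpath is an example):
# mongod --dbpath /data/db --smallfiles
# Or the equivalent mongod.conf line:
# smallfiles = true
```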
Logfiles
Logfiles will depend on the verbosity and the activity of the system, but since you speak about an 8GB data disk, it won't be that much. The default logs only some system messages and errors. (http://docs.mongodb.org/manual/reference/configuration-options/)
To rotate the logfiles, send "kill -SIGUSR1 pid" or run mongo --port 27017 --eval "db.runCommand({logRotate:1});" admin (http://docs.mongodb.org/manual/reference/command/logRotate/). I then delete the logs older than 3 days daily via a crontab.
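A sketch of the cleanup half of that, assuming rotated logs are named mongod.log.<timestamp>. It is demonstrated on a temp directory so it is safe to run as-is; substitute your real log directory and schedule the find line from cron:

```shell
# Delete rotated mongod logs older than 3 days. The log directory and file
# naming are assumptions; a temp dir stands in for e.g. /var/log/mongodb.
logdir=$(mktemp -d)
touch -d "5 days ago" "$logdir/mongod.log.2014-01-01T00-00-00"   # stale rotated log
touch "$logdir/mongod.log"                                       # current log
find "$logdir" -name "mongod.log.*" -mtime +3 -delete
ls "$logdir"    # only the current log remains
rm -rf "$logdir"
```

Note that the pattern mongod.log.* deliberately does not match the live mongod.log file, so only rotated copies are removed.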
Splitting on different drives
The reason is to optimize each disk for its purpose. The journal is an append-only write-ahead log, written strictly in sequence, so it needs fewer IOPS. And logs are logs: they just append information. Data access, on the other hand, jumps around a lot when reading, and sometimes when writing too, filling up gaps that were freed. While the journal and log are written away to other disks, the data disk doesn't lose time on them. Every little bit can help on intensive systems. The next step after that is replication and sharding to spread the load.
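One common way to do that split is a symlink inside the dbpath, sketched below with temp directories standing in for the real paths. Stop mongod before doing this for real; the mount point and dbpath names are assumptions:

```shell
# Move the journal onto its own volume and point a symlink at it; mongod
# follows the symlink inside the dbpath. Temp dirs stand in for
# /var/lib/mongodb and the journal volume's mount point.
dbpath=$(mktemp -d)
fastdisk=$(mktemp -d)
mkdir -p "$fastdisk/journal"
ln -s "$fastdisk/journal" "$dbpath/journal"
readlink "$dbpath/journal"   # journal writes now land on the other disk
rm -rf "$dbpath" "$fastdisk"
```

The same symlink trick works for pointing the journal at any faster or less-contended device, with the snapshot and ephemeral-disk caveats from the answer above.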
On https://university.mongodb.com you can get more information about this if you are interested. I am currently following M202 (MongoDB Advanced Deployment and Operations), which offers some specific information on optimization.