The simplest way to manage ZFS across all these drives of various sizes would be:
zpool create pool /dev/sd[abcdef]
zfs set dedup=on pool
zfs set copies=3 pool
zfs set atime=off pool
I haven't tried dedup, but it seems like a cool feature. copies=3 tells ZFS to store three copies of each block within the file system. ZFS will try to put these copies on different disks, giving you redundancy similar to RAID. I'm not sure whether this gives you the multiple-spindle performance increase of RAID, but I would hope that it does. atime=off gives a small read performance boost, although it may break some /var/spool-type things with mail. Finally, ZFS checksums are on by default. I'd do this to get file-level redundancy without the management headache of trying to turn all those drives into a RAID by matching up partition sizes and such.
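To confirm the properties took effect, you can query them back (assuming the pool name `pool` from the commands above):

```shell
# List the properties we just set, plus checksum (on by default)
zfs get dedup,copies,atime,checksum pool
```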
As of yet, there isn't a way to convert between zpool structures, and there isn't a way to expand a RAIDz. To my knowledge, RAIDz is something you have to set up from the start.
That said, there is an exception. If you have three disks in a RAIDz configuration, basically one disk is used for redundancy. You can concatenate vdevs within a zpool, so you can create a second three-disk RAIDz and have the two work together. That way each three-disk RAIDz is fault tolerant within itself. The downside is that now you have to give up two drives for fault tolerance, whereas if you had built the RAIDz from six drives at the start you would only have given up one.
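A sketch of that layout, using hypothetical device names sda through sdf:

```shell
# First three-disk RAIDz vdev
zpool create pool raidz sda sdb sdc
# Second three-disk RAIDz vdev, striped with the first in the same pool
zpool add pool raidz sdd sde sdf
```

Each RAIDz vdev tolerates one failed disk on its own, but two disks' worth of capacity go to parity instead of one.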
There is a second exception. (I've only done this in VMware to test it.) If you have a RAIDz zpool and want to increase the capacity, you can swap out the drives one by one with larger-capacity drives. Then, after the last drive has resilvered, (I think) you can export the pool and import it back, and ZFS will see the new capacity on the drives and begin to use it. I read this off a blog I can't locate, and it was a while ago, so there may be additional steps.
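The swap-and-resilver cycle would look roughly like this (device names are hypothetical; repeat the replace step for each disk, waiting for resilvering to finish each time):

```shell
zpool replace pool sda sdg   # swap one small drive for a larger one
zpool status pool            # watch until resilvering completes
# ...repeat for the remaining drives, then:
zpool export pool
zpool import pool            # pool should now show the larger capacity
```

On newer ZFS versions, `zpool set autoexpand=on pool` is supposed to make the extra capacity appear without the export/import step.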
Some people have considered using the copies property of ZFS to spread extra copies across a striped zpool. Here is a site that talks about the copies property. ZFS will attempt to put the copies on different drives, but it doesn't have to, so the data may or may not be fault tolerant.
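As a sketch, with hypothetical device names, that approach would be a plain stripe plus extra block copies:

```shell
# No RAIDz here: a plain stripe with no vdev-level redundancy
zpool create pool sda sdb sdc
# ZFS tries, but is not required, to place the copies on different disks
zfs set copies=2 pool
```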
I'm hoping that FreeNAS, because it is built on a flavor of BSD, will get the latest bits soon. OpenIndiana has the latest versions of ZFS incorporated (zpool version 28 and zfs version 5). Also, I've read that ZFS has been ported to Linux natively (not just with FUSE).
I used to use FreeNAS because it was easy to set up. Then I moved around to various OSes chasing the latest versions of ZFS, mainly because I wanted the dedupe feature to extend the capacity of my storage.
I know that when ZFS gets in-place migration between zpool types and dynamic expansion of RAIDz, many ZFS people will be happy.
Best Answer
I have a FreeNAS box running on a Celeron processor that is just slightly faster than the average desktop Atom.
Mine handles 4 hard drives just fine. Remember that file operations are not CPU intensive; they are handled separately by the I/O controller.
The CPU will only be heavily used if you choose software RAID. On my box, I stream movies all the time and the CPU barely goes above 20%, so I am sure you will be fine.
As for hard drive speeds, this is a hard one. 5400 RPM drives use less electricity, but they may slow down write activity across the network (heavy reads and copying as well, though they should be fast enough for streaming and regular use). Only you can decide this. I personally went for 7200 RPM drives on mine.