I've been using FreeBSD 8.0, and subsequently the 8.0-stable February 2010 snapshot, to experiment with ZFS for a couple of months. The system has a couple of independent 4-disc RAIDZ1 pools. At first things seemed to go more or less perfectly, but I've run into some increasingly disturbing problems which make me think that, under some circumstances and configurations, it may be wise to avoid this setup.

My first problem is not necessarily with the stability or functionality of FreeBSD / ZFS themselves, but rather with the reliability and functionality of certain device drivers and disc drives under FreeBSD. I found that the default ata/ide driver didn't support the controller I'm using, but the siis Silicon Image storage driver had the port-multiplier SATA support needed to make the drives work with FreeBSD 8. However, upon closer inspection that driver code isn't really production ready IMHO -- it didn't gracefully handle the first disc-related soft error / timeout / retry condition, in which a drive in the array did something like delay responding for a few dozen seconds. I don't know exactly what happened, but it took around a minute for the array to time out, reset, and re-establish operation, during which time every single drive in the array was 'lost' from operational status, resulting in an unrecoverable data fault at the higher filesystem level. AFAICT even the siis driver's maintainer says the driver's timeout / reset handling isn't fully complete / robust / optimized yet. Fair enough, but the point is: no matter how good the OS or ZFS is, an unreliable disc drive, controller, or driver can disrupt overall operations badly enough to cause fatal errors and data loss despite ZFS. SMART diagnostic requests also don't seem to work with this particular controller driver. As for what caused the error -- flaky Seagate drives / firmware?
I don't know, but having one drive error cause the whole array to 'fail' despite the RAIDZ defeats the whole point of RAIDZ's reliability.
The behavior subsequent to the issue with zpool scrub / zpool status etc. was also a bit suspicious, and it's not really clear whether that diagnostic / recovery process worked correctly at the ZFS/zpool level; certainly I got some mixed messages about error statuses and error clearing. The error indications more or less disappeared after a reboot, despite the lack of an explicit zpool clear command; maybe that's intended, but if so it wasn't suggested by the zpool status output.
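For reference, the normal check / scrub / clear cycle looks roughly like this sketch (the pool name "tank" is hypothetical; substitute your own):

```shell
zpool status -v tank   # show per-device read/write/checksum error counters
zpool scrub tank       # start a full verification pass over the pool's data
zpool status tank      # re-check; scrub progress and any errors appear here
zpool clear tank       # explicitly reset the error counters once resolved
```

In my experience the counters should persist across reboots until cleared, which is part of why the behavior above seemed suspicious.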
Potentially more serious: after a few days of uptime, something seems to have SILENTLY gone wrong, wherein large parts of the array containing multiple ZFS filesystems just "vanished" from directory listings (ls) and from normal I/O access. IIRC df -h, ls, etc. did not report the filesystems as even existing, whereas zpool list / zpool status continued to indicate the expected amount of consumed storage in the pool -- storage that wasn't accounted for by any listed mounted or unmounted filesystem. /var/log/messages contained no error messages, and operations had been proceeding totally normally, AFAICT, prior to that problem. zpool list / zpool status did not indicate problems with the pool. A zfs unmount -a failed with a busy indication, for no reason obviously related to interactive usage, for several minutes before the last of the mounted ZFS filesystems would unmount. Rebooting and rechecking /var/log/messages, zpool status, and zpool list revealed nothing. The previously missing filesystems did in fact remount when asked to do so manually, and initially appeared to have the correct contents, but after a minute or so of mounting various ZFS filesystems in the pool it was noted that some had again disappeared unexpectedly. It's possible that I've done something wrong in defining the ZFS filesystems and have somehow caused the problem, but at the moment I find it inexplicable that a working system doing I/O to various ZFS directories can suddenly lose sight of entire filesystems that were working just fine minutes/hours/days ago, with no intervening sysadmin commands modifying the basic zpool/zfs configuration.
Granted, I'm running the Feb '10 stable snapshot, which is NOT recommended for production use. Then again, several relatively noteworthy fixes to known ZFS/storage issues have been committed to the stable branch since the 8.0 release, so running stock 8.0 might be unsatisfactory in terms of reliability / features for some people because of those very issues.
Anyway, just a few weeks of fairly light testing have turned up enough potentially disastrous reliability / functionality problems -- not all of which seem attributable to the particular deficiencies of the storage drives / controller / driver -- that I'm cautious about trusting FreeBSD 8.0 + ZFS for production / reliability use without a very carefully controlled hardware and software configuration and an offline backup strategy.
OpenSolaris is a no-go right now anyway, IMHO, even if you wanted to run it -- AFAICT there are serious known problems with ZFS deduplication that pretty much render it unusable, and that and other issues seem to have resulted in a recommendation to wait for a few more patch versions before trusting OpenSolaris+ZFS, especially with a dedup system. B135/B136 seem to have simply gone unreleased without explanation, along with the 2010.03 major OS release. Some say Oracle is just being tight-lipped about a schedule slippage and that the expected code will be belatedly released eventually, whereas others wonder whether we'll ever see the full set of features that were in development at Sun released as future open-source versions by Oracle, given the transition in ownership / leadership / management.
IMHO I'd stick with mirrors only, and only with very well vetted / stable storage controller drivers and disc drive models, for optimum ZFS reliability under FreeBSD 8 -- and I'd probably wait for 8.1 even so.
As of yet, there isn't a way to convert between zpool structures, and there isn't a way to expand a RAIDZ. To my knowledge, RAIDZ is something you have to set up from the start.
That said, there is an exception. If you have three disks in a RAIDZ configuration, essentially one disk is used for redundancy. You can concatenate vdevs within a zpool, so you can create a second three-disk RAIDZ and have the two work together. That way each three-disk RAIDZ is fault tolerant within itself. The downside is that now you have to give up two drives for fault tolerance, whereas if you had built the RAIDZ with six drives from the start you would only have to give up one.
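A rough sketch of what that looks like (pool name "tank" and the da* device names are hypothetical placeholders):

```shell
# Original pool: a single 3-disk raidz1 vdev
zpool create tank raidz1 da0 da1 da2

# Later, grow capacity by adding a second 3-disk raidz1 vdev;
# the pool then stripes across both vdevs, each fault tolerant on its own
zpool add tank raidz1 da3 da4 da5

zpool status tank   # should now list two raidz1 vdevs under the pool
```

Note that zpool add is permanent -- you can't remove the vdev again afterwards -- so it's worth double-checking the command (or using the -n dry-run flag) before committing.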
There is a second exception. (I've only done this in VMware to test it.) If you have a RAIDZ zpool and want to increase its capacity, you can swap out the drives one by one with larger-capacity drives. Then, after the last drive has resilvered, (I think) you can export the pool and import it back, and ZFS will see the new capacity on the drives and begin to use it. I read this on a blog I can't locate, and it was a while ago, so there may be additional steps.
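From memory, the procedure is roughly the following (again, "tank" and the device names are placeholders; verify against current docs before trying it on real data):

```shell
# Replace each member disk in turn with a larger one, and wait for the
# resilver to finish completely before touching the next disk
zpool replace tank da0 da4
zpool status tank              # repeat until resilvering shows as done
# ...then the same for the remaining original disks...

# After the last resilver, re-import so ZFS picks up the new disk size
zpool export tank
zpool import tank
zpool list tank                # SIZE should now reflect the larger disks
```

Newer ZFS versions have an autoexpand pool property that is supposed to make the export/import step unnecessary, but I haven't verified that myself.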
Some people have considered using the copies property of ZFS to spread extra copies across a striped zpool. Here is a site that talks about the copies property. ZFS will attempt to put the two copies on two different drives, but it doesn't have to, so the data may or may not actually be fault tolerant.
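Setting it is simple enough; a sketch with hypothetical pool/filesystem names:

```shell
# A plain striped (non-redundant) pool with a filesystem on it
zpool create tank da0 da1
zfs create tank/important

# Keep two copies of every block written from now on
zfs set copies=2 tank/important
zfs get copies tank/important

# Caveats: copies only applies to data written AFTER the property is
# set, and ZFS may still place both copies on the same physical disk
```

That last caveat is the key point: copies protects against bad sectors, not against losing a whole drive, so it's no substitute for a mirror or RAIDZ.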
I'm hoping that FreeNAS, because it is built on a flavor of BSD, will get the latest bits soon. OpenIndiana has the latest versions of ZFS incorporated (zpool version 28 and zfs version 5). Also, I've read that ZFS has been ported to Linux natively (not just via FUSE).
I used to use FreeNAS because it was easy to set up. Then I moved around to various OSes chasing the latest versions of ZFS, mainly because I wanted the dedupe feature to extend the capacity of my storage.
I know that when ZFS gets in-place migration between zpool types and dynamic expansion of RAIDZ, many ZFS users will be happy.
Best Answer
With OpenSolaris, ZFS is usually versions/features/bug-fixes ahead.
Hardware support is getting much better with recent OpenSolaris builds but as long as your hardware is supported that shouldn't really matter.
You cannot add a single disk to a raidz, but you can add another raidz to the pool containing your first raidz. The only drawback is that you need to add multiple disks at the same time.
The point is that ZFS is designed not to lose data, so recovery tools are of little purpose beyond the built-in self-healing ones.