Main Volume
I know you said you don't want to build another PC fileserver, but most of the ready-to-go solutions don't have any safeguards against silent data corruption.
If you're looking for data integrity and reliability, you might want to consider running an OpenSolaris fileserver with a raidz2 or raidz3 configuration (2 or 3 parity drives, respectively) on ZFS.
With larger drives, the rebuild time increases when a drive fails, which also increases the chance of a second drive failing during the rebuild. But the main advantage of ZFS is that it protects you against silent data corruption, since the filesystem itself is checksummed.
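As a rough sketch, creating a double-parity pool and verifying its checksums might look like the following (the six device names are placeholders, not from any real setup):

```
# Create a pool with one raidz2 vdev (any 2 of the 6 disks can fail)
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

# Read every block in the pool and verify it against its checksum
zpool scrub tank

# Report any checksum errors found (and repaired from parity)
zpool status -v tank
```

Running a scrub periodically is what actually catches the silent corruption before you need the data.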
You can also run ZFS on other operating systems, but OpenSolaris always has the most up-to-date version, since it takes a while to port new features to the other platforms. If setting up an OpenSolaris box seems like more work than you want, FreeNAS seems to be the next best thing in terms of ZFS support.
On the Linux side, ZFS is not supported in the kernel (only as a user-level driver), but there is also a new filesystem under development, called btrfs. Unfortunately, there is no stable release of btrfs, as of March 2010.
Backup
For your offsite backups, it might be more cost-effective to pay for a service like CrashPlan, Carbonite, or Mozy. It's very, very easy to configure any of these to automatically back up your files. Of the three, CrashPlan has the best backup and recovery features (and even allows you to back up to other remote computers for free), while Mozy's recovery methods are either expensive or very inconvenient (if you want to download a Mozy backup, you have to wait for your job to be queued up and bundled into a zip file). I haven't personally had any experience with Carbonite.
Note that you shouldn't depend solely on an offsite backup: if you back up to the cloud or some other offsite computer, you should also keep a local backup.
The Drobo reviews I've seen noted poor write performance, but if you're just using it as a nightly backup drive, it might be sufficient.
Backup rotation
If you want to rotate backups between a local and off-site location, you need at least 3 backups to guarantee one is always local and one is always safe at the off-site location. The third is either in-transit or at one of the other two locations at any given point in time.
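As an illustration, a small script can decide which of the three sets is due to go off-site each week (the "set0".."set2" labels are hypothetical names for your three backups):

```shell
#!/bin/sh
# Rotate 3 backup sets off-site on a weekly cycle.
week=$(date +%V)          # ISO week number, 01..53
week=${week#0}            # strip a leading zero for arithmetic
set_index=$(( week % 3 )) # cycles 0, 1, 2, 0, 1, 2, ...
echo "Off-site this week: set${set_index}"
```

The same modular-arithmetic idea works for any rotation period; three sets is simply the minimum that keeps one local and one off-site while a third is in transit.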
ROBOCOPY vs. CrashPlan
ROBOCOPY will cause more wear and tear on your hardware, since it has to read every file during every backup. It's not clear to me whether it copies only the changed files or all of them. If ROBOCOPY fails for some reason, the failure may not be apparent unless you have set something up to reliably report its backup status.
CrashPlan monitors your hard drive for changed files and copies only those. Since it actively monitors changes to disk, it does not need to read every file in your backup source. CrashPlan also automatically e-mails you to report how long it has been since the last backup and how much data was transferred during it.
That said, keep in mind that CrashPlan doesn't have to replace your ROBOCOPY backup scheme. You can use CrashPlan to supplement whichever other backup scheme you happen to choose.
I have tested this with ZFS, and write performance is about half what it should be, because ZFS distributes reads and writes over all vdevs (therefore dividing I/O between several places on the same disk). Thus, the write speed is limited by the speed of the disk with the most partitions. Read speed seems to be equal to the disk bandwidth. Note that a pair of ZFS partitions on two disks has roughly double the read speed of either single disk, because it can read from the disks in parallel.
Using MD LINEAR arrays or LVM to create the two halves results in twice the write performance compared to the above ZFS proposal, but has the disadvantage that LVM and MD have no idea where the data is stored. In the event of a disk failure or upgrade, one side of the array must be entirely destroyed and resynced/resilvered, followed by the other side (i.e., the resync/resilver has to copy twice the size of the array).
It seems, then, that the optimal solution is to create a single ZFS mirror vdev across two LVM or MD LINEAR devices that combine the disks into equal-sized "halves". This gives roughly twice the read bandwidth of any one disk, while write bandwidth equals the individual disk bandwidth.
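A minimal sketch of that layout, assuming four disks /dev/sda through /dev/sdd (placeholder names; in practice the two halves should end up roughly equal in size):

```
# Concatenate two disks into each "half" with MD LINEAR (no striping)
mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=linear --raid-devices=2 /dev/sdc /dev/sdd

# Mirror the two halves with ZFS, so ZFS handles checksums and redundancy
zpool create tank mirror /dev/md0 /dev/md1
```

ZFS then sees only two devices, so a resilver after a failure copies just the used data on one half rather than rebuilding both sides.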
Using BTRFS raid1 instead of ZFS also works, but has half the read bandwidth, because ZFS distributes its reads to double the bandwidth while BTRFS apparently does not (according to my tests). BTRFS has the advantage that partitions can be shrunk, which is impossible with ZFS (so if you have lots of empty space after a failure, with BTRFS it's possible to rebuild a smaller redundant array by shrinking the filesystem and then rearranging the disks).
This is tedious to do by hand but easy with some good scripts.
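For example, the BTRFS shrink-and-rebuild sequence might look roughly like this (mount point, device name, and size are placeholders):

```
# Shrink the filesystem to free up space on the remaining devices
btrfs filesystem resize -500g /mnt/array

# Migrate data off the failed/retired disk and drop it from the array
btrfs device delete /dev/sdd /mnt/array

# Restore raid1 redundancy across the remaining devices
btrfs balance start /mnt/array
```

The exact steps depend on how much free space survives the failure; the point is simply that BTRFS lets you go smaller, which ZFS does not.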
Best Answer
Resurrecting this old question with something that finally works in the upcoming Windows 10 and Windows Server 2016 OSes.
Microsoft has added an
Optimize-StoragePool
PowerShell cmdlet in Windows 10 and Windows Server 2016 that rebalances the storage spaces in an entire pool. It's as easy as opening an Administrative PowerShell console and running
Optimize-StoragePool -FriendlyName "TheNameOfYourStoragePool"
I blogged about it here.
Microsoft announced the feature just a few days ago as part of the new Storage Spaces Direct, but it works just fine with normal Storage Spaces as well.