While Marco's answer explains all the details correctly, I just want to focus on your last question/summary:
Is it a good idea to set up SSD + HDD in same pool, or is there a better way to optimize my pair of drives for both speed and capacity?
ZFS is a file system designed for large arrays with many smaller disks. Although it is quite flexible, I think it is suboptimal for your current situation and goal, for the following reasons:
- ZFS does not reshuffle already written data. What you are looking for is a hybrid drive: Apple's Fusion Drive, for example, fuses multiple disks together and automatically selects the storage location for every block based on access history (data is moved when the system is idle, or on rewrite). With ZFS you get none of that, neither automatically nor manually; your data stays where it was written initially (or is already marked for deletion).
- With just a single disk, you give up redundancy and self-healing. You can still detect errors, but you do not use the full capabilities of the system.
- Putting both disks in the same pool means an even higher chance of data loss (this is RAID0, after all) or corruption; additionally, your performance will be subpar because of the different drive sizes and speeds.
- HDD+SLOG+L2ARC is a bit better, but you need a very good SSD (two different ones, as Marco said, would be better; a single NVMe SSD is a good but expensive compromise), and most of the space on it is wasted: 2 to 4 GB are enough for the ZIL, and a large L2ARC only helps if your RAM is full, yet itself needs additional RAM to index. This leads to a sort of catch-22: if you want to use L2ARC, you need more RAM, but with more RAM you can often just rely on the ARC itself, because it is enough. Remember, only blocks are cached, so you do not need as much as you would assume by looking at plain file sizes.
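If you do go the SLOG + L2ARC route anyway, adding the devices is simple; note that the pool name and partition names below are made up for illustration:

```shell
# Hypothetical layout: ada1 is the SSD, pre-partitioned into a small
# log partition (~4 GB is plenty) and the remainder for cache.
zpool add tank log ada1p1     # SLOG: separate intent log for sync writes
zpool add tank cache ada1p2   # L2ARC: second-level read cache
```

Both devices can be detached again later with `zpool remove`, so this setup is easy to undo if it does not pay off.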
Now, what are the alternatives?
- You could split into two pools: one for the system, one for data. This way you get no automatic rebalancing and no redundancy, but a clean setup that can be extended easily and has no RAID0 problems.
- Buy a second large HDD, make a mirror, and use the SSD as you outlined: this removes the problem of differently sized and differently fast disks, gives you redundancy, and keeps the SSD flexible.
- Buy n SSDs and run RAIDZ1/2/3. Smaller SSDs are pretty cheap nowadays and do not suffer from slow rebuild times, which makes RAIDZ1 interesting again.
- Use another file system or volume manager with hybrid capabilities, with ZFS on top if needed. This is not considered optimal, but neither is a pool of two single-disk vdevs... at least you get exactly what you want, plus some of ZFS's nice features (snapshots etc.) on top, though I wouldn't count on stellar performance.
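The first alternative (two separate pools) could look roughly like this; all pool and device names are examples, with the SSD holding the system and the HDD holding bulk data:

```shell
# Sketch of the two-pool split (device names are assumptions):
zpool create -o ashift=12 syspool /dev/ada0    # SSD: OS and applications
zpool create -o ashift=12 datapool /dev/ada1   # HDD: bulk data
zfs set compression=lz4 datapool               # cheap win, especially on the HDD
```

Each pool can later be grown or mirrored independently, which is exactly the flexibility the single mixed pool would cost you.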
First: remember to take the new drive offline and be sure that it's not mounted or in use in any way.
Copy the partition table from the old disk (ada0) to the new disk (ada3):
% doas gpart backup ada0 | doas gpart restore -F ada3
Now ada3 has the same three partitions as ada0:
% doas gpart show ada3
=> 40 3907029088 ada3 GPT (1.8T)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 4194304 2 freebsd-swap (2.0G)
4196352 3902832640 3 freebsd-zfs (1.8T)
3907028992 136 - free - (68K)
Remove old ZFS metadata (notice partition p3):
% doas dd if=/dev/zero of=/dev/ada3p3
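Note that dd without a count= argument zeroes the entire 1.8 TB partition, which takes a long time. ZFS keeps four label copies, two at the start and two at the end of the device, so clearing just the labels achieves the same effect much faster:

```shell
# Alternative: clear only the ZFS labels instead of zeroing the
# whole partition.
doas zpool labelclear -f /dev/ada3p3
```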
Replace the drive in the pool (again partition p3; the long number is the GUID of the old device, as shown in zpool status):
% doas zpool replace -f zroot 15120424524672854601 /dev/ada3p3
Make sure to wait until resilver is done before rebooting.
If you boot from pool 'zroot', you may need to update
boot code on newly attached disk '/dev/ada3p3'.
Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
Run the mentioned command to update boot information on the new disk:
% doas gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
partcode written to ada3p1
bootcode written to ada3
The UUIDs on the two disks now differ:
% gpart list ada0 | grep uuid | sort
rawuuid: 7f842536-bcd0-11e8-b271-00259014958c
rawuuid: 7fbe27a9-bcd0-11e8-b271-00259014958c
rawuuid: 7fe24f3e-bcd0-11e8-b271-00259014958c
% gpart list ada3 | grep uuid | sort
rawuuid: 9c629875-c369-11e8-a2b0-00259014958c
rawuuid: 9c63d063-c369-11e8-a2b0-00259014958c
rawuuid: 9c66f76e-c369-11e8-a2b0-00259014958c
% gpart list ada0 | grep efimedia | sort
efimedia: HD(1,GPT,7f842536-bcd0-11e8-b271-00259014958c,0x28,0x400)
efimedia: HD(2,GPT,7fbe27a9-bcd0-11e8-b271-00259014958c,0x800,0x400000)
efimedia: HD(3,GPT,7fe24f3e-bcd0-11e8-b271-00259014958c,0x400800,0xe8a08000)
% gpart list ada3 | grep efimedia | sort
efimedia: HD(1,GPT,9c629875-c369-11e8-a2b0-00259014958c,0x28,0x400)
efimedia: HD(2,GPT,9c63d063-c369-11e8-a2b0-00259014958c,0x800,0x400000)
efimedia: HD(3,GPT,9c66f76e-c369-11e8-a2b0-00259014958c,0x400800,0xe8a08000)
Drive is now resilvering:
% zpool status zroot
pool: zroot
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Sat Sep 29 01:01:24 2018
64.7G scanned out of 76.8G at 162M/s, 0h1m to go
15.7G resilvered, 84.22% done
config:
	NAME                        STATE     READ WRITE CKSUM
	zroot                       DEGRADED     0     0     0
	  raidz2-0                  DEGRADED     0     0     0
	    ada0p3                  ONLINE       0     0     0
	    ada1p3                  ONLINE       0     0     0
	    ada2p3                  ONLINE       0     0     0
	    replacing-3             OFFLINE      0     0     0
	      15120424524672854601  OFFLINE      0     0     0  was /dev/ada3p3/old
	      ada3p3                ONLINE       0     0     0
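As advised above, wait for the resilver before rebooting. A small helper makes that easy to script; this is a sketch of my own (not from the walkthrough), which simply greps the status text for the progress line shown above:

```shell
# Sketch: succeed while the 'zpool status' text on stdin reports an
# active resilver ("resilver in progress" line).
is_resilvering() {
    grep -q 'resilver in progress'
}

# On a live system you could poll like this (hypothetical usage):
#   while zpool status zroot | is_resilvering; do sleep 60; done
```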
After resilver:
% zpool status zroot
pool: zroot
state: ONLINE
scan: resilvered 18.6G in 0h7m with 0 errors on Sat Sep 29 01:09:22 2018
config:
	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    ada0p3  ONLINE       0     0     0
	    ada1p3  ONLINE       0     0     0
	    ada2p3  ONLINE       0     0     0
	    ada3p3  ONLINE       0     0     0
errors: No known data errors
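Not part of the original walkthrough, but after a resilver it does not hurt to run a scrub once, which re-reads and verifies the checksum of every block in the pool:

```shell
# Optional sanity pass now that the pool is ONLINE again:
doas zpool scrub zroot
doas zpool status zroot   # check scrub progress and results
```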
Best Answer
I don't think the installer can do what you want yet (although it's getting better over time), so you could try booting the installation image and running a root shell from the initial menu. You can then use gpart, zpool and zfs to configure your disks by hand and install the system from the archives on the image.
There are numerous guides around the Internet, but I find that Matthew Seaman's is the best for my needs. It describes a mirrored root-on-ZFS setup that supports boot environments (I use a slightly modified version of the sysutils/beadm port to manage my boot environments). It doesn't cover configuring log and cache devices, but it should give you enough information to get the OS installed as you want it, and you can add log and cache devices after the fact.
There are also some good resources linked from the RootOnZFS page on the FreeBSD wiki.
Whichever guide you decide to follow, personal experience suggests allowing yourself time to run through it a couple of times, to get a feel for the process and make sure you understand your configuration, before you commit the box to a production environment.
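For orientation only, a very rough sketch of the manual gpart/zpool/zfs steps such guides walk through; disk names, sizes and the dataset layout are all examples, so follow a complete guide for the real procedure:

```shell
# From the installer's root shell; ada0/ada1 form the mirror (examples).
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k -i 1 ada0
gpart add -t freebsd-swap -s 2g -i 2 ada0
gpart add -t freebsd-zfs -i 3 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
# ...repeat the gpart steps for ada1, then create the mirrored pool:
zpool create -o altroot=/mnt zroot mirror ada0p3 ada1p3
zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
zpool set bootfs=zroot/ROOT/default zroot
# ...then extract base.txz/kernel.txz into /mnt and configure loader.conf
```

The boot-environment layout (zroot/ROOT/default) is what tools like beadm expect, which is why the guides structure the datasets this way.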