I'm using ZFS on FreeBSD to store several TB of data.
About 25% of the raw data is unique enough that compression helps but dedup would be wasted on it.
The other 75% contains a lot of dedup-able data, and I've had ratios of 2x – 8x with this dataset in the past. So my NAS was specced from the start to be able to handle compressed dedup if needed: 96GB 2400 ECC (more can be added if stats show dedup table pressure), 3.5GHz quad core Xeon, mirrored disks, NVMe L2ARC, and Intel P3700 NVMe ZIL.
The raw pool capacity is currently 22 TB before formatting (3 x 6TB mirror vdevs + 1 x 4TB mirror vdev), and intuitively I'd guess I'm physically using about 7 – 14 TB of it right now. The pool contains both Samba file-share datasets and fixed-size ESXi iSCSI zvols (mostly empty, at least one sparse). But I don't understand the difference between the two outputs below, so I'm not sure how much free space I actually have, and therefore whether I need to add more disks to stay below my target of 65% usage:
# zpool list -v
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
tank 19.9T 14.0T 5.93T - 53% 70% 2.30x ONLINE /mnt
mirror 5.44T 4.18T 1.26T - 59% 76%
gptid/6c62bc1a-0b7b-11e7-86ae-000743144400 - - - - - -
gptid/94cad523-0b45-11e7-86ae-000743144400 - - - - - -
mirror 5.41T 4.38T 1.03T - 62% 80%
ada0p2 - - - - - -
gptid/e619dab7-03f1-11e7-8f93-000743144400 - - - - - -
mirror 5.44T 4.12T 1.32T - 56% 75%
gptid/c68f80ae-01da-11e7-b762-000743144400 - - - - - -
da0 - - - - - -
da1 - - - - - -
mirror 3.62T 1.31T 2.32T - 29% 36%
da3 - - - - - -
da4 - - - - - -
# zdb -bDDD tank
DDT-sha256-zap-duplicate: 39468847 entries, size 588 on disk, 190 in core
[duplicate bucket data cut as it isn't relevant and repeats in the totals below]
DDT-sha256-zap-unique: 60941882 entries, size 526 on disk, 170 in core
bucket allocated referenced
______ ______________________________ ______________________________
refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE
------ ------ ----- ----- ----- ------ ----- ----- -----
1 58.1M 1.21T 964G 1005G 58.1M 1.21T 964G 1005G
2 25.0M 1.10T 784G 807G 58.5M 2.69T 1.87T 1.92T
4 10.4M 393G 274G 282G 48.4M 1.85T 1.29T 1.34T
8 1.70M 51.1G 37.7G 39.7G 16.5M 487G 353G 372G
16 456K 9.85G 5.73G 6.44G 10.1M 212G 121G 138G
32 67.0K 1.73G 998M 1.07G 2.77M 77.1G 44.6G 48.6G
64 23.7K 455M 327M 350M 1.98M 36.1G 25.8G 27.7G
128 3.47K 75.7M 48.0M 54.5M 557K 12.1G 7.68G 8.70G
256 610 46.9M 12.3M 13.6M 216K 16.9G 4.14G 4.61G
512 211 14.8M 2.46M 3.01M 145K 10.2G 1.72G 2.10G
1K 57 1.10M 38K 228K 77.7K 1.45G 49.3M 311M
2K 42 456K 22K 168K 118K 1.17G 61.3M 474M
4K 18 108K 9K 72K 104K 574M 52.1M 417M
8K 11 128K 5.50K 44K 117K 1.29G 58.3M 467M
16K 7 152K 4K 28K 155K 2.60G 85.6M 619M
128K 1 16K 512 4K 137K 2.14G 68.4M 548M
256K 1 4K 512 4K 302K 1.18G 151M 1.18G
Total 95.8M 2.76T 2.02T 2.09T 198M 6.59T 4.65T 4.83T
dedup = 2.31, compress = 1.42, copies = 1.04, dedup * compress / copies = 3.15
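As a sanity check, the ratio line above can be reproduced from the Total row of the DDT output (a quick sketch using the allocated/referenced LSIZE, PSIZE and DSIZE figures, rounded to two decimals the way zdb prints them):

```python
# Reproduce zdb's summary ratios from the DDT "Total" row.
alloc_lsize, alloc_psize, alloc_dsize = 2.76, 2.02, 2.09  # allocated (T)
ref_lsize, ref_psize, ref_dsize = 6.59, 4.65, 4.83        # referenced (T)

dedup = ref_dsize / alloc_dsize      # how much data each unique block stands in for
compress = ref_lsize / ref_psize     # logical vs compressed size
copies = ref_dsize / ref_psize       # ditto-block / copies overhead
overall = dedup * compress / copies  # algebraically, ref_lsize / alloc_dsize

print(f"dedup = {dedup:.2f}, compress = {compress:.2f}, "
      f"copies = {copies:.2f}, overall = {overall:.2f}")
```

This matches zdb's line (2.31, 1.42, 1.04, 3.15), so the 3.15x figure is just referenced logical size divided by allocated on-disk size.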
The first output seems to say that the formatted pool capacity is 19.9T (sounds about right), of which around 14T is in use and 5.93T is free. If that's correct, I'm already over my 65% target and should add more disks.
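If the zpool numbers are the ones to trust, the arithmetic works out as in this quick sketch (the 22 TB raw figure is my own estimate from the disk sizes; note zpool's "T" is TiB, while disk sizes are decimal TB):

```python
# Sanity-check the zpool figures against the raw disk capacity
# and the 65% usage target. All numbers come from the outputs above.

raw_tb = 3 * 6 + 1 * 4             # mirrored vdevs: 3 x 6TB + 1 x 4TB usable
raw_tib = raw_tb * 1e12 / 2**40    # 22 decimal TB -> ~20.0 TiB
print(f"usable capacity: {raw_tib:.1f} TiB (zpool says 19.9T)")

size, alloc = 19.9, 14.0           # SIZE and ALLOC from `zpool list`
print(f"pool usage: {alloc / size:.0%} (zpool says 70%)")

# Total capacity needed to bring the current 14.0 TiB under 65%.
print(f"need >= {alloc / 0.65:.1f} TiB total to stay below 65%")
```

So the 19.9T SIZE is consistent with 22 TB of raw disk, and at 70% usage the pool is indeed above the 65% target.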
The second output seems to say that the actual allocated physical space is only around 2.0T (the Total row's allocated PSIZE/DSIZE), standing in for about 6.59T of referenced data thanks to the combined 3.15x compression + dedup saving.
The two allocation figures (14.0T vs ~2.0T) are wildly different, and I don't understand how to reconcile them.
A hint would be appreciated, please!
Best Answer
The zpool output is correct. The other command you might be interested in is zfs list.