How important is the 1GB RAM per 1TB disk space rule for ZFS

freenas · memory · nas · zfs

I'm planning on building my first NAS box and am currently considering FreeNAS and ZFS for it. I've read up on ZFS and its feature set sounds interesting, although I will probably only use a fraction of it.

Most guides say that the recommended rule of thumb is 1 GB of (ECC) RAM for every TB of disk space in your pool. So my question is: what is the actual (expected) impact of ignoring this rule?

Here is a setup from someone who built a 71 TiB NAS with ZFS and 16 GB of RAM. According to him it runs like a charm. He uses Linux, however (in case that makes a difference).

So apparently you don't actually need 96 or even 64 GB of RAM to run such a large pool. But the rule must be there for a reason. So what happens if you do not have the recommended amount of RAM? Is it just a bit slower, or do you run the risk of losing data, or of only being able to access your data at a snail's pace?


I realize that this also has a lot to do with which features will be used, so here are the parameters I'm considering:

  • It's a home system
  • 16GB ECC RAM (the maximum supported by the setup I have in mind)
  • No deduplication, no ZIL, no L2ARC
  • Probably with compression enabled
  • Will store mostly media files of various sizes
  • Will probably run bit torrent or similar services (frequent smaller reads/writes)
  • 4 disks, probably 5 TB each
  • Actual pool setup will probably be part of another question, but I think no RAIDZ (although I would be interested to know whether it actually makes a difference in this context); probably two pools with two disks each (for 10 TB of net storage), one acting as a backup of the other

Best Answer

The only reason you would need that ratio of RAM to storage space is if you decide to use data deduplication. Nothing says the 1 GB per 1 TB ratio is a requirement otherwise.

According to a wiki:

Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage. Insufficient physical memory or lack of ZFS cache can result in virtual memory thrashing when using deduplication, which can either lower performance or result in complete memory starvation. Solid-state drives (SSDs) can be used to cache deduplication tables, thereby speeding up deduplication performance.

Source
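If you want to see where the "up to 5 GB per TB" end of that range comes from, here is a rough back-of-envelope sketch in Python. It is purely illustrative: the ~320 bytes of RAM per deduplication-table entry and the 64 KiB average block size are commonly cited assumptions, not fixed ZFS constants, and your real numbers depend on your data.

```python
# Back-of-envelope estimate of ZFS deduplication table (DDT) RAM usage.
# Assumptions (illustrative only): ~320 bytes of RAM per DDT entry,
# one entry per unique block, and the average block size given below.

def ddt_ram_gib(pool_size_tib: float, avg_block_kib: float = 64.0,
                bytes_per_entry: int = 320) -> float:
    """Rough RAM (in GiB) needed to hold the whole DDT in memory."""
    blocks = pool_size_tib * 1024**3 / avg_block_kib  # TiB -> KiB, then per-block
    return blocks * bytes_per_entry / 1024**3          # bytes -> GiB

if __name__ == "__main__":
    for size_tib in (10, 20, 71):
        print(f"{size_tib} TiB pool, 64 KiB avg blocks: "
              f"~{ddt_ram_gib(size_tib):.0f} GiB of DDT")
```

Under those assumptions a 10 TiB pool works out to roughly 50 GiB of dedup table, which is exactly why skipping deduplication, as you plan to, makes 16 GB of RAM perfectly workable for your pool size.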
