SSD Optimization – Best Practices for SSD Performance

Tags: optimization, ssd

I know this has been discussed many times, but there are a lot of different opinions across the internet about which optimizations are good for SSDs (and whether to use them at all). The technology has also advanced, and some of the advice may have become obsolete.

Overprovisioning and free space on FS

This still seems to be relevant, but according to user cabirum in a ycombinator discussion:

you don't have to over-provision unpartitioned space AND preserve 20% of partitioned free space. It's one or another, the point is to have enough free space for proper wear leveling.

On the other hand, there is no mention of it in the ArchWiki or in this post by namhuy. What's more, easylinux advises both!
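
For a concrete starting point (the device name and mount point below are placeholders), you can check how much unpartitioned space a disk has, or how full a filesystem is, before picking one of the two approaches:

    # Show partitions plus any unallocated space on the disk
    parted /dev/sda unit GB print free

    # Or check that the mounted filesystem keeps ~20% free
    df -h /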

Noatime, nodiratime and relatime mount options

namhuy advises both, the ArchWiki and easylinux advise only noatime, and user andmarios on ycombinator says:

noatime: this is old, use relatime.

which, according to the man pages, has been the default behaviour since Linux 2.6.30.
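
For illustration, an /etc/fstab entry using noatime could look like this (the UUID and mount point are placeholders); omitting the option gives you the kernel's relatime default:

    # /etc/fstab: noatime suppresses access-time updates entirely
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,noatime  0  1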

Trim

This is probably the greatest mess:

  • easylinux: Run trim from /etc/rc.local. Do not use the discard mount option.
  • ArchWiki: Use fstrim.service and fstrim.timer (see the commands after this list). Warns about discard.
  • namhuy and simoncion on ycombinator: Use the discard option.
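
For reference, enabling the periodic trim that the ArchWiki recommends is a one-liner on systemd distros (fstrim.timer ships with util-linux), and a one-off manual run looks like this:

    # Enable the weekly timer and start it immediately
    systemctl enable --now fstrim.timer

    # Or trim all mounted, trim-capable filesystems by hand
    fstrim -av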

Limit the write actions

The ArchWiki, namhuy and easylinux all advise moving the browser cache to RAM. This is generally disagreed with on ycombinator.
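
If you do want to try it, the usual mechanism is a tmpfs mount over the cache directory; a minimal sketch (the path and size are placeholders, and note that tmpfs contents are lost on reboot):

    # /etc/fstab: keep the browser cache in RAM instead of on the SSD
    tmpfs  /home/user/.cache  tmpfs  noatime,nodev,nosuid,size=1G  0  0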

IO scheduler

It seems that everyone agrees on using Deadline or NOOP instead of the default CFQ, but it is not clear to me when to use Deadline and when to use NOOP (is it file system/SSD vendor dependent?).
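
Checking and switching the scheduler per device is cheap, so you can simply test both (the device name is a placeholder; this change does not persist across reboots):

    # The active scheduler is shown in brackets, e.g. "noop deadline [cfq]"
    cat /sys/block/sda/queue/scheduler

    # Switch to deadline (or noop) for this boot only
    echo deadline > /sys/block/sda/queue/scheduler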

Swap

It was not so long ago that someone told me to disable swap completely (wow :D). The ArchWiki, namhuy and easylinux, on the other hand, say to set vm.swappiness=1.
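
For completeness, setting it is a single sysctl; to keep it across reboots, drop it into a sysctl configuration file (the file name is arbitrary):

    # Apply immediately
    sysctl vm.swappiness=1

    # Persist across reboots
    echo 'vm.swappiness=1' > /etc/sysctl.d/99-swappiness.conf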

I am a bit confused by all these options, and so far I have used only a few of them. Did I fail to mention something important? Do some Linux distros do any of the above automatically?



Best Answer

Re overprovisioning - all you need to ensure is that the SSD itself has a sufficient number of blocks that it knows are unused. It's unimportant whether it knows that because a) they're in unpartitioned space and have never been written to by the OS, b) they've had zeros written to them and the SSD firmware implements heuristics to detect that and consider them unallocated, or c) they've been the target of a DISCARD ('trim') operation. Any (and only) one of these is highly advisable.
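
As a quick sanity check, lsblk can tell you whether a drive even advertises DISCARD support (the device name is a placeholder; non-zero DISC-GRAN and DISC-MAX values mean it does):

    # Inspect the drive's discard/TRIM capabilities
    lsblk --discard /dev/sda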

Re noatime: I find that I personally don't care about files' last-access times, and no software I use seems to care either. So I mount everything with 'noatime'. There are vague references on the Internet about unnamed programs that malfunction if 'noatime' is used, but I've never seen such a program.

Re trim/discard: You should run fstrim periodically. It doesn't matter how it is invoked, but it does matter how frequently. Running it at each boot, e.g. using rc.local, would probably be excessive, unless you reboot very infrequently, or you use and then free up disk space very frequently, or both. Do not mount with 'discard': it causes the kernel to perform TRIM operations close to the time blocks are freed, which is probably when you are most likely to notice the increased latency. You are less likely to notice or care about a cron job running at (let's say) 3am. I imagine that once a month would be more than sufficient for an average desktop workload, or once a week for a write-heavy one. I don't know of any perfect way to tell when an fstrim is advisable, because the details of block allocation are usually hidden by the drive firmware. If you observe significant slowing of the drive's performance, an fstrim would be a good thing to try. If you don't notice a slow-down, then you probably don't need to do anything.
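
A monthly 3am cron job along those lines could look like this (the schedule and log path are only examples):

    # /etc/crontab: trim all mounted filesystems at 03:00 on the 1st of each month
    0 3 1 * * root /sbin/fstrim -av >> /var/log/fstrim.log 2>&1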

Re I/O scheduler - benchmark the workloads you care about. There is no substitute for empirical evidence.
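
For example, fio can compare schedulers under a synthetic random-read load (the parameters and file path here are illustrative, not a recommended benchmark):

    # 4K random reads against a 1GB test file, 30 seconds, queue depth 32
    fio --name=randread --filename=/var/tmp/fio.test --size=1G \
        --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
        --runtime=30 --time_based --direct=1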

Re swap - RAM is quite cheap nowadays, so I and my employer buy large amounts of it - at least 16GB per machine I build for home use, and at least 256GB in the servers at work. For all workloads on all machines I encounter both at home and at work, everything fits comfortably in RAM, with plenty of room to spare for cache. Thus I disable swap at home and at work. Additionally, using swap would cause a performance decrease which would be unacceptable both to me and to our users, and which would therefore cause me or my employer to urgently go and buy even more RAM. So I never want to use swap - it attempts to hide a lack-of-RAM problem that I'd rather solve. I can't comment on your position. I imagine it could be similar.
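
Disabling swap itself is straightforward; remember to also remove or comment out any swap line in /etc/fstab so it stays off after a reboot:

    # Turn off all swap devices now
    swapoff -a

    # Verify nothing is using swap
    swapon --show
    free -h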

Lastly, I disable, or even uninstall, many services which are installed and enabled by default on popular Linux distributions. This saves some virtual memory, but perhaps more importantly, it 'hardens' the machine against attack. If this is done religiously, there should be little to nothing worthless in RAM that could be swapped out to disk without sacrificing performance.
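
On a systemd distribution, auditing and trimming services looks roughly like this (the unit named here is only an example):

    # List everything enabled at boot
    systemctl list-unit-files --state=enabled

    # Disable and stop a service you don't need
    systemctl disable --now bluetooth.service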
