This page demonstrates that defragmentation is beneficial for SSDs. I suppose this type of defragmentation would have to be TRIM-aware, so as to preserve the lifetime of the disk.
How can I do that on Linux?
EDIT: Comments pointed out that this article's content is questionable.
Best Answer
In general you can just ignore fragmentation altogether, all the more so on SSDs, which do not suffer from seek times the way HDDs do. Defragmenting an SSD does nothing except waste write cycles.
There may be extreme cases where fragmentation has a noticeable effect, such as a sparse file written to in random order (as some BitTorrent clients do), or a disk that runs out of free space, where the last file written ends up split into thousands of fragments because no contiguous free space was left to hold it.
But those are the exceptions; it doesn't usually happen. Most filesystems are very good at avoiding fragmentation, and the Linux kernel is good at mitigating its effects. Besides, once more than one process reads and writes files concurrently, the disk has to seek everywhere at once anyway.
There aren't many defragmentation tools for Linux. XFS has xfs_fsr, which works well, so if you absolutely want defragmentation, XFS is a good choice. You can check a file's fragmentation with filefrag or hdparm. If that doesn't report hundreds or thousands of extents (fragments), there is nothing to worry about.
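As a concrete illustration (the temporary file is just a stand-in; filefrag ships with e2fsprogs, and it may be missing or unsupported on some filesystems, so the sketch checks for that):

```shell
# Create a small sample file and report its extent count with filefrag.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=4 2>/dev/null
if command -v filefrag >/dev/null 2>&1; then
    # Typical output: "/tmp/tmp.XXXXXX: 1 extent found"
    report=$(filefrag "$tmp" 2>&1) || report="filefrag cannot map this filesystem: $report"
else
    report="filefrag not installed (part of e2fsprogs)"
fi
echo "$report"
rm -f "$tmp"
# On XFS, the dedicated defragmenter would be run as root, e.g.:
#   xfs_fsr -v /mount/point
```

A file reported with one or a handful of extents is fine; only counts in the hundreds or thousands are worth acting on.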
A generic defragmentation method is to make a copy of the file and then replace the original with it, such as:
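A minimal sketch of that copy-and-replace approach (the file here is a throwaway created for the demo; on a real file you would compare fragmentation of both copies with filefrag before swapping):

```shell
# Rewrite a file in one pass so the filesystem can allocate it contiguously,
# then atomically replace the original.
f=$(mktemp)
printf 'example payload\n' > "$f"     # stand-in for the fragmented file
cp -p "$f" "$f.defrag"                # fresh copy, preserving mode and timestamps
# (check here that "$f.defrag" has fewer extents than "$f" before swapping)
mv "$f.defrag" "$f"                   # mv within one filesystem is an atomic rename
content=$(cat "$f")
echo "$content"
rm -f "$f"
```

Note that mv is only an atomic rename when source and destination are on the same filesystem, which is exactly the case here since the copy sits next to the original.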
Your filesystem should have a good amount of free space; otherwise there is a high probability that the new file will be just as fragmented as the old one. (Check that the new file is actually less fragmented than the old one before replacing it.)
But as I said, there is usually no need for this unless a file has somehow ended up badly fragmented.