You don't really need to defrag Btrfs filesystems manually.
Yes, Btrfs is COW (copy-on-write), which would imply it fragments files much more than Ext, but this is addressed in several aspects of the design, including the ability to easily defrag the filesystem while it is online. This excerpt provides more detail (emphasis mine):
Automatic defragmentation
COW (copy-on-write) filesystems have many advantages, but they also have some disadvantages, for example fragmentation. Btrfs lays out the data sequentially when files are written to the disk for the first time, but a COW design implies that any subsequent modification to the file must not be written on top of the old data, but be placed in a free block, which will cause fragmentation (RPM databases are a common case of this problem). Additionally, it suffers the fragmentation problems common to all filesystems.
Btrfs already offers alternatives to fight this problem: First, it supports online defragmentation using the command btrfs filesystem defragment. Second, it has a mount option, -o nodatacow, that disables COW for data. Now btrfs adds a third option, the -o autodefrag mount option. This mechanism detects small random writes into files and queues them up for an automatic defrag process, so the filesystem will defragment itself while it's used. It isn't suited to virtualization or big database workloads yet, but works well for smaller files such as rpm, SQLite or bdb databases.
So, unless you plan to run IO-intensive software like a database under significant load, you should be fine as long as you mount your filesystems with the autodefrag option.
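If you do want to defragment manually, or enable autodefrag on an already-mounted filesystem, the commands look roughly like this (the target paths and the fstab entry are illustrative):

```shell
# Recursively defragment a directory while the filesystem is online;
# -r recurses into directories, -v lists files as they are processed.
sudo btrfs filesystem defragment -rv /var/log/journal

# Enable automatic defragmentation on an existing mount:
sudo mount -o remount,autodefrag /

# Or make it permanent via /etc/fstab (illustrative entry):
# UUID=<fs-uuid>  /  btrfs  defaults,autodefrag  0  0
```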
To check the fragmentation of files, you can use the filefrag utility:
$ find /path -type f -exec filefrag {} + >frag.list
# Now you can use your favourite tools to sort the data
On systemd systems, /var/log/journal/ will probably be the most fragmented location. You can also look at ~/.mozilla and other browsers' databases.
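Assuming filefrag's usual "file: N extents found" output format, you can then rank the results, for example:

```shell
# Sort by extent count (the number after the colon), highest first,
# and show the ten most fragmented files.
sort -t: -k2 -rn frag.list | head -n 10
```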
I don't know apt-btrfs-snapshot, but from a quick read of the code, it simply uses the btrfs snapshot feature before apt's actions.
btrfs uses B-trees to hold its data, and duplication is kept to a minimum: snapshots share blocks through copy-on-write rather than hardlinks. In other words, a snapshot only consumes extra space as the data it covers is deleted or modified, e.g. outside /home.
EDIT:
After reading the code, I found that apt-btrfs-snapshot has some problems. For example:
it makes big assumptions: your Btrfs layout must use specific subvolume names (your root subvolume must be named "@");
if your /home and /var/lib are on the same subvolume as your root /, they will also be snapshotted.
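For reference, the core operation such a tool performs is just a subvolume snapshot; a minimal sketch, assuming the top-level volume is mounted at /mnt/btrfs and the root subvolume is named "@" (both paths are assumptions):

```shell
# Create a read-only snapshot of the root subvolume before an apt run.
sudo btrfs subvolume snapshot -r /mnt/btrfs/@ "/mnt/btrfs/@apt-snapshot-$(date +%F)"
```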
The filefrag command works with Btrfs as well as with other filesystems.
$ filefrag ubuntu-8.04.1-desktop-i386.iso
ubuntu-8.04.1-desktop-i386.iso: 7 extents found
It only shows the number of extents for individual files, though. You may want to write a shell script to sample a number of files and compute a summary statistic that approximates how fragmented the filesystem is as a whole.
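A minimal sketch of such a script, assuming filefrag's "file: N extents found" output format (the default path argument is illustrative):

```shell
#!/bin/sh
# Sample every regular file under the given path and report the file
# count, the average extent count, and the most fragmented file.
find "${1:-/path}" -type f -exec filefrag {} + 2>/dev/null |
awk -F': +' '{
    n = $2 + 0                      # leading number of "N extents found"
    total += n; count++
    if (n > max) { max = n; worst = $1 }
}
END {
    if (count)
        printf "files: %d  avg extents: %.1f  max: %d (%s)\n",
               count, total / count, max, worst
}'
```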