You don't really need to defrag Btrfs filesystems manually.
Yes, Btrfs is copy-on-write (COW), which implies it fragments files more than Ext4 does, but several aspects of the design address this, including the ability to defragment the filesystem easily while it is online. This excerpt provides more detail (emphasis mine):
Automatic defragmentation
COW (copy-on-write) filesystems have many advantages, but they also have some disadvantages, for example fragmentation. Btrfs lays out the data sequentially when files are written to the disk for the first time, but a COW design implies that any subsequent modification to the file must not be written on top of the old data, but be placed in a free block, which will cause fragmentation (RPM databases are a common case of this problem). Additionally, it suffers the fragmentation problems common to all filesystems.
Btrfs already offers alternatives to fight this problem: First, it supports online defragmentation using the command btrfs filesystem defragment. Second, it has a mount option, -o nodatacow, that disables COW for data. Now btrfs adds a third option, the -o autodefrag mount option. This mechanism detects small random writes into files and queues them up for an automatic defrag process, so the filesystem will defragment itself while it's used. It isn't suited to virtualization or big database workloads yet, but works well for smaller files such as rpm, SQLite or bdb databases.
So, as long as you don't plan to run IO-intensive software like a database under significant load, you should be fine: just mount your filesystems with the autodefrag option.
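For example, a hypothetical /etc/fstab entry that enables autodefrag on every boot (the UUID is a placeholder for your filesystem's UUID, reported by blkid):

```
# /etc/fstab — mount the root btrfs filesystem with autodefrag
UUID=<your-fs-uuid>  /  btrfs  defaults,autodefrag  0  0
```

The last field (fsck pass) is 0 because btrfs does not use fsck at boot the way Ext filesystems do.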
To check the fragmentation of files, you can use the filefrag utility:
$ find /path -type f -exec filefrag {} + >frag.list
# Now you can use your favourite tools to sort the data
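filefrag prints one line per file, in the form "/path/file: 12 extents found", so one quick way to rank the results (a sketch, assuming frag.list was produced by the find command above) is:

```shell
# Sort by extent count (the 2nd colon-separated field), worst first,
# and show the 20 most fragmented files.
sort -t: -k2 -rn frag.list | head -n 20
```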
On systemd systems, /var/log/journal/ will probably be the most fragmented. You can also look at ~/.mozilla and other browsers' databases.
To defragment, use:
$ sudo btrfs fi defrag -r /path
The snapshot exists in the real root of the filesystem, which is not what you have mounted at /. You have the @ subvolume mounted at /, so no file with that name exists there. You have to mount the real root volume somewhere and reference the snapshot through that path.
Or you can use apt-btrfs-snapshot delete instead.
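Mounting the real root can be sketched like this (the device /dev/sda2 is a placeholder; substitute the device that holds your btrfs filesystem):

```
# subvolid=5 always refers to the top-level subvolume,
# regardless of which subvolume is the mount default.
sudo mkdir -p /mnt/btrfs-root
sudo mount -o subvolid=5 /dev/sda2 /mnt/btrfs-root

# The @ subvolume and any snapshots are now visible here:
ls /mnt/btrfs-root
```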
The answer to your either/or question is "both". Yes, you'll have to mount each subvolume. Each subvolume behaves like a normal filesystem, so each one can be mounted at its own mount point, such as /etc.
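Mounting subvolumes is typically done in /etc/fstab with the subvol= option (a sketch; the UUID and the subvolume names @ and @home are placeholders matching the Ubuntu convention):

```
UUID=<fs-uuid>  /      btrfs  subvol=@,defaults      0  0
UUID=<fs-uuid>  /home  btrfs  subvol=@home,defaults  0  0
```

Both entries reference the same filesystem UUID; the subvol= option selects which subvolume is exposed at each mount point.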
There are a few advantages to that idea. For instance, you could make your MySQL database directory into a subvolume, which would enable you to take snapshots for use with backups. You could also put the filesystem on a RAID1 profile, so that if one disk failed, your database would still be intact. Another is using a subvolume for /etc so that you could always reverse any kind of system-wide configuration change. Using a subvolume for /home/username would potentially give each user a time machine, and probably a much more flexible one than what Apple provides in their system.
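Taking a snapshot of such a subvolume is a one-liner (a sketch; the paths are hypothetical and the command needs root):

```
# Read-only snapshot of the MySQL subvolume, timestamped for backups.
sudo btrfs subvolume snapshot -r /var/lib/mysql /snapshots/mysql-$(date +%F)
```

The -r flag makes the snapshot read-only, which is what you want for a backup source, since the snapshot then cannot drift while you copy it elsewhere.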
And of course, one benefit of having a subvolume for homes and another for the root is the ability to reverse an upgrade. For instance, you upgrade from 12.04 to 12.10 very early, discover that it's a tad too buggy after the first month, so you just un-upgrade your operating system. I haven't tried that myself, but it should work just as well as keeping your home and reinstalling the previous system, except it would take about a second instead of an hour. :)