Btrfs Disk Usage – Ubuntu Thinks Btrfs Disk is Full but It’s Not


$ cat /etc/fstab
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
UUID=a168d1ac-4e13-4643-976d-6e47ea1732b1 /boot        ext2  defaults                                                                   0 1
/dev/mapper/sda4_crypt                    /            btrfs defaults,autodefrag,compress=lzo,inode_cache,space_cache,subvol=@          0 2
/dev/mapper/sda4_crypt                    /tmp         btrfs defaults,autodefrag,compress=lzo,inode_cache,space_cache,subvol=@tmp       0 2
/dev/mapper/sda4_crypt                    /run         btrfs defaults,autodefrag,compress=lzo,inode_cache,space_cache,subvol=@run       0 2
/dev/mapper/sda4_crypt                    /var/crash   btrfs defaults,autodefrag,compress=lzo,inode_cache,space_cache,subvol=@var-crash 0 2
/dev/mapper/sda4_crypt                    /var/tmp     btrfs defaults,autodefrag,compress=lzo,inode_cache,space_cache,subvol=@var-tmp   0 2
/dev/mapper/sda4_crypt                    /var/log     btrfs defaults,autodefrag,compress=lzo,inode_cache,space_cache,subvol=@var-log   0 2
/dev/mapper/sda4_crypt                    /var/spool   btrfs defaults,autodefrag,compress=lzo,inode_cache,space_cache,subvol=@var-spool 0 2
/dev/mapper/sda5_crypt                    /home        btrfs defaults,autodefrag,compress=lzo,inode_cache,space_cache,subvol=@home      0 3
/dev/mapper/750er                         /media/750er ext4  defaults                                                                   0 4
/dev/mapper/cswap                         none         swap  defaults                                                                   0 5
➜  ~  df -h         
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/sda4_crypt   38G   12G   13M 100% /
none                    4,0K     0  4,0K   0% /sys/fs/cgroup
udev                    2,0G  4,0K  2,0G   1% /dev
tmpfs                   396M  1,3M  394M   1% /run
none                    5,0M     0  5,0M   0% /run/lock
none                    2,0G  208K  2,0G   1% /run/shm
none                    100M   36K  100M   1% /run/user
/dev/mapper/sda4_crypt   38G   12G   13M 100% /tmp
/dev/sda2               231M   44M  175M  21% /boot
/dev/mapper/sda4_crypt   38G   12G   13M 100% /var/crash
/dev/mapper/sda4_crypt   38G   12G   13M 100% /var/tmp
/dev/mapper/sda4_crypt   38G   12G   13M 100% /var/log
/dev/mapper/sda4_crypt   38G   12G   13M 100% /var/spool
/dev/mapper/sda5_crypt  3,7T  2,4T  1,2T  67% /home
/dev/mapper/750er       688G  276G  377G  43% /media/750er
/dev/mapper/2tb         1,8T  1,7T  141G  93% /media/2tb
➜  ~  sudo btrfs fi df /
Data, single: total=9.47GiB, used=9.46GiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=13.88GiB, used=1.13GiB
Metadata, single: total=8.00MiB, used=0.00
➜  ~  

It's a 40GB partition with quite a few snapshots on it, but the filesystem is compressed, so I think the 9.46GiB used out of 40GB is accurate. Still, Ubuntu fails because it says there is no disk space left: I got APT errors, couldn't install programs, and my MySQL server failed to start because of it.

I know not to rely on df with Btrfs; I only included its output for completeness.

I thought Ubuntu might internally be using df, which is known to report incorrectly on Btrfs, and failing for that reason. That would make sense for APT's free-space checks. But writes to the disk actually fail:

$ sudo time dd if=/dev/zero of=large bs=2G count=1
dd: error writing ‘large’: No space left on device
0+1 records in
0+0 records out
11747328 bytes (12 MB) copied, 1,29706 s, 9,1 MB/s
Command exited with non-zero status 1
0.00user 1.40system 0:01.44elapsed 97%CPU (0avgtext+0avgdata 2098028maxresident)k
160inputs+23104outputs (0major+383008minor)pagefaults 0swaps

Best Answer

Btrfs is different from traditional filesystems. It is not just a layer that translates filenames into offsets on a block device; it is more a layer that combines a traditional filesystem with LVM and RAID. And, like LVM, it has the concept of allocating space on the underlying device without actually using it for files.

A traditional filesystem is divided into files and free space. It is easy to calculate how much space is used or free:

|--------files--------|                                                |
|------------------------drive partition-------------------------------|

Btrfs combines LVM, RAID and a filesystem. The drive is divided into subvolumes, each dynamically sized and optionally replicated:

|--files--|    |--files--|         |files|         |                   |
|----@raid1----|------@raid1-------|-----@home-----|metadata|          |
|------------------------drive partition-------------------------------|

The diagram shows the partition being divided into two subvolumes and metadata. One of the subvolumes is duplicated (RAID1), so there are two copies of every file on the device. Now we not only have the concept of how much space is free at the filesystem layer, but also how much space is free at the block layer (drive partition) below it. Space is also taken up by metadata.

When considering free space in Btrfs, we have to clarify which free space we are talking about - the block layer, or the file layer? At the block layer, data is allocated in 1GB chunks, so the values are quite coarse, and might not bear any relation to the amount of space that the user can actually use. At the file layer, it is impossible to report the amount of free space because the amount of space depends on how it is used. In the above example, a file stored on the replicated subvolume @raid1 will take up twice as much space as the same file stored on the @home subvolume. Snapshots only store copies of files that have been subsequently modified. There is no longer a 1-1 mapping between a file as the user sees it, and a file as stored on the drive.

You can check the free space at the block layer with btrfs filesystem show / and the free space at the subvolume layer with btrfs filesystem df /.
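
For comparison, this is roughly what btrfs filesystem show would report for a filesystem in this state. The output below is illustrative, reconstructed from the btrfs filesystem df figures further down rather than taken from the machine in question: 9.47GiB of data chunks plus 27.76GiB of raw DUP metadata plus the small system chunks comes to about 37.3GiB allocated out of 38GiB.

# btrfs filesystem show /
Label: none  uuid: <filesystem uuid>
        Total devices 1 FS bytes used 10.59GiB
        devid    1 size 38.00GiB used 37.26GiB path /dev/mapper/sda4_crypt

Here size is the raw capacity of the device and used is the space allocated to chunks. The filesystem can be full at this layer even while btrfs filesystem df shows mostly empty metadata chunks.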


# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/sda4_crypt   38G   12G   13M 100% /

For this mounted subvolume, df reports a drive of total size 38G, with 12G used, and 13M free. 100% of the available space has been used. Remember that the total size 38G is divided between different subvolumes and metadata - it is not exclusive to this subvolume.

# btrfs filesystem df /
Data, single: total=9.47GiB, used=9.46GiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=13.88GiB, used=1.13GiB
Metadata, single: total=8.00MiB, used=0.00

Each line shows the total space and the used space for a different data type and replication type. The values shown are data stored rather than raw bytes on the drive, so if you're using RAID-1 or RAID-10 subvolumes, the amount of raw storage used is double the values you can see here.

The first column shows the type of item being stored (Data, System, Metadata). The second column shows whether a single copy of each item is stored (single), or whether two copies of each item are stored (DUP). Two copies are used for sensitive data, so there is a backup if one copy is corrupted. For DUP lines, the used value has to be doubled to get the amount of space used on the actual drive (because btrfs fs df reports data stored, not drive space used). The third and fourth columns show the total and used space. There is no free column, since the amount of "free space" is dependent on how it is used.
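
If your btrfs-progs is new enough to ship it (an assumption; the subcommand was added in later releases than the tools used above), btrfs filesystem usage combines both layers in one report: overall device size, how much of it is allocated to chunks, and the per-profile totals. The numbers below are illustrative, matching the example in this answer:

# btrfs filesystem usage /
Overall:
    Device size:          38.00GiB
    Device allocated:     37.26GiB
    Device unallocated:  758.00MiB
    ...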

The thing that stands out about this drive is that you have 9.47GiB of space allocated for ordinary files of which you have used 9.46GiB - this is why you are getting No space left on device errors. You have 13.88GiB of space allocated for duplicated metadata, of which you have used 1.13GiB. Since this metadata is DUP duplicated, it means that 27.76GiB of space has been allocated on the actual drive, of which you have used 2.26GiB. Hence 25.5GiB of the drive is not being used, but at the same time is not available for files to be stored in. This is the "Btrfs huge metadata allocated" problem. To try and correct this, run btrfs balance start -m /. The -m parameter tells btrfs to only re-balance metadata.
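
A minimal sequence for this case might look like the following. The balance is I/O-intensive and can take a while, and the Done line here is illustrative; the exact chunk counts will differ:

# btrfs balance start -m /
Done, had to relocate 14 out of 39 chunks
# btrfs filesystem df /

Afterwards the Metadata total should have shrunk to something much closer to its used value, and the freed chunks return to the unallocated pool, where they can be turned into new Data chunks.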

A similar problem is running out of metadata space. If the output had shown that the metadata were actually full (used value close to total), then the solution would be to try and free up almost empty (<5% used) data blocks using the command btrfs balance start -dusage=5 /. These free blocks could then be reused to store metadata.
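
For example (again with an illustrative Done line):

# btrfs balance start -dusage=5 /
Done, had to relocate 3 out of 25 chunks

If it relocates 0 chunks, the usual advice is to retry with a progressively larger threshold, e.g. -dusage=10 or -dusage=25.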

For more details, see the Btrfs FAQ.
