BTRFS Balance Completed but Data Still in Single Mode

btrfs, filesystems, raid

I have three drives (8TB, 4TB, 3TB). Originally, I created a btrfs partition on the 8TB drive and copied all my data there. I then added the 4TB and 3TB drives with btrfs device add and ran a balance conversion:

btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
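
(For reference, the earlier device additions were along these lines, and btrfs balance status reports conversion progress; the exact device paths are my reconstruction, not copied from my shell history:)

btrfs device add /dev/mapper/4TB /dev/mapper/3TB /mnt   # add both new drives to the filesystem
btrfs balance status /mnt                               # watch the conversion progress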

Now the balance is done, but some data is still shown with the "single" and "DUP" profiles on the original drive. Here's the output of btrfs fi usage /mnt/btrfs:

Overall:
    Device size:          13.37TiB
    Device allocated:          4.62TiB
    Device unallocated:        8.75TiB
    Device missing:          0.00B
    Used:              4.60TiB
    Free (estimated):          4.98TiB  (min: 4.38TiB)
    Data ratio:               1.76
    Metadata ratio:           2.00
    Global reserve:      512.00MiB  (used: 0.00B)

Data,single: Size:645.00GiB, Used:645.00GiB
   /dev/mapper/8TB   645.00GiB

Data,RAID1: Size:1.98TiB, Used:1.98TiB
   /dev/mapper/3TB   551.00GiB
   /dev/mapper/4TB     1.44TiB
   /dev/mapper/8TB     1.98TiB

Metadata,RAID1: Size:8.00GiB, Used:3.84GiB
   /dev/mapper/4TB     8.00GiB
   /dev/mapper/8TB     8.00GiB

Metadata,DUP: Size:7.00GiB, Used:6.41GiB
   /dev/mapper/8TB    14.00GiB

System,DUP: Size:8.00MiB, Used:400.00KiB
   /dev/mapper/8TB    16.00MiB

Unallocated:
   /dev/mapper/3TB     2.19TiB
   /dev/mapper/4TB     2.19TiB
   /dev/mapper/Seagate_Archive_8TB-btrfs       4.37TiB
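
(Not shown above, but btrfs device usage gives a complementary per-device breakdown of the same chunk allocations:)

btrfs device usage /mnt/btrfs   # per-device view of which profiles occupy each drive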

Questions:

  1. Is there any data that is not stored on more than one disk, i.e. data that would be lost if one disk failed? If so, how can I coerce this lingering "single" data into RAID1?
  2. Assuming the "single" and "DUP" chunks are unnecessary now that everything has been converted to RAID1, is there any way to clear them up?

Edit: here is some system info:

uname -a 
Linux 4.8.0-0.bpo.2-amd64 #1 SMP Debian 4.8.11-1~bpo8+1 (2016-12-14) x86_64 GNU/Linux
btrfs --version
btrfs-progs v4.9

I should also mention that this computer was restarted during the balance, and when it came back up I wasn't able to get the btrfs volume to mount at all (it would just hang). I tried a number of different mount options (skip_balance, recovery), and the only thing that worked was mounting it read-only (with -o ro). After much frustration, I booted an Antergos live USB, which had the newest kernel and btrfs-progs, and it mounted without a problem. I paused the balance operation, which had automatically resumed, and then booted back into Debian, where it also mounted without issue, so I resumed the balance.
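
For anyone hitting the same hang, the sequence looked roughly like this (the device path and exact option order are a reconstruction):

mount -o skip_balance /dev/mapper/8TB /mnt/btrfs   # hung
mount -o recovery /dev/mapper/8TB /mnt/btrfs       # hung
mount -o ro /dev/mapper/8TB /mnt/btrfs             # worked, but read-only
# from the Antergos live USB a normal mount worked and auto-resumed the balance:
btrfs balance pause /mnt/btrfs
# back in Debian, after a normal mount:
btrfs balance resume /mnt/btrfs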

Best Answer

With the help of a user on the btrfs IRC channel, I was able to answer question (1). It seems unrelated to the reboot and the failed mount attempts (I'm still not sure what those were about). Instead, the 645GiB of data stored as "single" appears to be data that was added to the volume after the RAID1 conversion had been started, so it was written with the old profile. It is therefore good practice to check the output of btrfs fi usage before assuming all your data is stored as RAID1 after a conversion. Furthermore, the "soft" filter rebalances only chunks that do not already match the target profile, so, for example, I ran:

btrfs balance start --bg -mconvert=raid1,soft /mnt/btrfs
btrfs balance start --bg -dconvert=raid1,soft /mnt/btrfs

(following a suggestion by the same user on the btrfs IRC channel to balance the metadata first and then the data), and this is now in the process of converting the remaining data to RAID1.
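Since --bg detaches the balance and returns immediately, this is roughly how to watch it and then confirm nothing is left in "single" (paths as above):

btrfs balance status /mnt/btrfs   # progress of the running background balance
btrfs fi usage /mnt/btrfs         # the Data,single and Metadata,DUP lines should shrink away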

To answer question (2): it is possible to end up with some "single" chunks on a RAID1 filesystem, but they should have 0 usage. If that happens, you can clean them up by running

btrfs balance start -dusage=0 -musage=0 /mnt/btrfs

(see the btrfs FAQ)
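
A quick sanity check afterwards (my addition, not from the FAQ) is that the profile summary no longer lists the empty chunks:

btrfs fi df /mnt/btrfs   # data and metadata should each show only a RAID1 line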
