Btrfs RAID1 – Adding a Smaller Drive to Btrfs RAID1

Tags: btrfs, raid, raid1, software-raid

I have a RAID1 Btrfs filesystem with two 2 TB drives, and I had a spare 750 GB HDD lying around, so I thought I would add it to the RAID to get some extra storage.

Well, I added it to the volume, and the amount of available free space increased as predicted, by half the size of the newly added HDD. I ran btrfs balance /hdd, and now the output of btrfs filesystem show is:

Label: none  uuid: e100a7bd-1c03-4424-9ab2-4aa9fa679b8c
    Total devices 3 FS bytes used 496.82GiB
    devid    1 size 1.82TiB used 500.03GiB path /dev/sda1
    devid    2 size 1.82TiB used 500.03GiB path /dev/sdd1
    devid    3 size 698.64GiB used 0.00B path /dev/sdc

The relevant line from df -h:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       2,2T  498G   1,4T  27% /hdd

Is it normal that the new drive is empty even after the rebalance?
Do I need to do something else? Am I doing something wrong?

I'm using Netrunner Rolling if that's relevant.

UPDATE:
So one of my 2 TB drives died, so I added a 250 GB drive and a 1 TB drive to the filesystem and did a balance. Here's the current situation:

Label: 'dades'  uuid: e100a7bd-1c03-4424-9ab2-4aa9fa679b8c
    Total devices 4 FS bytes used 589.10GiB
    devid    1 size 1.82TiB used 592.03GiB path /dev/sdb1
    devid    3 size 698.64GiB used 180.00GiB path /dev/sdd
    devid    4 size 232.89GiB used 0.00B path /dev/sda
    devid    5 size 931.51GiB used 412.03GiB path /dev/sde

df -h

Filesystem      Size  Used Avail Use% Mounted on    
/dev/sdb1       1,9T  590G   755G  44% /hdd

Best Answer

This question is three years old but appears never to have been answered. I stumbled on it while solving a similar problem of my own, and a proper answer would have been useful to me at the time.

In your case, this appears to be by design. The "problem" you experienced (both before and after the drive failure and replacement) is that the other, existing disks in the array have more free space than the newly added one(s), so Btrfs writes to them first, even though the configuration is working exactly as intended. Once the array fills up to the point where the new device has the most free space, it will receive one of each pair of redundant blocks (with the device that then has the next-largest free space receiving the second copy).
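That allocation policy is easy to see in a toy model. The sketch below (my own simplification, not the actual kernel allocator) writes each 1 GiB data chunk to the two devices that currently have the most unallocated space, which is essentially what the raid1 profile does, and reproduces the asker's situation: the new 699 GiB disk stays at 0 until the two big disks drain down to its level.

```python
def allocate(size_gib, used_gib, data_gib):
    """Allocate `data_gib` one-GiB raid1 chunks onto devices.

    size_gib/used_gib: per-device totals in GiB, keyed by device name.
    Each chunk goes to the two devices with the most free space.
    Returns the resulting per-device used space.
    """
    free = {d: size_gib[d] - used_gib[d] for d in size_gib}
    used = dict(used_gib)
    for _ in range(data_gib):
        a, b = sorted(free, key=free.get, reverse=True)[:2]
        if free[b] < 1:          # the second copy no longer fits anywhere
            break
        for d in (a, b):
            free[d] -= 1
            used[d] += 1
    return used

# The asker's layout: two 2 TB disks holding ~500 GiB each, plus the
# empty 699 GiB newcomer (devid 3).
sizes = {"sda1": 1862, "sdd1": 1862, "sdc": 699}
start = {"sda1": 500, "sdd1": 500, "sdc": 0}

# Writing another 600 GiB still leaves the new disk untouched: the two
# big disks each have 1362 GiB free, far more than sdc's 699 GiB.
print(allocate(sizes, start, 600)["sdc"])   # 0

# Only after the big disks drain to sdc's level (~663 GiB more data)
# does sdc start receiving one copy of each chunk.
print(allocate(sizes, start, 700)["sdc"])   # greater than 0
```

The same model explains the update's four-disk layout: the 232 GiB disk (devid 4) is never among the two devices with the most free space, so it gets nothing.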

You can force a full rebalance of the entire array, forcing the new device to receive blocks, via

sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mountpoint

Don't worry about "converting" your raid1 to raid1. In my experience at least (this doesn't seem to be officially documented anywhere), it simply redoes everything, including a full raid1 rebalance across all disks, roughly proportional to their sizes.
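Either way, it's worth watching what the balance actually does. Assuming btrfs-progs is installed and /mountpoint is your mount, the per-device breakdown and the balance progress can be checked with:

```shell
# Per-device allocation table -- run before and after the balance
sudo btrfs filesystem usage -T /mountpoint

# Progress of a running balance, from another terminal
sudo btrfs balance status /mountpoint
```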

And although you've surely figured this out by now, for people finding this question in a search: in this particular case I would recommend doing nothing. The new device should start being used once things fill up. Unfortunately, "should" and "will" don't always align with Btrfs; if that doesn't happen, try the command noted above. If that doesn't work, try this answer.
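As a footnote on the asker's observation that available space grew "by half the size of the newly added HDD": that is exactly what the usual capacity rule for two-copy raid1 over mixed-size devices predicts. A quick sketch (my own helper, not a btrfs tool):

```python
def raid1_usable(sizes_gib):
    """Usable space of a two-copy raid1 array with mixed device sizes.

    Every block is stored twice, so capacity is half the raw total --
    unless one device is larger than all the others combined, in which
    case the smaller devices become the limit.
    """
    total = sum(sizes_gib)
    return min(total // 2, total - max(sizes_gib))

print(raid1_usable([1862, 1862]))        # 1862 GiB with just the two 2 TB disks
print(raid1_usable([1862, 1862, 699]))   # 2211 GiB: ~349 GiB (about 699/2) more
```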
