I created a single-device btrfs filesystem. According to the btrfs wiki article on using multiple devices, I should be able to convert that to RAID1 using:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /path
I started that on Linux 3.16, and it failed with a kernel panic. After upgrading to Linux 4.0 and mounting the filesystem, the balance resumed and finished, but it only converted the data, not the metadata or system (according to btrfs fi df). I grabbed the latest btrfs-progs from git (just to make sure the problem wasn't an old version) and ran:
Watt:/home/anthony/src/btrfs-progs# ./btrfs balance start -v -mconvert=raid1 /path
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x100): converting, target=16, soft is off
SYSTEM (flags 0x100): converting, target=16, soft is off
Done, had to relocate 6 out of 1411 chunks
But that didn't actually mirror it, apparently. Right now, I have:
Watt:/home/anthony/src/btrfs-progs# ./btrfs fi usage /path
Overall:
    Device size:                  7.28TiB
    Device allocated:             2.75TiB
    Device unallocated:           4.53TiB
    Device missing:                 0.00B
    Used:                         2.74TiB
    Free (estimated):             2.26TiB  (min: 2.26TiB)
    Data ratio:                      2.00
    Metadata ratio:                  2.00
    Global reserve:             512.00MiB  (used: 0.00B)

Data,RAID1: Size:1.37TiB, Used:1.37TiB
   /dev/mapper/luks-562e4e2f-2894-415a-aaf1-7c94a11c33b9   1.37TiB
   /dev/mapper/luks-ec97c1ad-21d8-41bb-9072-e5a74f68e416   1.37TiB

Metadata,DUP: Size:2.50GiB, Used:1.58GiB
   /dev/mapper/luks-562e4e2f-2894-415a-aaf1-7c94a11c33b9   5.00GiB

System,DUP: Size:32.00MiB, Used:224.00KiB
   /dev/mapper/luks-562e4e2f-2894-415a-aaf1-7c94a11c33b9  64.00MiB

Unallocated:
   /dev/mapper/luks-562e4e2f-2894-415a-aaf1-7c94a11c33b9   3.17TiB
   /dev/mapper/luks-ec97c1ad-21d8-41bb-9072-e5a74f68e416   1.36TiB
I tried the full balance again (with both -dconvert=raid1 and -mconvert=raid1), and that didn't do it either.
NOTE: The larger disk (56…b9) is the one I added.
How can I get the metadata and system mirrored?
Best Answer
This is a regression in kernel 4.0 that causes balance's conversion filters to have no effect; it looks like all conversions are affected, not just single->raid1 or raid1->raid5. See a recent mailing list thread, where there's currently no official fix. If you're comfortable patching your kernel, there's a simple patch you can apply as a temporary workaround.
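Once you're running a kernel where the filter works again (a patched 4.0 or a later fixed release), the conversion can be re-run and verified roughly like this. This is a sketch, not a guaranteed recipe; /path stands for the mount point from the question, and whether you need -sconvert (with -f, which btrfs requires when changing the system profile) in addition to -mconvert depends on your btrfs-progs version:

```shell
# Re-run the metadata (and system) conversion on the mounted filesystem.
# On most btrfs-progs versions, -mconvert also applies to system chunks,
# as the METADATA/SYSTEM lines in the question's verbose output suggest.
btrfs balance start -v -mconvert=raid1 /path

# If system chunks are still DUP afterwards, convert them explicitly;
# changing the system profile requires the force flag:
btrfs balance start -f -sconvert=raid1 /path

# Verify: Metadata and System should now report RAID1, with a copy of
# each chunk on both devices.
btrfs filesystem df /path
btrfs filesystem usage /path
```

If the filters still print "Done, had to relocate N out of M chunks" but btrfs fi df keeps showing DUP, you're almost certainly still on an affected kernel.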