fdisk is the wrong tool for disks larger than 2TB. Use parted or gdisk instead.
It appears that /dev/sdc1 and /dev/sdd1 are 2TB partitions, so that's what limits your array size. The other disks have GPT, so I assume they are already 3TB, but you should check.
Basically you have to stop the array, enlarge each partition to 3TB (without changing the starting offset), then start it again and follow it up with a grow:
mdadm --grow /dev/md0 --size=max
If you can't stop the array, you'll have to fail each 2TB partition individually, repartition and re-add it. This might go faster if you add a write-intent bitmap first.
mdadm --grow /dev/md0 --bitmap=internal
Then for each disk individually,
mdadm /dev/md0 --fail /dev/disk1 # check mdstat for [UUUU] first
mdadm /dev/md0 --remove /dev/disk1
parted /dev/disk -- mklabel gpt mkpart primary 1MiB -1MiB # whole disk here, not the partition
mdadm /dev/md0 --re-add /dev/disk1
mdadm --wait /dev/md0 # must wait for sync
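The per-disk steps above can be sketched as a loop. This is only a sketch: device names are examples, and each iteration waits for the resync to finish before the next disk is touched. Defined as a function so nothing runs until invoked as root:

```shell
# Fail, repartition and re-add each 2TB member, one disk at a time.
regrow_members() {
  for disk in /dev/sdc /dev/sdd; do
    grep -q '\[UUUU\]' /proc/mdstat || return 1   # abort unless array is healthy
    mdadm /dev/md0 --fail "${disk}1"
    mdadm /dev/md0 --remove "${disk}1"
    parted -s "$disk" -- mklabel gpt mkpart primary 1MiB -1MiB
    mdadm /dev/md0 --re-add "${disk}1"
    mdadm --wait /dev/md0                         # block until resync completes
  done
}
```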
Once that's done you can remove the bitmap again (keeping it may harm performance).
mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --size=max
Finally run your resize2fs or whatever applies to your filesystem.
Answer to question 1 - How to start after one drive failing
I could restore the RAID 1 by doing the following steps:
I took a spare drive with arbitrary old contents (say C) and plugged it into the same SATA port where the defective drive B had been before.
After that I started the computer, and in the boot menu I pressed e to edit the boot entry before booting, following wiki.ubuntuusers.de, as follows:
a. I scrolled to the relevant start entry and located the following rows:
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' 01234567-89ab-cdef-0123-456789abcdef
else
search --no-floppy --fs-uuid --set=root 01234567-89ab-cdef-0123-456789abcdef
fi
echo 'Loading Linux 3.14-2-amd64...'
linux /boot/vmlinuz-3.14-2-amd64 root=UUID=01234567-89ab-cdef-0123-456789abcdef ro quiet
b. Then I edited row 1 and changed the drive number to the working hard disk (in my case it remained hd0; if multiple drives are still plugged in, it might be hd1):
set root='hd0,msdos1'
c. I deactivated rows 2 to 6 by commenting them out with a leading # character:
#if [ x$feature_platform_search_hint = xy ]; then
# search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' 01234567-89ab-cdef-0123-456789abcdef
#else
# search --no-floppy --fs-uuid --set=root 01234567-89ab-cdef-0123-456789abcdef
#fi
d. After that I edited row 8 and inserted a root flag so the degraded RAID would be allowed to mount (rootflags=degraded):
linux /boot/vmlinuz-3.14-2-amd64 root=UUID=01234567-89ab-cdef-0123-456789abcdef ro rootflags=degraded quiet
e. By pressing F10 I booted the just-edited entry. The system started.
After the OS had fully booted, I had to add the new drive C to my RAID 1. I did it as described on btrfs.wiki.kernel.org:
a. I mounted the still working drive A:
mount -o degraded /dev/sda1 /mnt
b. I added the new drive C:
btrfs device add /dev/sdb1 /mnt
c. After that I removed the old devices (in my case drive B):
btrfs device delete missing /mnt
Finally I checked that everything went well with the commands btrfs filesystem show, blkid and btrfs fi df /mnt, as mentioned above in the question. Both drives have the same UUID but different UUID_SUBs and are reported as being in RAID 1 mode.
Congratulations, it worked!
Personal note
I treat the described behaviour of the failing initramfs as expected until someone proves me wrong. Maybe it's a way of telling me that I should act carefully now because my disk crashed horribly - but that's just a guess.
Explanation regarding the need for manual degradation
In the meantime I found an interesting discussion on this topic on the Linux kernel developers' mailing list. Because of its relevance I want to cite a passage written by Duncan, which I think is really important to know, especially for new users:
You should be able to mount a two-device btrfs raid1 filesystem with only a single device with the degraded mount option, tho I believe current kernels refuse a read-write mount in that case, so you'll have read-only access until you btrfs device add a second device, so it can do normal raid1 mode once again. [...] Meanwhile, since the degraded mount-opt is in fact a no-op if btrfs can actually find all components of the filesystem, some people choose to simply add degraded to their standard mount options (edit the grub config to add it at every boot), so they don't have to worry about it. However, that is NOT RECOMMENDED, as the accepted wisdom is that the failure to mount undegraded serves as a warning to the sysadmin that something VERY WRONG is happening, and that they need to fix it. They can then add degraded temporarily if they wish, in ordered to get the filesystem to mount and thus be able to boot, but adding the option routinely at every boot bypasses this important warning, and it's all too likely that an admin will thus ignore the problem (or not know about it at all) until too late.
(Source: https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg31265.html)
Additional note: Although I have no swap partition(s) on the computer in my example, I would like to encourage people who want to use them to read the very interesting mail linked above, because it explains the use of swap with BTRFS in RAID mode.
Answer to question 2 - How to make other drives bootable
As far as I know so far, using grub-install /dev/sdb (and even an additional update-grub) does not seem to be enough. I will explain why I think so.
When I tried the reverse way by offline-unplugging drive A and booting only with drive B, the following happened. The GRUB bootloader appeared and I performed the same steps as in point 2 of question 1. Right after confirming with F10, the boot process immediately stopped with a blank screen (I am talking of an active monitor, black background, no cursor). So obviously something is wrong with the bootloader on drive B. (Remember: I've got a RAID 1 and can't boot from my second drive after the first drive "fails".)
I helped myself by doing a hard reset, plugged in drive A again (so A and B were both present again) and booted into the OS. Because my drives A and B are absolutely identical, I copied the whole MBR (containing the bootloader) from the working drive A to B in raw mode with dd if=/dev/sda of=/dev/sdb bs=512 count=1. I shut down the computer, unplugged drive A as before, and guess what happened? After performing the degradation steps again, I could finally boot into the OS from drive B alone.
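For reference, a sketch of that raw copy with a note on what the byte ranges mean. The first 446 bytes of an MBR hold the boot code; bytes 446-511 hold the partition table and boot signature. The bs=446 variant shown in the comment is an assumption on my part, not something I tested; everything is wrapped in a function because dd here is destructive and must only be run as root on the right devices:

```shell
# Copy the first sector (boot code + partition table + signature) from A to B.
copy_mbr() {
  dd if=/dev/sda of=/dev/sdb bs=512 count=1
  # To copy only the boot code, leaving sdb's partition table and disk
  # signature untouched, one could presumably use:
  #   dd if=/dev/sda of=/dev/sdb bs=446 count=1
  cmp -n 512 /dev/sda /dev/sdb && echo "first sector identical"
}
```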
I have to summarize that I still don't know whether this has to do with my partition table (MSDOS, not GPT), with the command grub-install in combination with BTRFS, or with something else. I also can't judge the potential drawbacks of my raw copy compared to a grub-install. (Maybe someone could clarify this a bit in a comment underneath this answer.)
Please note that I am still researching this and will update this answer again. I want to clear up more, but I need some more time to work through the raw MBR sector layout of both drives and figure out whether the problem comes from the bootloader or even the disk identities.
Answer to question 3 - How to handle the mount option ssd
It depends on whether the mainboard passes the drive information through correctly. As stated on btrfs.wiki.kernel.org, BTRFS itself relies on values from the OS. Because other modules in the OS may also depend on these values, it is generally better to check /sys/block/sdX/queue/rotational for the appropriate value (0: SSD, 1: HDD). If the value is correct, you can leave the ssd option as it is.
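A quick way to check all devices at once (assumes a Linux sysfs; needs no root):

```shell
# Print the rotational flag for every block device the kernel knows about.
# 0 means the kernel sees a non-rotational device (SSD), 1 means rotational (HDD).
for f in /sys/block/*/queue/rotational; do
  [ -e "$f" ] || continue            # skip if sysfs exposes no block devices
  printf '%s: %s\n' "$f" "$(cat "$f")"
done
```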
Best Answer
Just adding a device to a BTRFS pool doesn't automatically move any data to it, you have to write new data to the pool and the balancer will decide which device it puts the data on.
The next chunk is very likely to be created on a newly added device though since it's 0% allocated (the balancer tries to fill up all devices equally).
If you want to make already-written data go through the balancer again, you need to use the btrfs balance command. All data that is put on the SSD will be faster than data on the HDDs, but there is no speed-aware balancing in place, so which data is fast and which isn't is pretty much random and cannot be controlled manually either.
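A minimal sketch of such a rebalance (the mount point /mnt is an example; wrapped in a function so nothing runs until invoked as root):

```shell
# Rewrite existing chunks through the allocator so data spreads over all devices.
rebalance_pool() {
  btrfs balance start /mnt     # full balance; can take a long time on big pools
  btrfs balance status /mnt    # run from another shell to watch progress
}
```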