Yes, grub2 is fully RAID (and LVM) aware. In fact, you do not need a separate /boot partition at all; you can just put everything on the raid5.
Ideally you would not install with a /boot partition in the first place, but removing it after the fact simply means copying all of its files to the root partition and reinstalling grub, like this:
umount /boot                   # unmount the old /boot partition
mount /dev/[bootpart] /mnt     # mount it somewhere else temporarily
cp -ax /mnt/* /boot            # copy its contents into the /boot directory on the root filesystem
grub-install /dev/sda          # reinstall grub so it finds the kernel on the root filesystem
Of course you then need to remove the /boot line from /etc/fstab, and you will still have the old partition lying around, just unused.
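For illustration, the disabled /boot entry in /etc/fstab might look like this afterwards (UUID and filesystem type are just placeholders):
# /boot merged into the root filesystem; old entry kept only as a comment
#UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /boot  ext2  defaults  0  2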
Note that you can also grub-install to all of the drives in the raid5 so that you can boot from any of them. The Ubuntu grub-pc package will prompt you (run dpkg-reconfigure grub-pc to get it to ask again) to check off all of the drives you want it installed on, and will install it for you.
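For example, on a three-disk array you could also do it by hand like this (the device names are just placeholders for the RAID members):
for disk in /dev/sda /dev/sdb /dev/sdc; do
    grub-install "$disk"        # write the bootloader to each member's MBR
done
update-grub                     # regenerate the grub configuration once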
Answer to question 1 - How to boot after one drive fails
I could restore the RAID 1 by doing the following steps:
I took a spare drive (say C), regardless of how it was formatted, and plugged it into the same SATA port where the defective drive B had been.
After that I started the computer and, in the GRUB boot menu, pressed e
to edit the boot entry before booting, following wiki.ubuntuusers.de, in the following way:
a. I scrolled to the relevant start entry and located the following rows:
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' 01234567-89ab-cdef-0123-456789abcdef
else
search --no-floppy --fs-uuid --set=root 01234567-89ab-cdef-0123-456789abcdef
fi
echo 'Loading Linux 3.14-2-amd64...'
linux /boot/vmlinuz-3.14-2-amd64 root=UUID=01234567-89ab-cdef-0123-456789abcdef ro quiet
b. Then I edited row 1 and changed the drive number to that of the working hard disk (in my case it stays hd0; if multiple drives are still plugged in it might be hd1):
set root='hd0,msdos1'
c. I deactivated rows 2 through 6 by commenting them out with a leading # character:
#if [ x$feature_platform_search_hint = xy ]; then
# search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' 01234567-89ab-cdef-0123-456789abcdef
#else
# search --no-floppy --fs-uuid --set=root 01234567-89ab-cdef-0123-456789abcdef
#fi
d. After that I edited row 8 and inserted a mount flag that allows the degraded RAID to be used as root (rootflags=degraded):
linux /boot/vmlinuz-3.14-2-amd64 root=UUID=01234567-89ab-cdef-0123-456789abcdef ro rootflags=degraded quiet
e. I pressed F10 to boot the just-edited entry. The system started.
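Putting steps a to d together, the edited entry looked roughly like this before booting:
set root='hd0,msdos1'
#if [ x$feature_platform_search_hint = xy ]; then
#  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' 01234567-89ab-cdef-0123-456789abcdef
#else
#  search --no-floppy --fs-uuid --set=root 01234567-89ab-cdef-0123-456789abcdef
#fi
echo 'Loading Linux 3.14-2-amd64...'
linux /boot/vmlinuz-3.14-2-amd64 root=UUID=01234567-89ab-cdef-0123-456789abcdef ro rootflags=degraded quiet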
After the OS had booted up fully, I had to add the new drive C to my RAID 1. I did it as described on btrfs.wiki.kernel.org:
a. I mounted the still working drive A:
mount -o degraded /dev/sda1 /mnt
b. I added the new drive C:
btrfs device add /dev/sdb1 /mnt
c. After that I removed the old devices (in my case drive B):
btrfs device delete missing /mnt
Finally I checked whether everything went well with the commands btrfs filesystem show, blkid and btrfs fi df /mnt, as mentioned above in the question. Both drives have the same UUID but different UUID_SUB values and are reported as being in RAID 1 mode.
Congratulations, it worked!
Personal note
I treat the described behaviour of a failing initramfs as expected until someone proves me wrong. Maybe it's a way of telling me that I should now react carefully because a disk crashed horribly, but that's just a guess.
Explanation regarding the need for manual degradation
In the meantime I found an interesting discussion related to this topic on the Linux kernel developers' mailing list. Because of its relevance I want to cite a passage written by Duncan which I think is really important to know, especially for new users:
You should be able to mount a two-device btrfs raid1 filesystem with only a single device with the degraded mount option, tho I believe current kernels refuse a read-write mount in that case, so you'll have read-only access until you btrfs device add a second device, so it can do normal raid1 mode once again. [...] Meanwhile, since the degraded mount-opt is in fact a no-op if btrfs can actually find all components of the filesystem, some people choose to simply add degraded to their standard mount options (edit the grub config to add it at every boot), so they don't have to worry about it. However, that is NOT RECOMMENDED, as the accepted wisdom is that the failure to mount undegraded serves as a warning to the sysadmin that something VERY WRONG is happening, and that they need to fix it. They can then add degraded temporarily if they wish, in ordered to get the filesystem to mount and thus be able to boot, but adding the option routinely at every boot bypasses this important warning, and it's all too likely that an admin will thus ignore the problem (or not know about it at all) until too late.
(Source: https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg31265.html)
Additional note: Although I have no swap partition(s) on the computer in my example, I would like to encourage people who want to use them to read the very interesting mail linked above, because it also explains the usage of swap with BTRFS in RAID mode.
Answer to question 2 - How to make other drives bootable
As far as I know at the moment, using grub-install /dev/sdb (and even an additional update-grub) does not seem to be enough. I will explain why I think so.
When I tried the reverse way, offline-unplugging drive A and booting only with drive B, the following happened. The GRUB bootloader appeared and I did the same steps as in point 2 of question 1. Right after confirming with F10
the boot process immediately stopped with a blank screen (I am talking about an active monitor, black background, no cursor). So obviously something is wrong with the bootloader on drive B. (Remember: I've got a RAID 1 and can't boot from my second drive after the first drive "fails".)
I helped myself by doing a hard reset, plugging in drive A again (so A and B were both present again) and booting into the OS. Because my drives A and B are absolutely identical, I copied the whole MBR (containing the bootloader) from the working drive A to B in raw mode with dd if=/dev/sda of=/dev/sdb bs=512 count=1
. I shut down the computer, unplugged drive A as before, and guess what happened? After performing the degradation steps again I could finally boot into the OS from drive B alone.
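For reference, the raw copy from above with comments; the second, 446-byte variant is an untested alternative that would copy only the boot code and leave drive B's own partition table untouched:
dd if=/dev/sda of=/dev/sdb bs=512 count=1   # full MBR: boot code + partition table + signature
dd if=/dev/sda of=/dev/sdb bs=446 count=1   # boot code only (untested alternative)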
To summarize: I still don't know whether this has to do with my partition table (MSDOS, not GPT), with the command grub-install in combination with BTRFS, or with something else. I also don't yet understand what drawbacks my raw copy has compared to a proper grub-install. (Maybe someone could clarify this a bit in a comment underneath this answer.)
Please note that I am still researching this and will update this answer again. I want to clear up more, but I need some more time to work through the raw contents of the MBR sectors of both drives and figure out whether the problem comes from the bootloader or from the disk identities.
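For anyone who wants to make the same comparison, a simple sketch using dd and xxd (device names as in my setup):
dd if=/dev/sda bs=512 count=1 2>/dev/null | xxd > /tmp/mbr-sda.hex   # dump drive A's first sector
dd if=/dev/sdb bs=512 count=1 2>/dev/null | xxd > /tmp/mbr-sdb.hex   # dump drive B's first sector
diff /tmp/mbr-sda.hex /tmp/mbr-sdb.hex                               # show any differing bytes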
Answer to question 3 - How to handle the mount option ssd
It depends on whether the mainboard passes the drive type through correctly. As stated on btrfs.wiki.kernel.org, BTRFS itself relies on values provided by the OS. Because other modules in the OS may also depend on these values, it is generally better to check /sys/block/sdX/queue/rotational for the appropriate value (0: SSD, 1: HDD). If the values are correct, leave the ssd option as it is, since BTRFS detects SSDs automatically from that value.
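A quick way to check this (sdX stands for whichever drives are in the array):
cat /sys/block/sda/queue/rotational   # 0 = SSD (non-rotational), 1 = HDD
cat /sys/block/sdb/queue/rotational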
Best Answer
No. Neither the RAID-5/6 code nor the self-healing code is fully functional. The self-healing code sometimes works and sometimes doesn't. If you use BTRFS RAID-5/6, the question isn't "Will my file-system die?", it's "When will my file-system die?".
BTRFS is claimed to be "Production" by the authors, and it is used as such by a few people, but please remember that Red Hat has removed BTRFS from their future releases and made XFS their default file-system.
I've been reading the BTRFS mailing list for eight years. One of the core initial problems they had (out of space when there's LOTS of space actually available) is still there. Also, they just posted information about missing/broken code for various drive failure/replacement activities, and it's not pretty.
While ZFS isn't perfect, the latest release is quite robust.
I would recommend RAID-Z2 if you can accept the performance. Otherwise, a pair of mirrors. The time to resilver (ZFS version of reconstruct) a 4TB drive is just too long without some redundancy.
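As a rough sketch of those two layouts (pool and device names are placeholders; using /dev/disk/by-id paths is generally preferred for real setups):
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd          # RAID-Z2: any two of the four drives may fail
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd   # alternative: a stripe of two mirrored pairs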