I'm testing btrfs on my embedded Linux system, which uses a uSD card for the root filesystem. The system is remotely deployed, so there is no human sysadmin to take care of it. My question is: what happens when the checksum fails while reading a file? Is there any way of detecting this automatically and sending a message back to my central server?
What actually happens when the checksum fails for a file using btrfs
btrfs flash-memory sd-card
Related Solutions
From wiki:

- Extent based file storage (2^64 byte == 16 EiB maximum file size)
- Space-efficient packing of small files
- Space-efficient indexed directories
- Dynamic inode allocation
- Writable snapshots, read-only snapshots
- Subvolumes (separate internal filesystem roots)
- Checksums on data and metadata
- Compression (gzip and LZO)
- Integrated multiple device support: RAID-0, RAID-1 and RAID-10 implementations
- Efficient incremental backup
- Background scrub process for finding and fixing errors on files with redundant copies
- Online filesystem defragmentation
Explanation for desktop users:
- Space-efficient packing of small files: Important for desktops with tens of thousands of files (maildirs, repos with code, etc).
- Dynamic inode allocation: avoids the inode limits of Ext2/3/4. Btrfs is in a whole different league here: ext4's inodes are allocated at filesystem creation time and cannot be added later (typically 1-2 million, with a hard limit of 4 billion), whereas btrfs allocates inodes dynamically as needed, with a hard limit of 2^64 (around 18.4 quintillion, roughly 4.6 billion times ext4's hard limit).
- Read-only snapshots: fast backups.
- Checksums on data and metadata: essential for data integrity. Ext4 only has metadata integrity.
- Compression: LZO compression is very fast.
- Background scrub process to find and fix errors on files with redundant copies: data integrity.
- Online filesystem defragmentation: autodefrag in 3.0 will defrag some types of files like databases (e.g. firefox profiles or akonadi storage).
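The background scrub mentioned above is what actually finds and repairs bad copies, but btrfs does not schedule it by itself; it has to be triggered periodically. A minimal sketch as a cron entry (the schedule, binary path and mount point are placeholders, not part of the answer):

```
# /etc/cron.d/btrfs-scrub -- scrub the root filesystem every Sunday
# at 03:00 (schedule and mount point are examples; adjust to taste).
0 3 * * 0  root  /usr/bin/btrfs scrub start -Bq /
```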
I recommend kernel 3.0 or newer. Btrfs is also a good filesystem for SSDs.
Answer to question 1 - How to start after one drive failing
I could restore the RAID 1 by doing the following steps:
I took a spare, already formatted drive (call it C) and plugged it into the same SATA port where the defective drive B had been.
After that I started the computer and, in the GRUB boot menu, pressed `e` to edit the boot entry before booting, as described on wiki.ubuntuusers.de:

a. I scrolled to the relevant boot entry and located the following rows:

```
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' 01234567-89ab-cdef-0123-456789abcdef
else
  search --no-floppy --fs-uuid --set=root 01234567-89ab-cdef-0123-456789abcdef
fi
echo 'Loading Linux 3.14-2-amd64...'
linux /boot/vmlinuz-3.14-2-amd64 root=UUID=01234567-89ab-cdef-0123-456789abcdef ro quiet
```
b. Then I edited row 1 and changed the drive number to the working hard disk (in my case it remains `hd0`; if multiple drives are still plugged in it might be `hd1`):

```
set root='hd0,msdos1'
```
c. I deactivated rows 2 to 6 by commenting them out with a leading `#`:

```
#if [ x$feature_platform_search_hint = xy ]; then
#  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' 01234567-89ab-cdef-0123-456789abcdef
#else
#  search --no-floppy --fs-uuid --set=root 01234567-89ab-cdef-0123-456789abcdef
#fi
```
d. After that I edited row 8 and inserted a root flag to allow mounting the degraded RAID (`rootflags=degraded`):

```
linux /boot/vmlinuz-3.14-2-amd64 root=UUID=01234567-89ab-cdef-0123-456789abcdef ro rootflags=degraded quiet
```
e. I booted the just-edited entry by pressing `F10`. The system started.

After the OS had booted fully, I had to add the new drive C to my RAID 1. I did it as described on btrfs.wiki.kernel.org:
a. I mounted the still working drive A:

```
mount -o degraded /dev/sda1 /mnt
```

b. I added the new drive C:

```
btrfs device add /dev/sdb1 /mnt
```

c. After that I removed the old device (in my case drive B):

```
btrfs device delete missing /mnt
```
Finally I checked that everything went well with the commands `btrfs filesystem show`, `blkid` and `btrfs fi df /mnt`, as mentioned above in the question. Both drives have the same UUID but different UUID_SUB values, and both are reported as being in RAID 1 mode. Congratulations, it worked!
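That final check can also be scripted for unattended setups. A small sketch; the `profiles` helper is my own naming, and the line format it expects is my assumption about `btrfs fi df` output:

```shell
# Hypothetical helper: extract the block-group profiles from
# 'btrfs fi df <mountpoint>' output fed on stdin, e.g. "Data, RAID1".
profiles() {
  grep -oE '^(Data|Metadata|System), [A-Za-z0-9]+'
}

# Usage on a live system (requires root):
#   btrfs fi df /mnt | profiles
```

If every reported profile reads RAID1, the rebuild finished as expected.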
Personal note
I treat the described behaviour of a failing initramfs as expected until someone proves me wrong. Maybe it is a way of telling me that I should act carefully now because my disk crashed horribly, but that's just guessing.
Explanation regarding the need for manual degradation
In the meantime I found an interesting discussion on this topic on the Linux kernel developers mailing list. Because of its relevance I want to cite a passage written by Duncan, which I think is really important to know, especially for new users:
You should be able to mount a two-device btrfs raid1 filesystem with only a single device with the degraded mount option, tho I believe current kernels refuse a read-write mount in that case, so you'll have read-only access until you btrfs device add a second device, so it can do normal raid1 mode once again. [...] Meanwhile, since the degraded mount-opt is in fact a no-op if btrfs can actually find all components of the filesystem, some people choose to simply add degraded to their standard mount options (edit the grub config to add it at every boot), so they don't have to worry about it. However, that is NOT RECOMMENDED, as the accepted wisdom is that the failure to mount undegraded serves as a warning to the sysadmin that something VERY WRONG is happening, and that they need to fix it. They can then add degraded temporarily if they wish, in ordered to get the filesystem to mount and thus be able to boot, but adding the option routinely at every boot bypasses this important warning, and it's all too likely that an admin will thus ignore the problem (or not know about it at all) until too late.
(Source: https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg31265.html)
Additional note: although I have no swap partition on the computer in my example, I would like to encourage people who want one to read the mail linked above, because it explains the usage of swap with BTRFS in RAID mode.
Answer to question 2 - How to make other drives bootable
As far as I know so far, using `grub-install /dev/sdb` (and even an additional `update-grub`) does not seem to be enough. I will explain why I think so.
When I tried the reverse by offline-unplugging drive A and booting only with drive B, the following happened: the GRUB bootloader appeared and I performed the same steps as in point 2 of question 1. Right after confirming with `F10`, the boot process immediately stopped with a blank screen (an active monitor, black background, no cursor). So obviously something is wrong with the bootloader on drive B. (Remember: I have a RAID 1 and cannot boot from my second drive after the first drive "fails".)
I helped myself by doing a hard reset, plugging drive A back in (so both A and B were present again) and booting into the OS. Because my drives A and B are absolutely identical, I copied the whole MBR (containing the bootloader) from the working drive A to drive B in raw mode with `dd if=/dev/sda of=/dev/sdb bs=512 count=1`. I shut down the computer, unplugged drive A as before, and guess what happened? After performing the degradation steps again, I could finally boot into the OS from drive B alone.
To summarize: I still don't know whether this has to do with my partition table (MSDOS, not GPT), with `grub-install` in combination with BTRFS, or with something else. I also don't know the extent of the potential drawbacks of my raw copy compared to a proper `grub-install`. (Maybe someone could clarify this in a comment underneath this answer.)
Please note that I am still researching this and will update this answer again. I want to clear up more, but I need more time to work through the raw MBR sector layout of both drives and figure out whether the problem comes from the bootloader or from the disk identities.
Answer to question 3 - How to handle the mount option ssd
It depends on whether the mainboard passes the drive information on correctly. As stated on btrfs.wiki.kernel.org, BTRFS itself relies on values provided by the OS. Because other modules in the OS may also depend on these values, it is generally better to check `/sys/block/sdX/queue/rotational` for the appropriate value (0: SSD, 1: HDD). If the value is correct, you can leave the explicit `ssd` mount option out, since btrfs detects SSDs through this value.
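The check can be wrapped in a tiny helper. The sysfs path is standard Linux; the function name and the placeholder device name are my own:

```shell
# Map the sysfs 'rotational' flag (0 or 1) to a human-readable type.
drive_type() {
  case "$1" in
    0) echo "SSD" ;;
    1) echo "HDD" ;;
    *) echo "unknown" ;;
  esac
}

# Usage on a live system (sda is a placeholder device name):
#   drive_type "$(cat /sys/block/sda/queue/rotational)"
```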
Best Answer
Btrfs uses crc32c checksums to verify the integrity of blocks. If the checksum doesn't match when a block is read, an alternative copy of the block is read instead, assuming an alternative exists (RAID 1). If that block also fails, or if there is no alternative, an EIO (input/output error) is returned.
I do not know of any way to detect errors automatically, but all errors are logged to the kernel log. Try `dmesg | grep btrfs` and look for checksum error messages. You could write a script that looks through the logs and notifies you of errors at regular intervals, or you could filter these log entries and trigger an rsyslog action.
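For a headless, remotely deployed box, such a watchdog could be sketched as below. The mail transport, address and the exact wording of the kernel messages are assumptions; adapt the grep pattern to what your kernel actually logs, and replace `mail` with whatever transport (curl, MQTT, ...) reaches your central server:

```shell
#!/bin/sh
# Sketch: scan the kernel log for btrfs complaints and forward them.

filter_btrfs_errors() {
  # Keep only lines that look like btrfs checksum/IO complaints
  # (the pattern is an assumption; tune it against real logs).
  grep -Ei 'btrfs.*(csum|checksum|error)'
}

errors=$(dmesg 2>/dev/null | filter_btrfs_errors)
if [ -n "$errors" ] && command -v mail >/dev/null 2>&1; then
  # Hypothetical transport: swap in what your deployment actually uses.
  printf '%s\n' "$errors" | mail -s "btrfs errors on $(hostname)" admin@example.com
fi
```

Run from cron at a suitable interval; an rsyslog filter on the same pattern achieves the same thing without polling.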