Linux – Arch does not mount btrfs array on boot

arch-linux, btrfs, systemd

My data partition is a multi-disk btrfs filesystem; my root disk is ext4.

Fstab:

UUID=290624c6-6b95-41fd-94a1-923ebca64b83   /           ext4        rw,relatime,data=ordered    0 1
/dev/sdc    /mnt/btrfs  btrfs   rw,relatime,compress-force=zlib,autodefrag  0   0

When I boot the machine, it will wait for 1m30s with the message

A start job is running for dev-sdc.device

And after that

Dependency failed for /mnt/btrfs

Once I log in, I can run

mount /mnt/btrfs
systemctl default

and the mount succeeds; the system then finishes booting normally.

I first thought I might need to do something with a mkinitcpio hook, but this page says:

Arch's default mkinitcpio package contains a standard btrfs hook, which is enough to get multi-device (RAID) support. Besides that, the kernel is capable of booting a single-device btrfs root without any hook.

So everything should work out of the box.

Why doesn't it work, and what should I do to fix it?

Best Answer

Two comments. First, mount by label or UUID instead of by device name; device names like /dev/sdc can change between boots.
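For example, you can find the filesystem's UUID with `blkid` and reference that in fstab instead of the device node (the UUID below is a placeholder; substitute your own):

```
# Find the UUID of the btrfs filesystem (any member device will do):
#   blkid /dev/sdc
# Then use it in /etc/fstab in place of /dev/sdc.
# The UUID below is a placeholder, not a real value.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/btrfs  btrfs  rw,relatime,compress-force=zlib,autodefrag  0  0
```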

Second, btrfs requires a `btrfs device scan` call before the kernel knows about multi-device btrfs filesystems on your machine. I expected Arch to handle this, but it didn't work for me until I created a service file for it at /etc/systemd/system/local-fs-pre.target.wants/btrfs-dev-scan.service:

[Unit]
Description=Btrfs scan devices
Before=local-fs-pre.target
DefaultDependencies=false

[Service]
Type=oneshot
ExecStart=/usr/bin/btrfs device scan

[Install]
WantedBy=local-fs-pre.target

DefaultDependencies=false is necessary; without it the unit's implicit dependencies break the boot ordering. (Non-Arch users may have btrfs located in /sbin instead of /usr/bin.)
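As a sketch, you can activate and check the unit without rebooting, using standard systemctl commands (the unit name matches the file above):

```
# Make systemd re-read unit files, then run the scan service once
# and inspect its result before the next reboot.
systemctl daemon-reload
systemctl start btrfs-dev-scan.service
systemctl status btrfs-dev-scan.service
```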

This is exactly what the btrfs mkinitcpio hook should handle (I realized that a bit later), but it is still possible that the hook is not working correctly in your setup.

However, you may have some other problem. That "Dependency failed" message suggests that some earlier required unit didn't start. I have no idea what that could be; check journalctl -b and search for dependency complaints. It usually states exactly what is missing, or at least gives you the chain of failed dependencies, since dependency failures propagate.
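For instance, systemd names the mount unit after its mount path, so you can query the journal for it directly (assuming the /mnt/btrfs mountpoint from the question):

```
# Messages for the mount unit generated from the fstab entry
journalctl -b -u mnt-btrfs.mount

# Broader search for dependency failures in the current boot
journalctl -b | grep -iE 'dependency|failed'
```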

You can also generate systemd-analyze plot > boot.svg and inspect the boot sequence: what started in what order, and who was waiting for whom. From that you can often guess what went wrong. And what does systemctl --failed say?
