Mdadm: no devices listed in conf file were found – Debian 8 with GPT

Tags: boot, gpt, initramfs, mdadm

I have a Debian Jessie (kernel 3.16.7-ckt20-1+deb8u3) system with RAID1 across two 3 TB hard drives. GRUB can't be installed in the MBR gap on drives larger than 2 TB, so I use GPT with a 1 MB BIOS boot partition:

Device          Start        End    Sectors   Size Type
/dev/sda1        2048       4095       2048     1M BIOS boot
/dev/sda2        4096 1953128447 1953124352 931.3G Linux RAID
/dev/sda3  1953128448 5860532223 3907403776   1.8T Linux RAID

After rebooting (kernel upgrade deb8u2 -> deb8u3) the system ended up in the initramfs rescue shell:

Loading, please wait...
mdadm: No devices listed in conf file were found.

Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
  - Check rootdelay= (did the system wait long enough?)
  - Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/disk/by-uuid/5887d2e0-bae1-4ce8-ac6f-168fb183d7b0 does not exist.
Dropping to a shell!
modprobe: module ehci-orion not found in modules.dep

BusyBox v1.22.1 (Debian 1:1.22.0-9+deb8u1) built-in shell (ash)
Enter 'help' for a list of built-in commands.

/bin/sh: can't access tty; job control turned off
(initramfs) 

From the console I'm able to check that the RAID array seems to be OK:

 cat /proc/mdstat 
Personalities : [raid1] 
md2 : active raid1 sda3[0] sdb3[1]
      1953570816 blocks super 1.2 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

md1 : active raid1 sda2[0] sdb2[1]
      976431104 blocks super 1.2 [2/2] [UU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>

The missing device is md1, which holds the root partition but is not present under /dev/md/. Also, the config file /etc/mdadm/mdadm.conf shows the same content as mdadm --examine --scan:

$ mdadm --examine --scan
ARRAY /dev/md/1  metadata=1.2 UUID=c366b4e9:e33d2b69:3c738749:07b022c6 name=w02:1
ARRAY /dev/md/2  metadata=1.2 UUID=c32939b8:bc01f4ff:b85f00c6:b50aa29e name=w02:2

Using mdadm --examine /dev/sda2 I've checked that all RAID partitions are in a clean state (AA). Is there anything more I can do?

Can I continue booting manually, and if so, how? How would I increase rootdelay= for the next reboot? (The system did wait for the right device, so the second suggested problem doesn't apply.)
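For the record, my guess at a manual continuation from the (initramfs) prompt would be something like this, though I haven't verified it (it assumes mdadm and the raid1 module are available in the initramfs, which /proc/mdstat suggests they are):

```shell
# Unverified sketch, run at the (initramfs) prompt:
mdadm --assemble --scan   # assemble any arrays found in the superblocks
ls /dev/md*               # check whether md1 (root) now exists
exit                      # leave the rescue shell; boot should continue
```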

Best Answer

If you simply exit the rescue shell, the system will try to continue booting. If you need to increase rootdelay, add it to your kernel options in /etc/default/grub and run update-grub.
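A minimal sketch of that edit, assuming the stock Debian /etc/default/grub with a GRUB_CMDLINE_LINUX_DEFAULT line (demonstrated here on a temporary copy rather than the real file):

```shell
# Sketch: add rootdelay=10 to the kernel command line.
# Demonstrated on a temporary file; on the real system the target
# is /etc/default/grub, followed by running update-grub.
conf=$(mktemp)
cat > "$conf" <<'EOF'
GRUB_DEFAULT=0
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
EOF

# Prepend rootdelay=10 inside the existing quoted option string.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&rootdelay=10 /' "$conf"
grep GRUB_CMDLINE_LINUX_DEFAULT "$conf"
# -> GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=10 quiet"

# On the real system:
#   sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&rootdelay=10 /' /etc/default/grub
#   update-grub
```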
