Linux – How to Fix MDADM RAID Lost After Reboot

linux, mdadm, raid, software-raid, Ubuntu

I am kind of scared right now, so I hope you can shed some light on my problem!

A few weeks ago, I bought a new 2TB drive and decided to set up a software RAID 5 with mdadm on my HTPC (drives sdb, sdc and sde). I quickly searched on Google and found this tutorial.

I then proceeded to follow the instructions: create a new array, watch /proc/mdstat for the status, etc., and after a couple of hours my array was complete! Joy everywhere, everything was good, and my files were happily accessible.
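
For the record, the "watch" part was nothing fancy; as far as I remember it was just the standard check below, run while the initial sync was going:

watch cat /proc/mdstat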

BUT!!

Yesterday, I had to shut down my HTPC to change a fan. After the reboot, oh my oh my, my RAID wasn't mounting. And since I'm quite a "noob" with mdadm, I'm totally lost.

When I run fdisk -l, here is the result:

xxxxx@HTPC:~$ sudo fdisk -l /dev/sdb /dev/sdc /dev/sde
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FD6454FC-BD66-4AB5-8970-28CF6A25D840

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 3907028991 3907026944  1.8T Linux RAID


Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F94D03A1-D7BE-416C-8336-75F1F47D2FD1

Device     Start        End    Sectors  Size Type
/dev/sdc1   2048 3907029134 3907027087  1.8T Linux filesystem


Disk /dev/sde: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

I'm more than confused! For some reason, not only do 2 of the 3 drives have a partition, but those partitions are the very ones I deleted in the first place when I followed the tutorial. The reason /dev/sdb1 shows as "Linux RAID" is that I followed another solution on Super User (New mdadm RAID vanish after reboot) without success.
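
(Side note: I believe the type change from that other answer is done with something like the command below; the partition number is my assumption, since I don't have the exact command anymore.)

sudo sgdisk --typecode=1:FD00 /dev/sdb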

And here is the result of executing mdadm --assemble:

xxxxx@HTPC:/etc/mdadm$ sudo mdadm --assemble --scan -v
mdadm: looking for devices for /dev/md0
mdadm: No super block found on /dev/dm-1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/dm-1
mdadm: No super block found on /dev/dm-0 (Expected magic a92b4efc, got 0000040e)
mdadm: no RAID superblock on /dev/dm-0
mdadm: cannot open device /dev/sr0: No medium found
mdadm: No super block found on /dev/sdd1 (Expected magic a92b4efc, got 00000401)
mdadm: no RAID superblock on /dev/sdd1
mdadm: No super block found on /dev/sdd (Expected magic a92b4efc, got d07f4513)
mdadm: no RAID superblock on /dev/sdd
mdadm: No super block found on /dev/sdc1 (Expected magic a92b4efc, got 00000401)
mdadm: no RAID superblock on /dev/sdc1
mdadm: No super block found on /dev/sdc (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdc
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got 00000401)
mdadm: no RAID superblock on /dev/sdb1
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got f18558c3)
mdadm: no RAID superblock on /dev/sda1
mdadm: No super block found on /dev/sda (Expected magic a92b4efc, got 70ce7eb3)
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sde is identified as a member of /dev/md0, slot 2.
mdadm: no uptodate device for slot 0 of /dev/md0
mdadm: no uptodate device for slot 1 of /dev/md0
mdadm: added /dev/sde to /dev/md0 as 2
mdadm: /dev/md0 assembled from 1 drive - not enough to start the array.
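
(For anyone diagnosing something similar: the per-drive superblocks can be inspected directly with the generic command below; going by the assemble output above, in my case it should only find metadata on /dev/sde.)

sudo mdadm --examine /dev/sdb /dev/sdc /dev/sde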

I already checked with smartmontools and all the drives are "healthy". Is there anything that could be done to save my data? After some research, it seems that tutorial wasn't the best one, but... hell, everything was working for a time.
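
(The health check itself was just the basic SMART status, along the lines of:)

sudo smartctl -H /dev/sdb
sudo smartctl -H /dev/sdc
sudo smartctl -H /dev/sde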

UPDATE:
By sheer luck, I found the exact command I used to create the array in my bash_history!

sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sde

Maybe, just maybe, I should run it again so that my RAID is brought back to life? My only concern is getting back "some" of the data on those drives. I'll redo the setup from a clean slate after that.
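
(From what I've gathered since, the reason this can work is that --create with the exact same level, device order, chunk size and metadata version rewrites only the superblocks, not the data. A variant that keeps things safer, suggested in several recovery guides, adds --assume-clean to skip the parity resync and then verifies everything read-only first; treat this as a sketch, not gospel:)

sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 --assume-clean /dev/sdb /dev/sdc /dev/sde
sudo fsck -n /dev/md0            # read-only check, changes nothing
sudo mount -o ro /dev/md0 /mnt   # mount read-only and inspect the files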

Best Answer

Well, it turns out that as a last hurrah, I re-ran the "create" command I previously used to build the array in the first place, and... guess who got his data back!!

Let's just say I'm going to back up all that good stuff and rebuild my array from scratch. Thanks everyone for the help!
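
(Follow-up for anyone landing here with the same symptom: on Ubuntu, the usual reason a freshly created array vanishes after a reboot is that it was never recorded in mdadm.conf or the initramfs. Once the array is up, the standard persistence step is roughly the following; paths assume Debian/Ubuntu:)

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u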
