Lost superblock in md raid

mdadm software-raid superblock

This issue is on Red Hat Enterprise Linux 5.

Due to some miscommunication, two LUNs in our environment were enlarged from 1.2 TB to 1.7 TB.

Now, after a reboot, mdadm can no longer find the superblocks to assemble the array.

The md(4) man page describes where a version 0.90 superblock lives:

The common format — known as version 0.90 — has a superblock that is
4K long and is written into a 64K aligned block that starts at least
64K and less than 128K from the end of the device (i.e. to get the
address of the superblock, round the size of the device down to a
multiple of 64K and then subtract 64K).
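In other words, the superblock sits near the end of the device. After the LUNs grew to 1.7 TB, mdadm scans near the new end, where nothing has ever been written; the old superblock is still on disk, just no longer where mdadm looks. A rough way to confirm that by hand (a sketch, assuming the original 1320702443520-byte size from the fdisk output below):

# A 0.90 superblock starts at (size rounded down to 64K) - 64K,
# i.e. in the last full 64K block below that boundary.
SIZE=1320702443520    # original device size in bytes
dd if=/dev/dm-10 bs=64K skip=$(( SIZE/65536 - 1 )) count=1 2>/dev/null | hexdump -C | head -n 1
# On a little-endian machine the md magic 0xa92b4efc shows up as "fc 4e 2b a9"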

I found some old documentation:

# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Tue Jul 10 17:45:00 2012
     Raid Level : raid1
     Array Size : 1289748416 (1230.00 GiB 1320.70 GB)
  Used Dev Size : 1289748416 (1230.00 GiB 1320.70 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Apr 17 15:03:50 2013
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 2799bd51:67eb54d2:1fcd3c90:293311a1
         Events : 0.39

    Number   Major   Minor   RaidDevice State
       0     253       10        0      active sync   /dev/dm-10
       1     253       11        1      active sync   /dev/dm-11

# fdisk -l /dev/dm-10 /dev/dm-11

Disk /dev/dm-10: 1320.7 GB, 1320702443520 bytes
255 heads, 63 sectors/track, 160566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-10 doesn't contain a valid partition table

Disk /dev/dm-11: 1320.7 GB, 1320702443520 bytes
255 heads, 63 sectors/track, 160566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-11 doesn't contain a valid partition table

Best Answer

Reverting the devices to their original size should just restore the RAID device.

You can confirm that by doing:

losetup --sizelimit=$((1230*(2**30))) -r /dev/loop1 /dev/dm-10
mdadm -E /dev/loop1

If that size is correct, mdadm should now find the superblock, and you can then resize the LUNs back to it (the -r above makes the loop device read-only, so this test does no harm).
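The same read-only check works for the second device (a sketch, assuming /dev/loop2 is free):

losetup --sizelimit=$((1230*(2**30))) -r /dev/loop2 /dev/dm-11
mdadm -E /dev/loop2

Detach the test loop devices afterwards with losetup -d /dev/loop1 and losetup -d /dev/loop2.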

If you actually do want to enlarge the md0 array while keeping the 0.90 metadata, one approach is to wrap each LUN in a linear device-mapper target of the original size:

dmsetup create d1 --table "0 $((1230*(2**30)/512)) linear /dev/dm-10 0"
dmsetup create d2 --table "0 $((1230*(2**30)/512)) linear /dev/dm-11 0"
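The table length is the old size in 512-byte sectors; a quick sanity check of the arithmetic (it matches the 1320702443520 bytes reported by fdisk above):

echo $(( 1230 * 2**30 ))         # 1320702443520 bytes
echo $(( 1230 * 2**30 / 512 ))   # 2579496960 sectors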

Once again,

mdadm -E /dev/mapper/d1

should now find the superblock.

Assemble the array on those mapper devices:

mdadm -A /dev/md0 /dev/mapper/d[12]
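If the assembly succeeds, /proc/mdstat should show md0 active on the two mapper devices at the old 1.2 TB size:

cat /proc/mdstat   # expect both members present, [2/2] [UU]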

Then resize the mapper devices to the full extent of the underlying LUNs:

dmsetup suspend d1
dmsetup suspend d2
dmsetup reload d1 --table "0 $(blockdev --getsize /dev/dm-10) linear /dev/dm-10 0"
dmsetup reload d2 --table "0 $(blockdev --getsize /dev/dm-11) linear /dev/dm-11 0"
dmsetup resume d1
dmsetup resume d2
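Before growing, it is worth checking that the mapper devices really picked up the new size:

blockdev --getsize64 /dev/mapper/d1   # should now report the full 1.7 TB
dmsetup table d1                      # should map the whole underlying device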

Then you can use --grow to take advantage of the extra space:

mdadm /dev/md0 --grow --size max
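The grow triggers a resync of the newly added space; you can watch its progress until it completes:

cat /proc/mdstat   # shows resync progress on md0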

Wait for the extra space to be resynced, then stop the array, clean up the temporary dm devices, and reassemble on the original devices:

mdadm --stop /dev/md0
dmsetup remove d1
dmsetup remove d2
mdadm -A /dev/md0 /dev/dm-1[01]
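Finally, confirm that the reassembled array reports the grown size:

mdadm -D /dev/md0   # Array Size should now reflect the 1.7 TB LUNs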

You can use loop devices to test the whole procedure beforehand. Here's a transcript of the run I did to verify that it works:

~# truncate -s 1230G a
~# truncate -s 1230G b
~# losetup /dev/loop1 a
~# losetup /dev/loop2 b
~# lsblk /dev/loop[12]
NAME  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop1   7:32   0   1.2T  0 loop
loop2   7:64   0   1.2T  0 loop
~# mdadm --create /dev/md0 --metadata 0.9 --level 1 --raid-devices 2 --assume-clean /dev/loop[12]
mdadm: array /dev/md0 started.
~# lsblk /dev/md0
NAME MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
md0    9:0    0   1.2T  0 raid1
~# truncate -s 1700G a
~# truncate -s 1700G b
~# losetup -c /dev/loop1
~# losetup -c /dev/loop2
~# lsblk /dev/loop[12]
NAME  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop1   7:32   0   1.7T  0 loop
└─md0   9:0    0   1.2T  0 raid1
loop2   7:64   0   1.7T  0 loop
└─md0   9:0    0   1.2T  0 raid1
~# mdadm -E /dev/loop1
mdadm: No md superblock detected on /dev/loop1.
(1)~# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
~# dmsetup create d1 --table "0 $((1230*(2**30)/512)) linear /dev/loop1 0"
~# dmsetup create d2 --table "0 $((1230*(2**30)/512)) linear /dev/loop2 0"
~# mdadm -A /dev/md0 /dev/mapper/d[12]
mdadm: /dev/md0 has been started with 2 drives.
~# lsblk /dev/mapper/d[12]
NAME       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
d1 (dm-19) 253:19   0   1.2T  0 dm
└─md0        9:0    0   1.2T  0 raid1
d2 (dm-20) 253:20   0   1.2T  0 dm
└─md0        9:0    0   1.2T  0 raid1
~# dmsetup suspend d1
~# dmsetup suspend d2
~# dmsetup reload d1 --table "0 $(blockdev --getsize /dev/loop1) linear /dev/loop1 0"
~# dmsetup reload d2 --table "0 $(blockdev --getsize /dev/loop2) linear /dev/loop2 0"
~# dmsetup resume d1
~# dmsetup resume d2
~# lsblk /dev/mapper/d[12]
NAME       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
d1 (dm-19) 253:19   0   1.7T  0 dm
└─md0        9:0    0   1.2T  0 raid1
d2 (dm-20) 253:20   0   1.7T  0 dm
└─md0        9:0    0   1.2T  0 raid1
~# mdadm /dev/md0 --grow --assume-clean --size max
mdadm: component size of /dev/md0 has been set to 1782579136K
~# lsblk /dev/md0
NAME MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
md0    9:0    0   1.7T  0 raid1
~# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 dm-19[0] dm-20[1]
      1782579136 blocks [2/2] [UU]

unused devices: <none>
~# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
~# dmsetup remove d1
~# dmsetup remove d2
~# mdadm -A /dev/md0 /dev/loop[12]
mdadm: /dev/md0 has been started with 2 drives.
~# lsblk /dev/md0
NAME MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
md0    9:0    0   1.7T  0 raid1
~# uname -rs
Linux 3.7-trunk-amd64