First check the disks by running a SMART self-test on each of them:
for i in a b c d; do
smartctl -s on -t long /dev/sd$i
done
It might take a few hours to finish, but you can check each drive's test status every few minutes, e.g.
smartctl -l selftest /dev/sda
If a disk's status reports "not completed" because of read errors, that disk should be considered unsafe for the md1 reassembly. After the self-tests finish, you can start trying to reassemble your array. Optionally, if you want to be extra cautious, move the disks to another machine before continuing (just in case of bad RAM/controller/etc.).
Recently, I had a case exactly like this one. One drive failed, I re-added it to the array, but during the rebuild 3 of the 4 drives failed altogether. The contents of /proc/mdstat were the same as yours (maybe not in the same order):
md1 : inactive sdc2[2](S) sdd2[4](S) sdb2[1](S) sda2[0](S)
But I was lucky and reassembled the array with this
mdadm --assemble /dev/md1 --scan --force
By looking at the --examine output you provided, I can tell the following scenario happened: sdd2 failed, you removed it and re-added it, so it became a spare drive trying to rebuild. But while it was rebuilding, sda2 failed and then sdb2 failed. So the event counter is higher on sdc2 and sdd2, which were the last active drives in the array (although sdd didn't have the chance to rebuild and so is the most outdated of all). Because of the differences in the event counters, --force will be necessary. So you could also try this:
mdadm --assemble /dev/md1 /dev/sd[abc]2 --force
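If you want to see the event counter differences for yourself before forcing anything, they are visible in the examine output (device names as in your question):
mdadm --examine /dev/sd[abcd]2 | grep -E '/dev/sd|Events'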
To conclude, I think that if the forced assemble above fails, you should try to recreate the array like this:
mdadm --create /dev/md1 --assume-clean -l5 -n4 -c64 /dev/sd[abc]2 missing
If you do the --create, the missing part is important: don't try to add a fourth drive to the array, because then reconstruction will begin and you will lose your data. Creating the array with a missing drive will not change its contents, and you'll have the chance to get a copy elsewhere (RAID5 doesn't work the same way as RAID1).
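If the array does come up, a cautious first step is to mount it read-only and copy everything off before attempting any repairs; the mount point and backup target here are just examples:
mkdir -p /mnt/recovery
mount -o ro /dev/md1 /mnt/recovery      # read-only, so nothing on the array is modified
rsync -a /mnt/recovery/ /path/to/backup/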
If the --create also fails to bring the array up, try the solution (a Perl script) described here: Recreating an array.
If you finally manage to bring the array up, the filesystem will be unclean and probably corrupted. If one disk fails during a rebuild, the array is expected to stop and freeze, not doing any writes to the other disks. In this case two disks failed; maybe the system was performing write requests it wasn't able to complete, so there is some small chance you lost some data, but also a chance that you will never notice it :-)
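Once you have a copy of the data, a filesystem check is in order; assuming ext3/ext4, a report-only pass first is the safer way to gauge the damage:
fsck.ext4 -n -f /dev/md1       # dry run, reports problems but changes nothing
fsck.ext4 -f /dev/md1          # actual repair, only after the data is copied off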
edit: some clarification added.
fdisk is the wrong tool for disks >2TB. Use parted or gdisk instead.
It appears that /dev/sdc1 and /dev/sdd1 are 2TB partitions, so that's what limits your array size. The other disks have GPT, so I assume they are 3TB already, but you should check.
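You can check each disk's label type and partition sizes with something like this (sdc as an example):
parted /dev/sdc unit GiB print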
Basically you have to stop the array, enlarge each partition to 3TB (without changing the starting offset), then start it again and follow it up with a grow:
mdadm --grow /dev/md0 --size=max
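Put together, the offline path could look roughly like this; sdc1 and sdd1 are assumed to be the 2TB members, and a disk that still has an MBR/DOS label would first need a GPT label as in the per-disk procedure below:
mdadm --stop /dev/md0
parted /dev/sdc -- resizepart 1 100%    # grow the member partition to the end of the disk
parted /dev/sdd -- resizepart 1 100%
mdadm --assemble --scan
mdadm --grow /dev/md0 --size=max        # let md use the enlarged partitions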
If you can't stop the array, you'll have to fail each 2TB partition individually, repartition and re-add it. This might go faster if you add a write-intent bitmap first.
mdadm --grow /dev/md0 --bitmap=internal
Then for each disk individually,
mdadm /dev/md0 --fail /dev/disk1 # check mdstat for [UUUU] first
mdadm /dev/md0 --remove /dev/disk1
parted /dev/disk -- mklabel gpt mkpart primary 1mib -1mib
mdadm /dev/md0 --re-add /dev/disk1
mdadm --wait /dev/md0 # must wait for sync
Once that's done you can remove the bitmap again (keeping it may harm performance).
mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --size=max
Finally do your resize2fs or whatever.
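For an ext4 filesystem sitting directly on the array, for example:
resize2fs /dev/md0              # grow the filesystem to fill the enlarged array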
First, you create the raid array. Assuming the new drives are sdc, sdd, and sde, and you don't already have any raid arrays, and you have created a single raid partition on each, do:
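For example, something along these lines; RAID level 5 and the partition names sdc1/sdd1/sde1 are assumptions, so adjust them to what you actually have:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1
pvcreate /dev/md0               # prepare the new array as an LVM physical volume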
Then you add it to the vg, move the logical volumes over, and remove the existing pvs:
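Roughly like this, where myvg and /dev/sda2 stand in for your actual volume group name and old physical volume(s):
vgextend myvg /dev/md0          # add the new array to the volume group
pvmove /dev/sda2                # migrate all extents off an old PV (repeat for each old PV)
vgreduce myvg /dev/sda2         # drop the emptied PV from the VG
pvremove /dev/sda2              # wipe the LVM label from it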
Now you will need to transfer your /boot partition, rebuild your initramfs, and reinstall grub so the system can boot from the new disks:
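On a Debian/Ubuntu style system this step could look roughly as follows; /mnt/newboot and the device holding the new /boot are placeholders for your actual layout:
mkdir -p /mnt/newboot
mount /dev/md1 /mnt/newboot     # /dev/md1 stands in for wherever the new /boot lives
cp -a /boot/. /mnt/newboot/
update-initramfs -u             # rebuild the initramfs so it includes the mdadm/lvm bits
dpkg-reconfigure grub-pc        # reinstalls grub and brings up the disk selection menu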
A menu will ask which disks grub should be installed to. Select sdc, sdd, and sde. Now you can shutdown and remove the old disks.