Shrink MDADM Raid 5 containing LVM

lvm, lvreduce, mdadm, synology

The plan:

I've been growing a Raid5 array in a personal server for some years, and the time has come to move to something more suited to this application. I've accumulated 6x 8 TB drives containing media, backups, etc.

I bought a Synology 8-bay device to house all my drives, but now I'm trying to move the data across, which is where the trouble starts…

I bought one extra 8 TB drive, failed one device out of my Raid5, and used the two to create a Raid1 volume on the Synology. Synology allows upgrading Raid1 to Raid5 when adding another drive (or 5). I've moved 8 TB of data across to this new volume, and used every bit of storage space I have lying around to free up a total of 16 TB from the original 40 TB Raid volume.

The plan is now to reduce the original volume to 24 TB, reshape the Raid5 to a 3+2 Raid6, fail the extra two drives and add them to the Synology, then move across most of the remaining space. Rinse and repeat, and all the data should be across.

Steps so far:

/dev/md127 is mounted on /srv/media, and contains vg_data/lv_media in LVM. First, unmount the volume.

umount /srv/media
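
Not strictly necessary, but it doesn't hurt to confirm the whole stack before going any further (names here match my setup; output will obviously differ elsewhere):

lsblk /dev/md127
lvs -o +devices vg_data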

Make sure the fs is healthy before proceeding

e2fsck -ff /dev/vg_data/lv_media
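
To sanity-check how much can actually be reclaimed before settling on the 16 TB figure, the filesystem's superblock counters give a rough idea (free blocks × block size is roughly the reclaimable space):

dumpe2fs -h /dev/vg_data/lv_media | grep -E 'Block count|Free blocks|Block size'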

Reduce the logical volume in LVM by the free amount (16 TB)

lvreduce -L -16t -r -v vg_data/lv_media
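
lvs should now report the logical volume as 16 TB smaller:

lvs -o lv_name,lv_size vg_data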

Inspect where LVM created segments (as I've allocated them over the years, lv_media is no longer contiguous)

pvs -v --segments /dev/md127

Some segments are towards the end of the physical volume, with gaps of free space in the middle. Manually move the trailing segments to gaps closer to the beginning of the drive. This could involve splitting segments or creating new ones, but in my case I just had to move one to free up enough space.

pvmove --alloc anywhere /dev/md127:4982784-5046783 /dev/md127:284672-348671
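
It's worth re-running the segments listing afterwards to confirm nothing allocated is left past the intended 24 TB boundary; pvdisplay shows the same mapping per extent range:

pvs -v --segments /dev/md127
pvdisplay -m /dev/md127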

Resize the LVM PV to free up space on the RAID.

pvresize -v --setphysicalvolumesize 24000g /dev/md127
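
Check that the PV now reports the smaller size:

pvs -o pv_name,pv_size,pv_free /dev/md127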

Shrink the RAID (before reshaping it)

mdadm --grow /dev/md127 --size=25769803776

And this is where I get stuck. The mdadm array refuses to acknowledge that LVM is now using less space, and complains that it can't shrink the array past the already allocated space:

mdadm: Cannot set device size for /dev/md127: No space left on device

Any idea of how to safely reduce the size of the array?

Best Answer

@frostschutz You beauty! I don't know how I missed that, but I must have read too many man pages in the last week!

mdadm --grow /dev/md127 --array-size=25769803776
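
In case anyone else is puzzled by the number: without a suffix mdadm treats it as KiB, so this is simply 24 TiB expressed in KiB:

echo $((24 * 1024 * 1024 * 1024))
25769803776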

Changing --array-size like this is a temporary, safe, reversible way to check that your data is still intact after the size change. If it isn't, the size can be restored in the same way without affecting the data.
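
If I'm reading the man page right, a value of max restores the natural size, so backing the change out would just be:

mdadm --grow /dev/md127 --array-size=max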

e2fsck -ff /dev/mapper/vg_data-lv_media

This check passed without error, so we're good to go. It looks like the change from Raid5 to Raid6 introduces additional complexity that isn't needed for my migration, and the safest way to change the array shape is to have an additional spare available during the process, so I went with:

mdadm --grow /dev/md127 -l5 -n4 --backup-file /srv/cache/raid.backup

The backup file is on an SSD, in the hope that it helps things along a little. It's going well so far, and I'll accept this answer if there is a happy ending to this story...
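
I'm keeping an eye on the reshape with the usual suspects, and once it finishes, the two surplus drives should drop to spares and can then be detached (the device name below is just a placeholder):

cat /proc/mdstat
mdadm --detail /dev/md127
mdadm /dev/md127 --remove /dev/sdX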
