Ubuntu – Is It Possible To Recover A Partial LVM Logical Volume



It is an Ubuntu 12.04 VirtualBox VM with 5 virtual HDDs (VDIs). Note: this is just a test VM, so it wasn't planned ahead very well:

  1. ubuntu.vdi for / (/dev/mapper/ubuntu-root AKA /dev/ubuntu/root) and /home (/dev/mapper/ubuntu-home)
  2. weblogic.vdi – /dev/sdb (mounted on /bea for weblogic and other stuff)
  3. btrfs1.vdi – /dev/sdc (part of btrfs -m raid1 -d raid1 configuration)
  4. btrfs2.vdi – /dev/sdd (part of btrfs -m raid1 -d raid1 configuration)
  5. more.vdi – /dev/sde (I added this virtual HDD because / ran out of inodes and it wasn't easy to figure out what to delete to free some up, so I just added the new virtual HDD, created a PV on it, added it to the existing volume group ubuntu, and grew the root logical volume to work around the inode issue -_-)

What happened?

Last Friday, before finishing up, I wanted to free up some disk space on that box. For some reason I thought more.vdi was useless and tried to detach it from the VM; when detaching, I clicked Delete by mistake (should have clicked Keep files, damn!). Unfortunately I had no backup of it. All too late.

What I have tried

I tried to undelete the .vdi file (using testdisk and photorec), but it took too long and recovered heaps of .vdi files that I didn't want (huge; they filled the disk, damn!). I finally gave up. Fortunately, most of the data is on a separate ext4 partition and on the btrfs volumes.

Out of curiosity, I still tried to mount the logical volumes to see whether it is possible to at least recover /var and /etc.

I booted from the System Rescue CD and tried to activate the volume groups, and got:

Couldn't find device with uuid xxxx.
Refusing activation of the partial LV root. Use --partial to override.
1 logical volume(s) in volume group "ubuntu" now active.

I was able to mount the home LV, but not the root LV.

I am wondering whether it is possible to access the root LV any more. Under the bonnet, data on LV root (/) was striped onto more.vdi (the deleted PV), so I know it's almost impossible to recover.

But I am still curious how system administrators/DevOps folks deal with this sort of situation ;-)

Thanks in advance.

Best Answer

[EDIT: If you are unfamiliar with LVM, please read this overview or at least the Wikipedia page about it.]

LVM splits its Physical Volumes in slices called "Physical Extents" (PE). You can check which Logical Volumes have PEs allocated in the PVs that you still have by issuing this command:

pvdisplay -m

which outputs something like this (showing only one PV here):

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               htpcvg
  PV Size               929.27 GiB / not usable 18.71 MiB
  Allocatable           yes 
  PE Size               32.00 MiB
  Total PE              29736
  Free PE               1190
  Allocated PE          28546
  PV UUID               7SfjbY-3dy4-UDeB-iSEL-3R9Y-vvnv-O7m9jr

  --- Physical Segments ---
  Physical extent 0 to 28545:
    Logical volume      /dev/htpcvg/home
    Logical extents     29430 to 57975
  Physical extent 28546 to 29735:
    FREE

With this information you can tell how much of each of your LVs is stored on the PVs you still have.
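As a rough sketch (assuming the pvdisplay -m segment layout shown above and a POSIX awk; summarise_extents is just an illustrative helper name), you can total the surviving extents per LV. On the real system you would pipe pvdisplay -m into it instead of the sample here-document:

```shell
#!/bin/sh
# Sketch: count how many physical extents of each LV survive on the
# remaining PVs, from `pvdisplay -m` output. Assumes the segment layout
# shown above ("Physical extent A to B:" followed by "Logical volume ...").
summarise_extents() {
  awk '
    /Physical extent/ { sub(/:$/, "", $5); span = $5 - $3 + 1 }
    /Logical volume/  { pe[$3] += span }
    END { for (lv in pe) printf "%s %d PEs present\n", lv, pe[lv] }
  '
}

# Sample input taken from the listing above; use: pvdisplay -m | summarise_extents
summarise_extents <<'EOF'
  --- Physical Segments ---
  Physical extent 0 to 28545:
    Logical volume      /dev/htpcvg/home
    Logical extents     29430 to 57975
  Physical extent 28546 to 29735:
    FREE
EOF
# prints: /dev/htpcvg/home 28546 PEs present
```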

Then, if that information shows that some extents of your LVs are still available, you can activate the VG that contains them with this other command (edit VolGroupName): [EDIT: Please read the manpage]

vgchange -a y --partial VolGroupName

That should make your LVs "available", but faulty anyway (you won't be able to read the missing PEs, obviously). To be safe, you can also make your defunct LVs read-only:

lvchange -p r --partial /path/to/logical/volume

Now, given that you probably won't be able to mount the filesystems that reside in your partial LVs, you should copy what's left of them with ddrescue:

ddrescue -n /dev/mapper/yourVG-yourLV /file/for/the/dump ddrescue.log
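To gauge how much actually came back, you can total the rescued regions from the ddrescue mapfile (the ddrescue.log argument above). This is only a sketch: it assumes the documented GNU ddrescue mapfile layout, where comment lines start with # and data lines are "pos size status" with + marking successfully read areas; rescued_bytes is just an illustrative helper name:

```shell
#!/bin/sh
# Sketch: sum rescued vs. unreadable bytes from a GNU ddrescue mapfile.
# Data lines look like "0x00000000  0x00010000  +" (pos, size, status);
# a status of "+" means the area was read successfully.
rescued_bytes() {
  good=0; bad=0
  while read -r pos size status; do
    case $pos in '#'*|'') continue ;; esac   # skip comments and blank lines
    [ -z "$status" ] && continue             # skip the two-field status line
    if [ "$status" = "+" ]; then
      good=$((good + size))                  # hex constants work in sh arithmetic
    else
      bad=$((bad + size))
    fi
  done
  printf 'rescued=%d missing=%d\n' "$good" "$bad"
}

# Sample mapfile; on the real system, run: rescued_bytes < ddrescue.log
rescued_bytes <<'EOF'
# Mapfile. Created by GNU ddrescue
# current_pos  current_status
0x00000000     +
#      pos        size  status
0x00000000  0x00010000  +
0x00010000  0x00008000  -
EOF
# prints: rescued=65536 missing=32768
```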

And then do some forensic analysis on that dump. You have many options here; photorec is just one.
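For instance, before reaching for photorec you can check whether the primary superblock even survived in the dump. A sketch, assuming an ext2/3/4 filesystem (the magic number 0xEF53 sits at byte offset 1080 of the filesystem, stored little-endian); has_ext_magic is just an illustrative helper name:

```shell
#!/bin/sh
# Sketch: test whether the ext2/3/4 superblock magic survived in the dump.
# The magic 0xEF53 is stored little-endian at byte offset 1080 (0x438).
has_ext_magic() {
  magic=$(dd if="$1" bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
  [ "$magic" = "53ef" ]
}

if has_ext_magic /file/for/the/dump; then
  echo "superblock magic intact - try: mount -o ro,loop /file/for/the/dump /mnt"
else
  echo "primary superblock damaged - try e2fsck with a backup superblock (-b)"
fi
```

If the magic is gone, e2fsck can often fall back to one of the backup superblocks (32768 is a common location on 4 KiB-block filesystems), and photorec remains the last resort.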
