Restoring the partition table and a fsarchiver backup from a dead disk to a new one

Tags: backup, dd, grub, hard-drive, partitioning

My openmediavault server's SSD died, and I replaced it with a new one (different brand, same capacity). Now I want to restore my last backup, made with fsarchiver via the OMV backup plugin, and I'm following this guide. The first 13 steps went fine, but I'm stuck on the last 2, where the critical work happens.

These were the partitions on my new NVMe SSD before attempting the restore (I had installed OMV on it):

Device         Boot     Start       End   Sectors   Size Id Type
/dev/nvme0n1p1           2048 486395903 486393856 231.9G 83 Linux
/dev/nvme0n1p2      486397950 488396799   1998850   976M  5 Extended
/dev/nvme0n1p5      486397952 488396799   1998848   976M 82 Linux swap / Solaris

After I ran the "restore grub and the partition table" step:

dd if=/mnt/array/Backup/omvbackup/backup-omv-30-ago-2021_03-00-01.grubparts of=/dev/nvme0n1

Now it looks like this:

Device         Boot Start       End   Sectors   Size Id Type
/dev/nvme0n1p1          1 488397167 488397167 232.9G ee GPT

And when I try to restore the main partition:

fsarchiver restfs backup-omv-30-ago-2021_03-00-01.fsa id=0,dest=/dev/nvme0n1p1

I get the following error:

oper_restore.c#152,convert_argv_to_strdicos(): "/dev/nvme0n1p1" is not a valid block device

So I think I messed up the partition table. Maybe the grubparts file should not be written to /dev/nvme0n1 but somewhere else? Before trying to restore the partition table, I could see that GRUB was installed with:

dd bs=512 count=1 if=/dev/nvme0n1 2>/dev/null | strings

But I can't see that anymore.

Edit: sizes of the different backup files:

-rw-r--r-- 1 root users        818 *.blkid
-rw-r--r-- 1 root users        590 *.fdisk
-rw-r--r-- 1 root users 5226895118 *.fsa
-rw-r--r-- 1 root users        446 *.grub
-rw-r--r-- 1 root users       1408 *.grubparts
-rw-r--r-- 1 root users       1035 *.packages

Best Answer

It looks like the .grubparts file is from the wrong disk. Your "old" partition list shows a normal MBR-format partition table, but what you restored looks like the "protective MBR" that is normally found on GPT-partitioned disks – it has the special partition of type 0xEE that usually indicates "you shouldn't be looking here, you should be looking at the GPT in sector 1 instead".
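One quick way to check a saved boot-sector image is to read that type byte directly: the MBR partition table begins at offset 0x1be (446), and each 16-byte entry stores its type 4 bytes in, so the first slot's type byte sits at offset 450. A minimal sketch (the helper name is mine; the file name in the usage comment is taken from the question):

```shell
#!/bin/sh
# Print the partition type of the first MBR entry in a saved boot-sector
# image. The table starts at offset 0x1be (446); the type byte sits 4
# bytes into each 16-byte entry, i.e. offset 450 for slot 1.
mbr_first_type() {
    dd if="$1" bs=1 skip=450 count=1 2>/dev/null | od -An -tx1 | tr -d ' '
}

# Usage (file name from the question):
#   mbr_first_type backup-omv-30-ago-2021_03-00-01.grubparts
# "ee" = protective MBR from a GPT disk; "83" = a normal Linux partition
# from an MBR-partitioned disk.
```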

(The MBR is in sector 0, while the 'main' GPT occupies sectors 1-33 and the 'backup' GPT is at the end of the disk.)
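If you want to confirm whether a GPT really is (or was) present, note that the GPT header in sector 1 always begins with the 8-byte ASCII signature "EFI PART", so it can be spotted with the same dd approach you used for the GRUB boot code. A small sketch (the helper name is mine):

```shell
#!/bin/sh
# Report whether a disk (or image file) carries a GPT header in sector 1.
# A valid GPT header always starts with the ASCII signature "EFI PART".
has_gpt() {
    sig=$(dd if="$1" bs=512 skip=1 count=1 2>/dev/null | head -c 8)
    [ "$sig" = "EFI PART" ]
}

# Usage (device name from the question):
#   has_gpt /dev/nvme0n1 && echo "sector 1 holds a GPT header"
```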

Also, GPT disks are typically used with UEFI firmware, and the EFI boot process doesn't use the "boot sector" – it is normal for the protective MBR to be accompanied by a completely blank boot code area. (The bootloader for EFI systems is stored as a regular file in a regular partition.)

There are two options:

  • Look for another file that might have the correct partition table.

    (Also, after restoring a partition table using dd, you might need to explicitly tell the kernel to rescan it – otherwise the /dev nodes won't appear on their own. This can be done using partx -u, or partprobe, or by running fdisk and asking it to 'w'rite the partitions it found.)

  • Manually rebuild the partition table from scratch, by creating partitions using the "start" and "end" sector numbers that you conveniently have in the old 'fdisk' output.

    (You don't need to manually create the "extended" partition, just p1 as "primary" and p5 as "logical".)
