GRUB Legacy complaining about filesystem being unknown

Tags: filesystems, grub-legacy, kvm, openstack

I have several VMs in an OpenStack (KVM) cluster. When they are built from an image that had, say, a single 5 GB partition, they inherit that same disk geometry. I've found several methods to resize them from the OpenStack hosts themselves, but I'd also like to be able to resize them from within the VMs, just so I have that method available too.

One approach would be to use fdisk to delete the partition, recreate it, write out the new partition table, reboot, and then run resize2fs after booting back into the VM. When I recently tried this, though, it didn't work as expected, resulting in a VM that hangs at a GRUB prompt. This is a CentOS 6.7 VM, so the bootloader is GRUB Legacy.

[screenshot: VM console stopped at a bare GRUB prompt]

What are my options for getting this VM's filesystem back? I suspect I could use virt-manager to gain access to the VM, attach a live CD ISO, "boot" from that, and repair things from there, but is there a more direct way to recover access and boot this VM?

Best Answer

So my issue was with how I was deleting and recreating the partition. I was getting tripped up because fdisk was not showing the partition's starting location in sectors. When I properly invoked fdisk like so:

$ sudo fdisk -c -u /dev/vda

Command (m for help): p

Disk /dev/vda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004064e

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    31459327    15728640   83  Linux

It was pretty obvious that I had not kept the same starting sector when I created the new partition, which is why GRUB could no longer find its filesystem.
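To avoid repeating that mistake, the original start sector can be captured before the partition is touched, so the recreated partition can reuse the exact same value. A small sketch, assuming the root disk is /dev/vda as in the transcripts above:

```shell
# Record /dev/vda1's current start sector before deleting it.
# The awk handles both bootable ("*" in column 2) and non-bootable rows.
start=$(sudo fdisk -l -u /dev/vda \
  | awk '$1 == "/dev/vda1" { s = ($2 == "*") ? $3 : $2; print s }')
echo "preserve this start sector: $start"
```

The new partition's first sector must equal this value (2048 here); only the last sector should change.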

From fdisk's usage guidance:

Options:
 -c                        switch off DOS-compatible mode
 -u <size>                 give sizes in sectors instead of cylinders

So, simply by paying special attention to this detail, I was able to use the following process to extend my VM's partition across all of the available disk space.

Process for resizing

To delete the existing partition:

Command (m for help): d
Selected partition 1

Now add the new one:

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-83886079, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-83886079, default 83886079):
Using default value 83886079
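The interactive steps above can also be scripted by feeding fdisk its answers on stdin. This is a hedged sketch, not a supported batch mode of GRUB-era fdisk, so try it on a scratch disk first; the two blank lines accept the default first and last sectors:

```shell
# Answers fed to fdisk, one per line:
#   d        delete (with a single partition, fdisk selects it automatically)
#   n p 1    new primary partition number 1
#   (blank)  accept default first sector (2048)
#   (blank)  accept default last sector (83886079)
#   a 1      toggle the bootable flag on partition 1
#   w        write the table and exit
fdisk_input='d
n
p
1


a
1
w'
# Destructive -- verify on a scratch disk before running against /dev/vda:
# printf '%s\n' "$fdisk_input" | sudo fdisk -c -u /dev/vda
printf '%s\n' "$fdisk_input" | wc -l
```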

Make it bootable:

Command (m for help): a
Partition number (1-4): p
Partition number (1-4): 1

And confirm all this:

Command (m for help): p

Disk /dev/vda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004064e

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83886079    41942016   83  Linux
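As a quick sanity check on the table above, fdisk's Blocks column is just the partition's sector count converted to 1 KiB blocks (two 512-byte sectors per block):

```shell
# Blocks = (End - Start + 1) sectors / 2, for 512-byte sectors.
start=2048
end=83886079
blocks=$(( (end - start + 1) / 2 ))
echo "$blocks"   # 41942016, matching fdisk's Blocks column
```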

Commit it to the HDD:

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

Now reboot the system and run resize2fs if needed:

$ sudo resize2fs /dev/vda1
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 10485504 blocks long.  Nothing to do!
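The "Nothing to do!" message means the filesystem had apparently already been grown by the time resize2fs ran. The numbers can be cross-checked by hand: resize2fs reports 10485504 filesystem blocks, and with the ext4 default 4 KiB block size that should exactly match the partition's 41942016 1-KiB blocks from fdisk:

```shell
# 10485504 filesystem blocks x 4 KiB vs. 41942016 fdisk blocks x 1 KiB
fs_bytes=$(( 10485504 * 4096 ))
part_bytes=$(( 41942016 * 1024 ))
[ "$fs_bytes" -eq "$part_bytes" ] && echo "filesystem fills the partition"
# prints: filesystem fills the partition
```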

And confirm:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G  807M   37G   3% /
tmpfs           1.9G     0  1.9G   0% /dev/shm