Background
My objective is to use Packer to create an Amazon Machine Image (AMI) with several different paths mounted on separate filesystems to improve security. For example, /tmp should be mounted on a filesystem with the noexec option.
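To illustrate the goal (the particular option set here is just an example, not my final hardening profile), once /tmp lives on its own filesystem it can carry options like:
# Remount an already-separate /tmp with hardening options (illustrative only)
mount -o remount,noexec,nosuid,nodev /tmp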
Because I want building the AMI to be a fully automated process, I can't run the re-mounting commands inside a booted instance itself, so I am instead using the Packer amazon-chroot builder. This means I run an EC2 Instance and run Packer from that EC2 Instance. Packer then mounts an EBS Volume created from the EBS Snapshot behind a "source AMI", and I perform my operations on that mounted EBS Volume.
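In practice that just means running Packer on the builder instance (the template file name here is mine, not anything special to Packer):
packer validate hardened-ami.json     # sanity-check the template first
sudo packer build hardened-ami.json   # amazon-chroot needs root to attach and mount the EBS volume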
I am taking inspiration from a recent presentation on this topic whose slides are at http://wernerb.github.io/hashiconf-hardening.
My Question
When my EBS Volume (block device) is first mounted, here are the partitions I see on it from gdisk -l /dev/xvdf:
Disk /dev/xvdf: 16777216 sectors, 8.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 726A877B-31D7-4C00-99E4-5A2CCB8E0EAD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 16777182
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number  Start (sector)    End (sector)  Size        Code  Name
   1              4096        16777182  8.0 GiB     8300  Linux
 128              2048            4095  1024.0 KiB  EF02  BIOS Boot Partition
I then perform the following operations (a rough end-to-end sketch of these commands follows the fstab listing below):
- Delete the "Linux" partition with
sgdisk --delete 1 /dev/xvdf
- Create an LVM Volume Group with lvm vgcreate -y main /dev/xvdf1
- Create a series of LVM Logical Volumes and format them each with a command like /sbin/mkfs.ext4 -m0 -O ^64bit "/dev/main/lvroot"
- Mount them all and copy a bunch of files over
- Update /etc/fstab on the attached EBS Volume (this is /mnt/ebs-volume/etc/fstab from the perspective of my host system). The /etc/fstab I write to /dev/xvdf1 is:
#
/dev/mapper/main-lvroot / ext4 defaults,noatime 1 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/mapper/main-lvvar /var ext4 defaults 0 0
/dev/mapper/main-lvvarlog /var/log ext4 defaults 0 0
/dev/mapper/main-lvvarlog/audit /var/log/audit ext4 defaults 0 0
/dev/mapper/main-lvhome /home ext4 defaults 0 0
/dev/mapper/main-lvtmp /tmp ext4 defaults 0 0
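For reference, here is a consolidated sketch of the commands behind the steps above. The re-created partition, the LV sizes, and the final copy step are illustrative assumptions rather than an exact transcript of my provisioning script:
# Repartition the volume and build the LVM layout (sizes are placeholders)
sgdisk --delete 1 /dev/xvdf
sgdisk --new 1:4096:0 --typecode 1:8e00 /dev/xvdf   # re-create partition 1 as "Linux LVM"
lvm pvcreate /dev/xvdf1
lvm vgcreate -y main /dev/xvdf1
for lv in lvroot lvvar lvvarlog lvhome lvtmp; do
  lvm lvcreate -y -L 1G -n "$lv" main               # sizes here are placeholders
  /sbin/mkfs.ext4 -m0 -O ^64bit "/dev/main/$lv"
done
mount /dev/main/lvroot /mnt/ebs-volume              # then mount the rest, copy the files over,
                                                    # and write /etc/fstab as shown above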
Finally, Packer unmounts /dev/xvdf and makes an Amazon Machine Image (AMI) based on the contents of that EBS Volume.
So far so good, except that when I launch an EC2 Instance from the new AMI, it doesn't actually boot: I can't connect via SSH, and "View System Logs" in the AWS console shows nothing. So I'm assuming I'm messing something up around that "128" partition that contains the "BIOS Boot Partition". I'm also confused about how my LVM-created Logical Volumes are supposed to become "activated" when the new EC2 Instance boots up.
Basically, I'm missing a mental model for what needs to exist in that boot partition, and for how the EC2 Instance can boot and run LVM if I've used LVM to create the root volume itself. Do I need to create a separate partition for /boot, and if so, what do I put in it? Should I in fact have three partitions on /dev/xvdf: the "BIOS Boot Partition", a "traditional" (ext4-formatted) partition for /boot, and an LVM-managed partition for everything else?
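One thing I intend to check, assuming the source AMI uses dracut (which Amazon Linux does), is whether the initramfs even contains the LVM tooling needed to activate the volume group at boot:
# Run inside the chroot; the kernel version may need to be spelled out
# explicitly rather than taken from uname -r on the builder instance.
lsinitrd /boot/initramfs-$(uname -r).img | grep -i lvm
dracut --force --add lvm /boot/initramfs-$(uname -r).img $(uname -r)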
Best Answer
The issue turned out to be unrelated to LVM. As clarified in Why does this change render my Block Device unbootable?, the real problem is that by separating / and /boot into two separate partitions, the MBR configuration was no longer correct. I was unable to update the GRUB configuration files to fix this, so ultimately I had to keep / and /boot on the same partition and add my other partitions separately. Not ideal, but it works.
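Roughly, the working layout looks like the sketch below; the exact commands and sizes are illustrative rather than copied from my final template, and the volume has to be large enough to leave room for the second partition:
# Partition 1 keeps / and /boot together, untouched, so the MBR/GRUB setup still works.
# A second partition holds the extra filesystems (sizes are placeholders).
sgdisk --new 2:0:0 --typecode 2:8e00 /dev/xvdf
lvm pvcreate /dev/xvdf2
lvm vgcreate -y main /dev/xvdf2
lvm lvcreate -y -L 1G -n lvtmp  main && /sbin/mkfs.ext4 -m0 /dev/main/lvtmp
lvm lvcreate -y -L 1G -n lvhome main && /sbin/mkfs.ext4 -m0 /dev/main/lvhome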