I have a machine with UEFI BIOS. I want to install Ubuntu 20.04 desktop with LVM on top of RAID 1, so my system will continue to work even if one of the drives fails. I haven't found a HOWTO for that. The 20.04 desktop installer supports LVM but not RAID. The answer to this question describes the process for 18.04. However, 20.04 does not provide an alternate server installer. The answer to this question and this question describe RAID but not LVM nor UEFI. Does anyone have a process that works for 20.04 with LVM on top of RAID 1 for a UEFI machine?
System Installation – Install Ubuntu 20.04 with RAID 1 and LVM on UEFI BIOS
Tags: 20.04, lvm, raid, system-installation, uefi
Related Solutions
With some help from How to install Ubuntu server with UEFI and RAID1 + LVM, RAID set up in Ubuntu 18.04, and RAID support in Ubuntu 18.04 Desktop installer? and How to get rid of the "scanning for btrfs file systems" at start-up?, I managed to put together a working HOWTO using Linux commands only.
In short
- Download the alternate server installer.
- Install with manual partitioning, EFI + RAID and LVM on RAID partition.
- Clone EFI partition from installed partition to the other drive.
- Install second EFI partition into UEFI boot chain.
- To avoid a lengthy wait during boot in case a drive breaks, remove the `btrfs` boot scripts.
In detail
1. Download the installer
- Download the alternate server installer from http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/
- Create a bootable CD or USB and boot the new machine from it.
- Select `Install Ubuntu Server`.
2. Install with manual partitioning
- During install, at the `Partition disks` step, select `Manual`.
- If the disks contain any partitions, remove them.
  - If any logical volumes are present on your drives, select `Configure the Logical Volume Manager`.
    - Choose `Delete logical volume` until all volumes have been deleted.
    - Choose `Delete volume group` until all volume groups have been deleted.
  - If any RAID device is present, select `Configure software RAID`.
    - Choose `Delete MD device` until all MD devices have been deleted.
  - Delete every partition on the physical drives by choosing them and selecting `Delete the partition`.
- Create physical partitions
  - On each drive, create a 512MB partition (I've seen others use 128MB) at the beginning of the disk, `Use as: EFI System Partition`.
  - On each drive, create a second partition with 'max' size, `Use as: Physical Volume for RAID`.
- Set up RAID
  - Select `Configure software RAID`.
  - Select `Create MD device`, type `RAID1`, 2 active disks, 0 spare disks, and select the `/dev/sda2` and `/dev/sdb2` devices.
- Set up LVM
  - Select `Configure the Logical Volume Manager`.
  - Create volume group `vg` on the `/dev/md0` device.
  - Create logical volumes, e.g.,
    - `swap` at 16G
    - `root` at 35G
    - `tmp` at 10G
    - `var` at 5G
    - `home` at 200G
- Set up how to use the logical partitions
  - For the `swap` partition, select `Use as: swap`.
  - For the other partitions, select `Use as: ext4` with the proper mount points (`/`, `/tmp`, `/var`, `/home`, respectively).
- Select `Finish partitioning and write changes to disk`.
- Allow the installation program to finish and reboot.
If you are re-installing on a drive that earlier had a RAID configuration, the RAID creation step above might fail and you never get an `md` device. In that case, you may have to create an Ubuntu Live USB stick, boot into it, and run `gparted` to clear all partition tables before restarting this HOWTO.
3. Inspect system
Check which EFI partition has been mounted. Most likely `/dev/sda1`.

```shell
mount | grep boot
```

Check RAID status. Most likely it is synchronizing.

```shell
cat /proc/mdstat
```
4. Clone EFI partition
The EFI bootloader should have been installed on `/dev/sda1`. As that partition is not mirrored via the RAID system, we need to clone it.

```shell
sudo dd if=/dev/sda1 of=/dev/sdb1
```
5. Insert second drive into boot chain
This step may not be necessary, since if either drive dies, the system should boot from the (identical) EFI partitions. However, it seems prudent to ensure that we can boot from either disk.
- Run `efibootmgr -v` and note the file name for the `ubuntu` boot entry. On my install it was `\EFI\ubuntu\shimx64.efi`.
- Run `sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l \EFI\ubuntu\shimx64.efi`. Depending on your shell, you might have to escape the backslashes.
- Verify with `efibootmgr -v` that you have the same file name for the `ubuntu` and `ubuntu2` boot entries and that they are the first two in the boot order.
- Now the system should boot even if either of the drives fails!
7. Wait
If you want to physically remove or disable a drive to test your installation, you must first wait until the RAID synchronization has finished! Monitor the progress with `cat /proc/mdstat`.
However, you may perform step 8 below while waiting.
8. Remove BTRFS
If one drive fails (after the synchronization is complete), the system will still boot. However, the boot sequence will spend a lot of time looking for btrfs file systems. To remove that unnecessary wait, run

```shell
sudo apt-get purge btrfs-progs
```

This should remove `btrfs-progs`, `btrfs-tools`, and `ubuntu-server`. The last package is just a meta package, so if no more packages are listed for removal, you should be OK.
9. Install the desktop version
Run `sudo apt install ubuntu-desktop` to install the desktop version. After that, the synchronization is probably done, your system is configured, and it should survive a disk failure!
10. Update EFI partition after grub-efi-amd64 update
When the package `grub-efi-amd64` is updated, the files on the EFI partition (mounted at `/boot/efi`) may change. In that case, the update must be cloned manually to the mirror partition. Luckily, you should get a warning from the update manager that `grub-efi-amd64` is about to be updated, so you don't have to check after every update.
10.1 Find out clone source, quick way
If you haven't rebooted after the update, use

```shell
mount | grep boot
```

to find out which EFI partition is mounted. That partition, typically `/dev/sdb1`, should be used as the clone source.
10.2 Find out clone source, paranoid way
Create mount points and mount both partitions:

```shell
sudo mkdir /tmp/sda1 /tmp/sdb1
sudo mount /dev/sda1 /tmp/sda1
sudo mount /dev/sdb1 /tmp/sdb1
```

Find the timestamp of the newest file in each tree:

```shell
sudo find /tmp/sda1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sda1
sudo find /tmp/sdb1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sdb1
```

Compare the timestamps:

```shell
cat /tmp/newest.sd* | sort | tail -n 1 | perl -ne 'm,/tmp/(sd[ab]1)/, && print "/dev/$1 is newest.\n"'
```

This should print `/dev/sdb1 is newest.` (most likely) or `/dev/sda1 is newest.`. That partition should be used as the clone source.
Unmount the partitions before cloning to avoid cache/partition inconsistency:

```shell
sudo umount /tmp/sda1 /tmp/sdb1
```
10.3 Clone
If `/dev/sdb1` was the clone source:

```shell
sudo dd if=/dev/sdb1 of=/dev/sda1
```

If `/dev/sda1` was the clone source:

```shell
sudo dd if=/dev/sda1 of=/dev/sdb1
```

Done!
11. Virtual machine gotchas
If you want to try this out in a virtual machine first, there are some caveats: apparently, the NVRAM that holds the UEFI information is remembered between reboots, but not between shutdown-restart cycles. In that case, you may end up at the UEFI Shell console. The following commands should boot you into your machine from `/dev/sda1` (use `FS1:` for `/dev/sdb1`):

```
FS0:
\EFI\ubuntu\grubx64.efi
```

The first solution in the top answer of UEFI boot in virtualbox - Ubuntu 12.04 might also be helpful.
First of all, it's likely safer to create the encrypted volumes in an extended, logical partition if using LVM on it later.
I've tried to format a partition with dm-integrity in Ubuntu 20.04 before opening the installer, and while cryptsetup was able to open it, I could not create a volume group or filesystem on it, because `mkfs.ext4` would fail and `pvcreate` resulted in:

```
Error reading device /dev/mapper/sda5_crypt at 0 length 512.
Error reading device /dev/mapper/sda5_crypt at 0 length 4096.
Device /dev/mapper/sda5_crypt excluded by a filter.
```
The installer also did not know how to handle the partitions and wouldn't let me create any partitions on it.
I did not try this on a RAID device, but I doubt that would make it any better. I also noticed that dm-integrity creates two crypt devices, as seen in `lsblk`:

```
└─sda5 8:6 0 237.3G 0 part
└─sda5_crypt_dif 253:0 0 223.2G 0 crypt
└─sda5_crypt 253:1 0 223.2G 0 crypt
```
The filesystem creation worked fine on a regular LUKS device without integrity, so I assume that might be the issue.
When trying to open the dm-integrity device on a virtual console, even after loading all dm-crypt modules, I got the error:

```
Kernel doesn't support dm-integrity mapping.
```
I searched for the error online and found this blog entry, which deals with a very similar issue: https://kenta.blogspot.com/2019/07/ttvdpsoo-installing-ubuntu-with-luks2.html
The author suggests to:
1. Install encrypted to the extent that the regular installer can do it.
2. Reboot into a live CD.
3. Create an image of the entire encrypted system partitions.
4. Reformat the encrypted partitions with the integrity option.
5. Push the system images to the new partitions and update crypttab and the initramfs.
I haven't tried this and can't comment on whether it works, but I can see that the live system would get the same errors in step 4 while trying to format the new partitions, so it would have to be a system on USB that can somehow format them correctly.
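For reference, reformatting a partition with the integrity option (step 4 above) would look roughly like the sketch below. The device name and integrity algorithm are illustrative assumptions, not taken from the blog post, and the command destroys all data on the partition:

```shell
# Illustrative only: reformat /dev/sda5 as LUKS2 with dm-integrity.
# WARNING: destroys all data on the partition; device name is an example.
sudo cryptsetup luksFormat --type luks2 --integrity hmac-sha256 /dev/sda5
sudo cryptsetup open /dev/sda5 sda5_crypt
```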
The author also mentions at the end that:
> Unfortunately, these instructions do not work for Debian Buster (RC2) (and also probably later versions of Ubuntu), because of recent changes to cryptsetup, in particular `/usr/share/initramfs/hooks/cryptroot`. The first error ("Source mismatch") happens in `print_crypttab_entry`, where `dmsetup info -c -o devnos_used` returns a different major number when dealing with a dm_integrity device.
At the moment, this doesn't seem to be possible unfortunately, unless one can somehow copy and reformat the entire system without any further issues. Please feel free to correct me if I made an error or there is another option.
Best Answer
After some weeks of experimenting and with some help from this link, I have finally found a solution that works. The sequence below was performed with Ubuntu 20.04.2.0 LTS.
In short
- Boot into Ubuntu Live from the installation medium.
- Set up mdadm and LVM manually.
- Run the installer with `Something else` partitioning.
- Add mdadm to the target system via chroot.
- Clone the EFI partition and insert the second disk into the boot chain.
- Reboot.
In detail
1. Download the installer and boot into Ubuntu Live
1.1 Download
Download the Ubuntu 20.04.2.0 LTS desktop installer and create a bootable USB stick or DVD.
1.2 Boot Ubuntu Live
Boot the new machine from the installation medium and select `Try Ubuntu`.
2. Set up mdadm and lvm
In the example below, the disk devices are called `/dev/sda` and `/dev/sdb`. If your disks are called something else, e.g., `/dev/nvme0n1` and `/dev/sdb`, replace the disk names accordingly. You may use `sudo lsblk` to find the names of your disks.
2.0 Install ssh server
If you do not want to type all the commands below, you may want to log in via ssh and cut-and-paste the commands.
Install the ssh server:

```shell
sudo apt install openssh-server
```

Set a password to enable external login:

```shell
passwd
```

If you are testing this inside a virtual machine, you will probably want to forward a suitable port. Select `Settings`, `Network`, `Advanced`, `Port forwarding`, and the plus sign. Enter, e.g., `3022` as the `Host Port` and `22` as the `Guest Port`, and press `OK`.
Now, you should be able to log onto your Ubuntu Live session from an outside computer, either directly or via the forwarded port, using the password you set above.
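For example, from the host (the user name, port, and address are assumptions; `3022` matches the forwarding example above):

```shell
ssh -p 3022 ubuntu@localhost     # via the forwarded port of a VM
ssh ubuntu@<ip-of-live-session>  # or directly over the network
```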
2.1 Create partitions on the physical disks
Zero the partition tables on both drives.
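A sketch of this step using `sgdisk` (the tool choice is an assumption; any partitioning tool works):

```shell
# WARNING: destroys all partition tables and data on both drives
sudo sgdisk --zap-all /dev/sda
sudo sgdisk --zap-all /dev/sdb
```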
Create two partitions on each drive; one for EFI and one for the RAID device.
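A possible layout, again with `sgdisk`; the 512MB EFI size mirrors the 18.04 HOWTO above and is an assumption:

```shell
# partition 1: EFI System Partition (type ef00), 512MB
# partition 2: rest of the disk, Linux RAID (type fd00)
sudo sgdisk -n 1:0:+512M -t 1:ef00 -n 2:0:0 -t 2:fd00 /dev/sda
sudo sgdisk -n 1:0:+512M -t 1:ef00 -n 2:0:0 -t 2:fd00 /dev/sdb
```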
Create a FAT32 system for the EFI partition on the first drive. (Will be cloned to the second drive later.)
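For example, assuming the EFI partition ended up as `/dev/sda1`:

```shell
sudo mkfs.fat -F 32 /dev/sda1
```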
2.2 Install mdadm and create md device
Install `mdadm`.
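In the live session:

```shell
sudo apt install mdadm
```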
Create the md device. Ignore the warning about the metadata since the array will not be used as a boot device.
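A sketch of the creation command, assuming the second partitions `/dev/sda2` and `/dev/sdb2` are the RAID members:

```shell
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
```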
Check the status of the md device.
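As elsewhere in this HOWTO:

```shell
cat /proc/mdstat
```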
In this case, the device is syncing the disks, which is normal and may continue in the background during the process below.
2.3 Partition the md device
Create a single partition `/dev/md0p1` spanning the `/dev/md0` device; its type UUID should identify it as an LVM partition.
2.4 Create LVM devices
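A sketch with `sgdisk`, marking the partition as Linux LVM (type code `8e00`); the tool choice is an assumption:

```shell
sudo sgdisk -n 1:0:0 -t 1:8e00 /dev/md0
```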
Create a physical volume on the md device:

```shell
sudo pvcreate /dev/md0p1
```

Create a volume group on the physical volume:

```shell
sudo vgcreate vg0 /dev/md0p1
```
Create logical volumes (partitions) on the new volume group. The sizes and names below are my choices. You may decide differently.
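An illustrative set of volumes matching the names used by the installer section below (`vg0-root`, `vg0-home`, `vg0-varlib`); the sizes are placeholders:

```shell
sudo lvcreate -L 16G  -n swap   vg0
sudo lvcreate -L 35G  -n root   vg0
sudo lvcreate -L 10G  -n varlib vg0
sudo lvcreate -L 200G -n home   vg0
```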
Now, the partitions are ready for the Ubuntu installer.
3. Run the installer
- Start the installer via the `Install Ubuntu 20.04.2.0 LTS` icon on the desktop of the new computer. (Do NOT start the installer via any ssh connection!)
- At the `Installation type` page, select `Something else`. (This is the important part.) This will present you with a list of partitions called `/dev/mapper/vg0-home`, etc.
- For each `/dev/mapper/vg0-*` partition, select `Use as: Ext4`, check the `Format the partition` box, and choose the appropriate mount point (`/` for `vg0-root`, `/home` for `vg0-home`, etc., `/var/lib` for `vg0-varlib`).
- Select `/dev/sda` for the boot loader.
- Press `Install Now` and continue the installation.
- When the installation has finished, select `Continue Testing`.
- In a terminal, run `lsblk`. You should see that the installer left the installed system root mounted at `/target`. However, `mdadm` is not part of the installed system.
4. Add mdadm to the target system
4.1 chroot into the target system
First, we must mount the unmounted partitions:
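A sketch, assuming the volume names above and that only the root volume is still mounted at `/target` (the exact set of mounts depends on your layout):

```shell
sudo mount /dev/vg0/home   /target/home
sudo mount /dev/vg0/varlib /target/var/lib
sudo mount /dev/sda1       /target/boot/efi
```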
Next, bind some devices to prepare for `chroot`, and then `chroot` into the target system.
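The usual bind mounts and the `chroot` call would look like this (a sketch of the standard technique, not the verbatim original commands):

```shell
sudo mount --bind /dev  /target/dev
sudo mount --bind /proc /target/proc
sudo mount --bind /sys  /target/sys
sudo chroot /target
```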
Now we are inside the target system. Install `mdadm`. If you get a DNS error, fix the name resolution inside the chroot and repeat the install command.
You may ignore any warnings about pipe leaks.
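A sketch of both commands; the nameserver is an arbitrary public resolver and is my assumption:

```shell
# inside the chroot
apt install mdadm

# if apt fails with a DNS error, add a resolver and retry
echo "nameserver 8.8.8.8" > /etc/resolv.conf
apt install mdadm
```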
Inspect the configuration file `/etc/mdadm/mdadm.conf`. It should contain an `ARRAY` line near the end describing `/dev/md0`. Remove the `name=...` part from that line.
Update the module list the kernel should load at boot.
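As a sketch (the UUID and name below are placeholders), the `ARRAY` line changes as in the comments, and appending `raid1` to `/etc/modules` is one way to update the module list:

```shell
# /etc/mdadm/mdadm.conf, before (UUID/name are placeholders):
#   ARRAY /dev/md0 metadata=1.2 name=ubuntu:0 UUID=12345678:9abcdef0:12345678:9abcdef0
# after removing the name=... part:
#   ARRAY /dev/md0 metadata=1.2 UUID=12345678:9abcdef0:12345678:9abcdef0

# load the raid1 module at boot (inside the chroot)
echo raid1 >> /etc/modules
```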
Update the boot ramdisk
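Still inside the chroot:

```shell
update-initramfs -u
```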
Finally, exit from the chroot with `exit`.
5. Clone EFI partition
Now the installed target system is complete. Furthermore, the main partition is protected from a single disk failure via the RAID device. However, the EFI boot partition is not protected via RAID. Instead, we will clone it.
Clone the EFI system partition and compare the two partitions. Note that the FAT UUIDs are identical but the GPT PARTUUIDs are different.
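A sketch of the clone and the UUID comparison (back in the live session, so with `sudo`):

```shell
sudo dd if=/dev/sda1 of=/dev/sdb1
sudo blkid /dev/sda1 /dev/sdb1   # compare the UUID and PARTUUID fields
```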
6. Insert EFI partition of second disk into the boot chain
Finally, we need to insert the EFI partition on the second disk into the boot chain. For this we will use `efibootmgr`.
Run `efibootmgr -v` and study the output. There should be a line for the `ubuntu` boot entry; note the path after `File`. Create a new boot entry on partition 1 of `/dev/sdb` with the same path as the `ubuntu` entry. Then re-run `efibootmgr -v` and verify that there is a second entry called `ubuntu2` with the same path as `ubuntu`. Furthermore, note that the UUID string of each entry is identical to the corresponding PARTUUID string above.
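A sketch of the commands, assuming the `File` path is `\EFI\ubuntu\shimx64.efi` as observed in the 18.04 HOWTO above:

```shell
efibootmgr -v
sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'
efibootmgr -v
```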
7. Reboot
Now we are ready to reboot. Check if the sync process has finished.
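As before:

```shell
cat /proc/mdstat
```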
If the syncing is still in progress, it should be OK to reboot. However, I suggest waiting until the syncing is complete before rebooting.
After rebooting, the system should be ready to use! Furthermore, should either of the disks fail, the system would use the UEFI partition from the healthy disk and boot Ubuntu with the `md0` device in degraded mode.