Ok, I found the solution and can answer my own questions.
1) Can I use LVM over RAID1 on a UEFI machine?
Yes, definitely. And it will be able to boot even if one of the two disks fails.
2) How do I do this?
There seems to be a bug in the installer, so just using it as-is results in a failure to boot (you end up in a grub shell).
Here is a working procedure:
1) Manually create the following partitions on each of the two disks:
- a 512MB partition of type UEFI at the beginning of the disk
- a partition of type RAID after it
2) Create your RAID 1 array with the two RAID partitions, then create your LVM volume group on that array and your logical volumes (I created one for root, one for home and one for swap). A command-line sketch of this step is given at the end of this answer.
3) Let the install run to the end, then reboot. FAILURE! You should get a grub shell.
4) It might be possible to boot from the grub shell, but I chose to boot from a rescue USB disk instead. In rescue mode, I opened a shell on my target root fs (the one on the root LVM logical volume).
5) Get the UUID of this target root filesystem with 'blkid'. Note it down or take a picture with your phone; you'll need it in the next step.
6) Mount the EFI system partition ('mount /boot/efi') and edit the grub.cfg file: vi /boot/efi/EFI/ubuntu/grub.cfg
Here, replace the erroneous UUID with the one you got in step 5 (an example of what this file looks like is also given at the end of this answer).
Save.
7) To be able to boot from the second disk, copy the EFI partition to that disk:
dd if=/dev/sda1 of=/dev/sdb1
(replace sda and sdb with whatever suits your configuration).
8) Reboot. In your UEFI settings screen, set the two EFI partitions as bootable, and set a boot order.
You're done. You can test it: unplug one disk or the other, and it should still boot!
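As mentioned in step 2, here is a minimal command-line sketch of building the RAID + LVM stack by hand rather than through the installer (the device names /dev/sda2 and /dev/sdb2, the volume group name vg0 and the sizes are assumptions; adjust them to your own layout):
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # RAID1 over the two RAID partitions
sudo pvcreate /dev/md0                                                        # LVM physical volume on the array
sudo vgcreate vg0 /dev/md0                                                    # volume group
sudo lvcreate -L 30G -n root vg0                                              # logical volumes for root, swap and home
sudo lvcreate -L 8G -n swap vg0
sudo lvcreate -l 100%FREE -n home vg0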
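And regarding step 6: on an Ubuntu UEFI install, /boot/efi/EFI/ubuntu/grub.cfg is normally just a small stub that points grub at the real configuration. It looks roughly like this (the UUID below is a placeholder, and grub-install may also append a device hint after 'root'); the UUID on the first line is the one you need to correct:
search.fs_uuid 01234567-89ab-cdef-0123-456789abcdef root
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg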
As it turns out, the true underlying issue was the one I added at the end of my question: since the machine doesn't boot via UEFI, grub requires a dedicated partition (2MB is reportedly more than enough) with the "bios_grub" flag at the beginning of each drive (at least each drive you plan on being able to boot from if the array ever becomes degraded). One can set those up in the live installer by choosing to use those partitions for bios boot.
(The reason this fix didn't initially work for me was that I created the partitions using another live-cd prior to running the Ubuntu Server installer, which messed things up a little.)
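For anyone setting those partitions up from a shell instead of the installer, here is a sketch using sgdisk (the device name and the partition number 3 are assumptions; repeat for each drive you want to remain bootable):
sudo sgdisk --new=3:0:+2M --typecode=3:ef02 /dev/sdX   # small bios_grub partition in the first free gap
sudo grub-install /dev/sdX                             # reinstall grub so it uses it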
I'd like to note @kyodake's reminder regarding the necessity of installing grub to the MBRs of the rest of the disks in your RAID array (I find that manually running
sudo grub-install /dev/sdX
is fastest). Finally, for completeness' sake, I'll stress that the reason for the separate /boot partition is that this way one can encrypt the rest of the filesystem (as outlined in the guide I linked to, and summarized in my own partitioning scheme). If one isn't inclined to implement full-volume encryption, there's really not a good reason to create a separate /boot partition.
Best Answer
With some help from How to install Ubuntu server with UEFI and RAID1 + LVM, RAID set up in Ubuntu 18.04, and RAID support in Ubuntu 18.04 Desktop installer? and How to get rid of the "scanning for btrfs file systems" at start-up?, I managed to put together a working HOWTO using Linux commands only.
In short
Install Ubuntu Server with manual partitioning (an EFI System Partition plus a RAID partition on each disk, RAID1 across the RAID partitions, LVM on top), clone the EFI partition to the second disk and add it to the boot chain, install the desktop packages, and remove the btrfs boot scripts.
In detail
1. Download the installer
Download the Ubuntu Server installer.
2. Install with manual partitioning
At the Partition disks step, select Manual.
If the disks already carry an old configuration, remove it first:
- Select Configure the Logical Volume Manager. Choose Delete logical volume until all volumes have been deleted, then Delete volume group until all volume groups have been deleted.
- Select Configure software RAID. Choose Delete MD device until all MD devices have been deleted.
- Select Delete the partition for every remaining partition on both disks.
Then build the new layout:
- On each disk, create a partition at the beginning of the disk to be used as EFI System Partition.
- On each disk, create a second partition spanning the rest of the disk to be used as Physical Volume for RAID.
- Select Configure software RAID, then Create MD device, type RAID1, 2 active disks, 0 spare disks, and select the /dev/sda2 and /dev/sdb2 devices.
- Select Configure the Logical Volume Manager. Create a volume group vg on the /dev/md0 device, then create logical volumes: swap at 16G, root at 35G, tmp at 10G, var at 5G and home at 200G.
- For the swap partition, select Use as: swap. For the others, select Use as: ext4 with the proper mount points (/, /tmp, /var, /home, respectively).
- Select Finish partitioning and write changes to disk.
If you are re-installing on a drive that earlier had a RAID configuration, the RAID creation step above might fail and you never get an md device. In that case, you may have to create an Ubuntu Live USB stick, boot into that, and run gparted to clear all your partition tables before you re-start this HOWTO.
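If you prefer a shell over gparted for that cleanup, something along these lines from the live session should achieve the same (the device and partition names are assumptions; adjust them to whatever the old install used):
sudo mdadm --stop /dev/md0                           # stop any auto-assembled array (may be md127 on a live system)
sudo mdadm --zero-superblock /dev/sda2 /dev/sdb2     # wipe old RAID metadata from the member partitions
sudo wipefs -a /dev/sda /dev/sdb                     # clear remaining signatures and the partition tables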
3. Inspect system
Check which EFI partition has been mounted. Most likely /dev/sda1:
mount | grep boot
Check RAID status. Most likely it is synchronizing.
cat /proc/mdstat
4. Clone EFI partition
The EFI bootloader should have been installed on /dev/sda1. As that partition is not mirrored via the RAID system, we need to clone it to /dev/sdb1.
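For example (double-check the device names on your system before running it):
sudo dd if=/dev/sda1 of=/dev/sdb1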
5. Insert second drive into boot chain
This step may not be necessary, since if either drive dies, the system should boot from the (identical) EFI partitions. However, it seems prudent to ensure that we can boot from either disk.
- Run
efibootmgr -v
and notice the file name for the ubuntu boot entry. On my install it was \EFI\ubuntu\shimx64.efi.
- Run
sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l \EFI\ubuntu\shimx64.efi
Depending on your shell, you might have to escape the backslashes.
- Check with
efibootmgr -v
that you have the same file name for the ubuntu and ubuntu2 boot items and that they are the first two in the boot order.
7. Wait
If you want to try to physically remove or disable any drive to test your installation, you must first wait until the RAID synchronization has finished! Monitor the progress with
cat /proc/mdstat
However, you may perform step 8 below while waiting.
8. Remove BTRFS
If one drive fails (after the synchronization is complete), the system will still boot. However, the boot sequence will spend a lot of time looking for btrfs file systems. To remove that unnecessary wait, run
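a command along these lines (the btrfs package names may vary between releases, so review the list of removals apt prints before confirming):
sudo apt purge btrfs-progs btrfs-tools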
This should remove btrfs-progs, btrfs-tools and ubuntu-server. The last package is just a meta package, so if no more packages are listed for removal, you should be OK.
9. Install the desktop version
Run
sudo apt install ubuntu-desktop
to install the desktop version. After that, the synchronization is probably done and your system is configured and should survive a disk failure!
10. Update EFI partition after grub-efi-amd64 update
When the package grub-efi-amd64 is updated, the files on the EFI partition (mounted at /boot/efi) may change. In that case, the update must be cloned manually to the mirror partition. Luckily, you should get a warning from the update manager that grub-efi-amd64 is about to be updated, so you don't have to check after every update.
10.1 Find out clone source, quick way
If you haven't rebooted after the update, use the same command as in step 3,
mount | grep boot
to find out which EFI partition is mounted. That partition, typically /dev/sdb1, should be used as the clone source.
10.2 Find out clone source, paranoid way
Create mount points and mount both partitions:
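For example (the mount point names are my own choice; any two empty directories will do):
sudo mkdir -p /mnt/esp-a /mnt/esp-b
sudo mount /dev/sda1 /mnt/esp-a
sudo mount /dev/sdb1 /mnt/esp-b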
Find timestamp of newest file in each tree
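One way to do it, continuing the sketch above (find's %T@ prints the modification time as seconds since the epoch; cut strips the fractional part so the values can be compared as integers):
NEWEST_A=$(sudo find /mnt/esp-a -type f -printf '%T@\n' | sort -n | tail -1 | cut -d. -f1)
NEWEST_B=$(sudo find /mnt/esp-b -type f -printf '%T@\n' | sort -n | tail -1 | cut -d. -f1)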
Compare timestamps
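Still following the sketch above:
if [ "$NEWEST_A" -gt "$NEWEST_B" ]; then echo "/dev/sda1 is newest"; else echo "/dev/sdb1 is newest"; fi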
This should print /dev/sdb1 is newest (most likely) or /dev/sda1 is newest. That partition should be used as the clone source.
Unmount the partitions before the cloning to avoid cache/partition inconsistency.
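With the mount points from the sketch above:
sudo umount /mnt/esp-a /mnt/esp-b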
10.3 Clone
If /dev/sdb1 was the clone source (using the same dd approach as in step 4):
sudo dd if=/dev/sdb1 of=/dev/sda1
If /dev/sda1 was the clone source:
sudo dd if=/dev/sda1 of=/dev/sdb1
Done!
11. Virtual machine gotchas
If you want to try this out in a virtual machine first, there are some caveats. Apparently, the NVRAM that holds the UEFI information is remembered between reboots, but not between shutdown-restart cycles. In that case, you may end up at the UEFI Shell console. The following commands should boot you into your machine from /dev/sda1 (use FS1: for /dev/sdb1):
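A sketch, assuming the \EFI\ubuntu\shimx64.efi file name found in step 5 (use whatever file name your own boot entry pointed at):
FS0:
\EFI\ubuntu\shimx64.efi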
The first solution in the top answer of UEFI boot in virtualbox - Ubuntu 12.04 might also be helpful.