First, it's not 100% clear that you booted the installer in EFI mode. If it booted in BIOS mode, it would try to install grub-pc (for BIOS-based systems), which won't work if your firmware is set to boot the hard disk in EFI mode. I doubt that this is the problem, but I thought I'd toss it out as a possibility. You can check your boot mode by dropping to a shell and looking for the /sys/firmware/efi directory; if it's present, you've booted in EFI mode. If not, you've probably booted in BIOS mode, although that's not 100% certain.
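For instance, a minimal check from the installer's shell:

```bash
# This directory exists only when the system was booted through EFI.
if [ -d /sys/firmware/efi ]; then
    echo "Booted in EFI mode"
else
    echo "Probably booted in BIOS mode"
fi
```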
In any event, at this point your best bet is to do a manual installation of an EFI boot loader. IMHO, GRUB 2 (which is Ubuntu's default) is the worst possible choice; it's flaky and unreliable on EFI systems, in my experience. The easiest to get working is likely to be either ELILO or Fedora's patched GRUB Legacy. If you use a 3.3.0 or later kernel, it includes its own built-in EFI boot loader (the EFI stub loader), which is quite reliable and can be very easy to use when paired with rEFInd. My Web page on EFI boot loaders describes all the options and includes installation instructions; detailing them all here would be impractical.
With some help from How to install Ubuntu server with UEFI and RAID1 + LVM, RAID set up in Ubuntu 18.04, RAID support in Ubuntu 18.04 Desktop installer?, and How to get rid of the "scanning for btrfs file systems" at start-up?, I managed to put together a working HOWTO using Linux commands only.
In short
- Download the alternate server installer.
- Install with manual partitioning, EFI + RAID and LVM on RAID partition.
- Clone EFI partition from installed partition to the other drive.
- Install second EFI partition into UEFI boot chain.
- To avoid a lengthy wait during boot in case a drive breaks, remove the `btrfs` boot scripts.
In detail
1. Download the installer
2. Install with manual partitioning
- During install, at the `Partition disks` step, select `Manual`.
- If the disks contain any partitions, remove them.
- If any logical volumes are present on your drives, select `Configure the Logical Volume Manager`.
- Choose `Delete logical volume` until all volumes have been deleted.
- Choose `Delete volume group` until all volume groups have been deleted.
- If any RAID device is present, select `Configure software RAID`.
- Choose `Delete MD device` until all MD devices have been deleted.
- Delete every partition on the physical drives by choosing them and selecting `Delete the partition`.
- Create physical partitions
- On each drive, create a 512MB partition (I've seen others use 128MB) at the beginning of the disk, with Use as: `EFI System Partition`.
- On each drive, create a second partition with `max` size, with Use as: `Physical Volume for RAID`.
- Set up RAID
- Select `Configure software RAID`.
- Select `Create MD device`, type RAID1, 2 active disks, 0 spare disks, and select the `/dev/sda2` and `/dev/sdb2` devices.
- Set up LVM
- Select `Configure the Logical Volume Manager`.
- Create volume group `vg` on the `/dev/md0` device.
- Create logical volumes, e.g. `swap` at 16G, `root` at 35G, `tmp` at 10G, `var` at 5G, `home` at 200G.
- Set up how to use the logical partitions
- For the `swap` partition, select Use as: `swap`.
- For the other partitions, select Use as: `ext4` with the proper mount points (`/`, `/tmp`, `/var`, `/home`, respectively).
- Select `Finish partitioning and write changes to disk`.
- Allow the installation program to finish and reboot.
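Once the system is up, `lsblk` should show roughly the following layout (a sketch; the device names, sizes, and the VG name `vg` follow the example values above):

```bash
# Expected layout after "Finish partitioning and write changes to disk":
#
#   sda1 / sdb1   512M   EFI System Partition (one per drive, not mirrored)
#   sda2 / sdb2   rest   RAID members forming md0 (RAID1)
#   md0           ->     LVM physical volume for volume group "vg"
#   vg            ->     logical volumes swap, root, tmp, var, home
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT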
If you are re-installing on a drive that earlier had a RAID configuration, the RAID creation step above might fail and you never get an `md` device. In that case, you may have to create an Ubuntu Live USB stick, boot into it, and run `gparted` to clear all partition tables before you restart this HOWTO.
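Alternatively, the stale RAID metadata can be cleared from the live system directly; a sketch, assuming the old RAID members were `/dev/sda2` and `/dev/sdb2` (destructive, so double-check the device names first):

```bash
# Stop any half-assembled arrays, then wipe the stale RAID metadata.
sudo mdadm --stop --scan
sudo mdadm --zero-superblock /dev/sda2 /dev/sdb2
# Wipe all remaining filesystem/RAID/partition-table signatures.
sudo wipefs -a /dev/sda /dev/sdb
```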
3. Inspect system
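The original step doesn't spell out commands; a few checks that should confirm the layout created in step 2 (a sketch, using the names from above):

```bash
cat /proc/mdstat               # both RAID members present, state [UU] once synced
sudo mdadm --detail /dev/md0   # detailed array status
sudo vgs && sudo lvs           # the "vg" volume group and its logical volumes
df -h /boot/efi                # the mounted EFI System Partition
```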
4. Clone EFI partition
The EFI bootloader should have been installed on `/dev/sda1`. As that partition is not mirrored via the RAID system, we need to clone it.
sudo dd if=/dev/sda1 of=/dev/sdb1
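To be confident the copy succeeded, you can compare the two partitions afterwards:

```bash
# cmp is silent (and returns 0) when both partitions are byte-identical.
sudo cmp /dev/sda1 /dev/sdb1 && echo "EFI partitions are identical"
```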
5. Insert second drive into boot chain
This step may not be necessary: if either drive dies, the system should boot from the (identical) EFI partitions. However, it seems prudent to ensure that we can boot from either disk.
- Run `efibootmgr -v` and note the file name for the `ubuntu` boot entry. On my install it was `\EFI\ubuntu\shimx64.efi`.
- Run `sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'`. The path is quoted here so the shell doesn't eat the backslashes; depending on your shell, you could escape each backslash instead.
- Verify with `efibootmgr -v` that you have the same file name for the `ubuntu` and `ubuntu2` boot entries and that they are the first two in the boot order.
- Now the system should boot even if either of the drives fails!
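If the two entries don't end up first, the boot order can be set explicitly; a sketch, where the four-digit entry numbers are hypothetical and must be read off your own `efibootmgr -v` output:

```bash
efibootmgr -v                 # note the Boot#### numbers of "ubuntu" and "ubuntu2"
sudo efibootmgr -o 0001,0003  # hypothetical numbers; put both entries first
```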
7. Wait
If you want to physically remove or disable a drive to test your installation, you must first wait until the RAID synchronization has finished! Monitor the progress with `cat /proc/mdstat`.
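For instance, to refresh the status every few seconds until the resync line disappears:

```bash
watch -n 10 cat /proc/mdstat   # press Ctrl+C once the resync has finished
```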
However, you may perform step 8 below while waiting.
8. Remove BTRFS
If one drive fails (after the synchronization is complete), the system will still boot. However, the boot sequence will spend a lot of time looking for btrfs file systems. To remove that unnecessary wait, run
sudo apt-get purge btrfs-progs
This should remove `btrfs-progs`, `btrfs-tools`, and `ubuntu-server`. The last package is just a meta package, so if no more packages are listed for removal, you should be OK.
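To confirm nothing btrfs-related is left installed:

```bash
# Purged packages should no longer be listed with status "ii".
dpkg -l '*btrfs*'
```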
9. Install the desktop version
Run `sudo apt install ubuntu-desktop` to install the desktop version. After that, the synchronization is probably done, and your system is configured and should survive a disk failure!
10. Update EFI partition after grub-efi-amd64 update
When the package `grub-efi-amd64` is updated, the files on the EFI partition (mounted at `/boot/efi`) may change. In that case, the update must be cloned manually to the mirror partition. Luckily, you should get a warning from the update manager that `grub-efi-amd64` is about to be updated, so you don't have to check after every update.
10.1 Find out clone source, quick way
If you haven't rebooted after the update, use `mount | grep boot` to find out which EFI partition is mounted. That partition, typically `/dev/sdb1`, should be used as the clone source.
10.2 Find out clone source, paranoid way
Create mount points and mount both partitions:
sudo mkdir /tmp/sda1 /tmp/sdb1
sudo mount /dev/sda1 /tmp/sda1
sudo mount /dev/sdb1 /tmp/sdb1
Find the timestamp of the newest file in each tree:
sudo find /tmp/sda1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sda1
sudo find /tmp/sdb1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sdb1
Compare the timestamps:
cat /tmp/newest.sd* | sort | tail -n 1 | perl -ne 'm,/tmp/(sd[ab]1)/, && print "/dev/$1 is newest.\n"'
This should print `/dev/sdb1 is newest` (most likely) or `/dev/sda1 is newest`. That partition should be used as the clone source.
Unmount the partitions before cloning to avoid cache/partition inconsistency:
sudo umount /tmp/sda1 /tmp/sdb1
10.3 Clone
If `/dev/sdb1` was the clone source:
sudo dd if=/dev/sdb1 of=/dev/sda1
If `/dev/sda1` was the clone source:
sudo dd if=/dev/sda1 of=/dev/sdb1
Done!
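If you do this often, steps 10.1-10.3 can be wrapped in a small script; a sketch under the same assumptions as above (the two ESPs are `/dev/sda1` and `/dev/sdb1`, and the mounted one holds the fresh files):

```bash
#!/bin/bash
# Sketch: clone the currently mounted (freshly updated) ESP onto the other one.
# Assumes the EFI System Partitions are /dev/sda1 and /dev/sdb1; run as root.
set -e
src=$(findmnt -n -o SOURCE /boot/efi)   # e.g. /dev/sda1
case "$src" in
    /dev/sda1) dst=/dev/sdb1 ;;
    /dev/sdb1) dst=/dev/sda1 ;;
    *) echo "Unexpected ESP source: $src" >&2; exit 1 ;;
esac
umount /boot/efi      # never clone a mounted filesystem
dd if="$src" of="$dst"
mount /boot/efi
echo "Cloned $src -> $dst"
```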
11. Virtual machine gotchas
If you want to try this out in a virtual machine first, there are some caveats: apparently, the NVRAM that holds the UEFI information is remembered between reboots, but not between shutdown-restart cycles. In that case, you may end up at the UEFI Shell console. The following commands should boot you into your machine from `/dev/sda1` (use `FS1:` for `/dev/sdb1`):
FS0:
\EFI\ubuntu\grubx64.efi
The first solution in the top answer of UEFI boot in virtualbox - Ubuntu 12.04 might also be helpful.
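One common workaround (an assumption on my part, not from the original HOWTO) is to drop a `startup.nsh` script in the root of the EFI partition; the UEFI Shell executes it automatically, so the VM boots even when the NVRAM entries are lost:

```bash
# Writes a UEFI Shell startup script to the ESP; single quotes keep
# the backslashes intact.
echo '\EFI\ubuntu\grubx64.efi' | sudo tee /boot/efi/startup.nsh
```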
Best Answer
Ok, I found the solution and can answer my own questions.
1) Can I use LVM over RAID1 on a UEFI machine?
Yes, definitely. And it will be able to boot even if one of the two disks fails.
2) How to do this?
There seems to be a bug in the installer, so just using the installer results in a failure to boot (a grub shell).
Here is a working procedure:
1) Manually create the following partitions on each of the two disks:
- a 512MB partition of type UEFI at the beginning of the disk
- a partition of type RAID after that
2) Create your RAID1 array from the two RAID partitions, then create your LVM volume group on that array, and then your logical volumes (I created one for root, one for home, and one for swap).
3) Let the install go on, and reboot. FAILURE! You should get a grub shell.
4) It might be possible to boot from the grub shell, but I chose to boot from a rescue USB disk. In rescue mode, I opened a shell on my target root fs (that is, the one on the root LVM logical volume).
5) Get the UUID of the target root partition with `blkid`. Note it down or take a picture with your phone; you'll need it in the next step.
6) Mount the EFI System Partition (`mount /boot/efi`) and edit the grub.cfg file: `vi /boot/efi/EFI/ubuntu/grub.cfg`. There, replace the erroneous UUID with the one you got in step 5. Save.
7) To be able to boot from the second disk, copy the EFI partition to that disk: `dd if=/dev/sda1 of=/dev/sdb1` (swap sda and sdb to match your configuration).
8) Reboot. In your UEFI settings screen, set the two EFI partitions as bootable, and set a boot order.
You're done. You can test it: unplug one or the other of the disks, and it should still work!
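For reference, steps 5 and 6 amount to something like the following from the rescue shell. The logical volume name `vg-root` and the `OLD_UUID`/`NEW_UUID` placeholders are assumptions; read the real values from `blkid` and from grub.cfg itself before editing:

```bash
blkid /dev/mapper/vg-root                  # UUID of the root logical volume
mount /boot/efi
grep uuid /boot/efi/EFI/ubuntu/grub.cfg    # the UUID grub currently searches for
# Replace the stale UUID in place (substitute the real values first).
sed -i 's/OLD_UUID/NEW_UUID/' /boot/efi/EFI/ubuntu/grub.cfg
```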