After some weeks of experimenting and with some help from this link, I have finally found a solution that works. The sequence below was performed with Ubuntu 20.04.2.0 LTS.
In short
- Download and boot into Ubuntu Live for 20.04.
- Set up mdadm and lvm.
- Run the Ubuntu installer, but do not reboot.
- Add mdadm to the target system.
- Clone EFI partition to second drive.
- Install second EFI partition into UEFI boot chain.
- Reboot.
In detail
1. Download the installer and boot into Ubuntu Live
1.1 Download
Download the Ubuntu 20.04 desktop image from https://ubuntu.com/download/desktop and write it to a USB drive or DVD.
1.2 Boot Ubuntu Live
- Boot from the media created in step 1.1.
- Select Try Ubuntu.
- Start a terminal by pressing Ctrl-Alt-T. The commands below should be entered in that terminal.
2. Set up mdadm and lvm
In the example below, the disk devices are called /dev/sda and /dev/sdb. If your disks are called something else, e.g., /dev/nvme0n1 and /dev/sdb, you should replace the disk names accordingly. You may use sudo lsblk to find the names of your disks.
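For example, a compact listing of just the whole disks (the -d flag suppresses partitions; the column selection is merely one convenient choice):
sudo lsblk -d -o NAME,SIZE,TYPE,MODEL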
2.0 Install ssh server
If you do not want to type all the commands below, you may want to log in via ssh and cut-and-paste the commands.
Install
sudo apt install openssh-server
Set a password to enable external login
passwd
If you are testing this inside a virtual machine, you will probably want to forward a suitable port. Select Settings, Network, Advanced, Port forwarding, and the plus sign. Enter, e.g., 3022 as the Host Port and 22 as the Guest Port and press OK.
Now, you should be able to log onto your Ubuntu Live session from an outside computer using
ssh <hostname> -l ubuntu
or
ssh localhost -l ubuntu -p 3022
and the password you set above.
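If the login is refused, it is worth checking that the server is actually running; on Ubuntu the service is called ssh:
systemctl status ssh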
2.1 Create partitions on the physical disks
Zero the partition tables with
sudo sgdisk -Z /dev/sda
sudo sgdisk -Z /dev/sdb
Create two partitions on each drive: one for EFI and one for the RAID device.
sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/sda
sudo sgdisk -n 2:0:0 -t 2:fd00 -c 2:"Linux RAID" /dev/sda
sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/sdb
sudo sgdisk -n 2:0:0 -t 2:fd00 -c 2:"Linux RAID" /dev/sdb
Create a FAT32 filesystem for the EFI partition on the first drive. (It will be cloned to the second drive later.)
sudo mkfs.fat -F 32 /dev/sda1
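Before continuing, you may want to verify the layout; sgdisk -p only prints the partition table and changes nothing:
sudo sgdisk -p /dev/sda
sudo sgdisk -p /dev/sdb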
2.2 Install mdadm and create md device
Install mdadm
sudo apt-get update
sudo apt-get install mdadm
Create the md device. Ignore the warning about the metadata since the array will not be used as a boot device.
sudo mdadm --create /dev/md0 --bitmap=internal --level=1 --raid-disks=2 /dev/sda2 /dev/sdb2
Check the status of the md device.
$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[1] sda2[0]
1047918528 blocks super 1.2 [2/2] [UU]
[>....................] resync = 0.0% (1001728/1047918528) finish=69.6min speed=250432K/sec
bitmap: 8/8 pages [32KB], 65536KB chunk
unused devices: <none>
In this case, the device is syncing the disks, which is normal and may continue in the background during the process below.
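If you want more detail than /proc/mdstat offers, mdadm itself can report the full array state, including the sync progress and the member devices:
sudo mdadm --detail /dev/md0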
2.3 Partition the md device
sudo sgdisk -Z /dev/md0
sudo sgdisk -n 1:0:0 -t 1:E6D6D379-F507-44C2-A23C-238F2A3DF928 -c 1:"Linux LVM" /dev/md0
This creates a single partition /dev/md0p1 on the /dev/md0 device. The UUID string identifies the partition as an LVM partition.
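You can confirm that the kernel has picked up the new partition:
lsblk /dev/md0
If /dev/md0p1 does not show up, forcing a re-read of the partition table should help:
sudo partprobe /dev/md0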
2.4 Create LVM devices
Create a physical volume on the md device
sudo pvcreate /dev/md0p1
Create a volume group on the physical volume
sudo vgcreate vg0 /dev/md0p1
Create logical volumes (partitions) on the new volume group. The sizes and names below are my choices. You may decide differently.
sudo lvcreate -Z y -L 25GB --name root vg0
sudo lvcreate -Z y -L 10GB --name tmp vg0
sudo lvcreate -Z y -L 5GB --name var vg0
sudo lvcreate -Z y -L 10GB --name varlib vg0
sudo lvcreate -Z y -L 200GB --name home vg0
Now, the partitions are ready for the Ubuntu installer.
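As a sanity check, the standard LVM reporting commands should now show one physical volume, the vg0 volume group, and the five logical volumes:
sudo pvs
sudo vgs
sudo lvs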
3. Run the installer
- Double-click on the Install Ubuntu 20.04.2.0 LTS icon on the desktop of the new computer. (Do NOT start the installer via any ssh connection!)
- Answer the language and keyboard questions.
- On the Installation type page, select Something else. (This is the important part.) This will present you with a list of partitions called /dev/mapper/vg0-home, etc.
- Double-click on each partition starting with /dev/mapper/vg0-. Select Use as: Ext4, check the Format the partition box, and choose the appropriate mount point (/ for vg0-root, /home for vg0-home, etc., /var/lib for vg0-varlib).
- Select the first device /dev/sda for the boot loader.
- Press Install Now and continue the installation.
- When the installation is finished, select Continue Testing.
In a terminal, run lsblk. The output should be something like this:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sda 8:0 0 1000G 0 disk
├─sda1 8:1 0 512M 0 part
└─sda2 8:2 0 999.5G 0 part
└─md0 9:0 0 999.4G 0 raid1
└─md0p1 259:0 0 999.4G 0 part
├─vg0-root 253:0 0 25G 0 lvm /target
├─vg0-tmp 253:1 0 10G 0 lvm
├─vg0-var 253:2 0 5G 0 lvm
├─vg0-varlib 253:3 0 10G 0 lvm
└─vg0-home 253:4 0 200G 0 lvm
sdb 8:16 0 1000G 0 disk
├─sdb1 8:17 0 512M 0 part
└─sdb2 8:18 0 999.5G 0 part
└─md0 9:0 0 999.4G 0 raid1
└─md0p1 259:0 0 999.4G 0 part
├─vg0-root 253:0 0 25G 0 lvm /target
├─vg0-tmp 253:1 0 10G 0 lvm
├─vg0-var 253:2 0 5G 0 lvm
├─vg0-varlib 253:3 0 10G 0 lvm
└─vg0-home 253:4 0 200G 0 lvm
...
As you can see, the installer left the installed system root mounted to /target. However, mdadm is not part of the installed system.
4. Add mdadm to the target system
4.1 chroot into the target system
First, we must mount the unmounted partitions:
sudo mount /dev/mapper/vg0-home /target/home
sudo mount /dev/mapper/vg0-tmp /target/tmp
sudo mount /dev/mapper/vg0-var /target/var
sudo mount /dev/mapper/vg0-varlib /target/var/lib
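You can verify that everything landed in the right place; findmnt -R lists /target together with all mounts below it:
findmnt -R /target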
Next, bind some devices to prepare for chroot...
cd /target
sudo mount --bind /dev dev
sudo mount --bind /proc proc
sudo mount --bind /sys sys
...and chroot into the target system.
sudo chroot .
4.2 Update the target system
Now we are inside the target system. Install mdadm
apt install mdadm
If you get a DNS error, do
echo "nameserver 1.1.1.1" >> /etc/resolv.conf
and repeat
apt install mdadm
You may ignore any warnings about pipe leaks.
Inspect the configuration file /etc/mdadm/mdadm.conf. It should contain a line near the end similar to
ARRAY /dev/md/0 metadata=1.2 UUID=7341825d:4fe47c6e:bc81bccc:3ff016b6 name=ubuntu:0
Remove the name=... part to have the line read like
ARRAY /dev/md/0 metadata=1.2 UUID=7341825d:4fe47c6e:bc81bccc:3ff016b6
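If you prefer not to edit the file by hand, the same change can be made with a one-line sed. This is only a sketch; it assumes name= occurs only in the ARRAY line, as in the file generated here, and it keeps a backup first:
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
sed -i 's/ name=[^ ]*//' /etc/mdadm/mdadm.conf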
Update the module list the kernel should load at boot.
echo raid1 >> /etc/modules
Update the boot ramdisk
update-initramfs -u
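To confirm that mdadm actually made it into the new ramdisk, you can list its contents; the exact kernel version in the file name will differ, so a glob is used here:
lsinitramfs /boot/initrd.img-* | grep -i mdadm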
Finally, exit from chroot
exit
5. Clone EFI partition
Now the installed target system is complete. Furthermore, the main partition is protected from a single disk failure via the RAID device. However, the EFI boot partition is not protected via RAID. Instead, we will clone it.
sudo dd if=/dev/sda1 of=/dev/sdb1 bs=4096
Run
$ sudo blkid /dev/sd[ab]1
/dev/sda1: UUID="108A-114D" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="ccc71b88-a8f5-47a1-9fcb-bfc960a07c16"
/dev/sdb1: UUID="108A-114D" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="fd070974-c089-40fb-8f83-ffafe551666b"
Note that the FAT UUIDs are identical but the GPT PARTUUIDs are different.
6. Insert EFI partition of second disk into the boot chain
Finally, we need to insert the EFI partition on the second disk into the boot chain. For this we will use efibootmgr.
sudo apt install efibootmgr
Run
sudo efibootmgr -v
and study the output. There should be a line similar to
Boot0005* ubuntu HD(1,GPT,ccc71b88-a8f5-47a1-9fcb-bfc960a07c16,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)
Note the path after File. Run
sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'
to create a new boot entry on partition 1 of /dev/sdb with the same path as the ubuntu entry. Re-run
sudo efibootmgr -v
and verify that there is a second entry called ubuntu2 with the same path as ubuntu:
Boot0005* ubuntu HD(1,GPT,ccc71b88-a8f5-47a1-9fcb-bfc960a07c16,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)
Boot0006* ubuntu2 HD(1,GPT,fd070974-c089-40fb-8f83-ffafe551666b,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)
Furthermore, note that the UUID string of each entry is identical to the corresponding PARTUUID string above.
7. Reboot
Now we are ready to reboot. Check if the sync process has finished.
$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[1] sda2[0]
1047918528 blocks super 1.2 [2/2] [UU]
bitmap: 1/8 pages [4KB], 65536KB chunk
unused devices: <none>
If the syncing is still in progress, it should be OK to reboot. However, I suggest waiting until the syncing is complete before rebooting.
After rebooting, the system should be ready to use! Furthermore, should either of the disks fail, the system would use the UEFI partition from the healthy disk and boot Ubuntu with the md0 device in degraded mode.
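If you want to convince yourself before trusting the setup, one way is to simulate a disk failure from the running system. This is only a sketch, and it really does degrade the array, so expect a resync after re-adding the member:
sudo mdadm /dev/md0 --fail /dev/sdb2
cat /proc/mdstat
sudo mdadm /dev/md0 --remove /dev/sdb2
sudo mdadm /dev/md0 --add /dev/sdb2
After the --fail, /proc/mdstat should show [U_] instead of [UU]; after the --add, the array resyncs and eventually returns to [UU].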