You should use a simple partitioning scheme that leaves space for the boot loader. The good old DOS MBR works best for this purpose, and it also guards the disk from being treated as unformatted when attached to a Windows machine. Even the newer GPT format includes a protective MBR for compatibility.
The GRUB2 bootloader is capable of booting from RAID, LVM, and combinations of the two, but it needs a place to install itself, which typically consists of a code block in the MBR plus more code in the gap between the MBR and the first partition. Current versions of fdisk and similar tools already leave a large enough gap (the first partition typically starts on a megabyte boundary).
If you're using just one disk, you can format it with a DOS disklabel (the o command in fdisk) and create one partition spanning the whole disk (the n command in fdisk, p for primary, then confirm the default start and end). Then format the partition as an LVM physical volume, and the rest you know already.
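As a rough sketch of that sequence (assuming the disk is /dev/sdb and a volume group named vgdata; both names are placeholders, adjust to your system):

sudo fdisk /dev/sdb       # o = new DOS disklabel, n/p = one primary partition, w = write and exit
sudo pvcreate /dev/sdb1   # format the new partition as an LVM physical volume
sudo vgcreate vgdata /dev/sdb1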
When multiple disks are used, the partitions are typically formatted for RAID instead of LVM: a RAID array is assembled from them and formatted as an LVM physical volume, and the rest is the same again.
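A minimal sketch of the multi-disk variant, assuming two disks partitioned as /dev/sdb1 and /dev/sdc1 (placeholder names) and mdadm for the RAID layer:

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # assemble a RAID 1 array
sudo pvcreate /dev/md0                                                        # use the array as an LVM physical volume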
Does the LV become mountable if you run sudo vgscan and sudo vgchange -ay? If those commands result in errors, you likely have a different problem and should add those error messages to your original post.
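For reference, the full test sequence might look like this (vgNAME and lvNAME are placeholders for your actual volume group and logical volume names):

sudo vgscan                                 # scan all block devices for LVM volume groups
sudo vgchange -ay                           # activate every volume group that was found
sudo mount /dev/mapper/vgNAME-lvNAME /mnt   # then try mounting the LV manually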
But if the LV becomes ready for mounting after those commands, read on...
The LVM logical volume pathname (e.g. /dev/mapper/vgNAME-lvNAME) in /etc/fstab alone won't give the system a clue that this particular filesystem cannot be mounted until networking and iSCSI have been activated.
Without that clue, the system will assume that filesystem is on a local disk and will attempt to mount it as early as possible, normally before networking has been activated, which will obviously fail with an iSCSI LUN. So you'll need to supply that clue somehow.
One way would be to add _netdev to the mount options for that filesystem in /etc/fstab. From this Ubuntu help page it appears to be supported on Ubuntu. This might actually also trigger a vgscan or similar detection of new LVM PVs (plus possibly other helpful things) just before the attempt to mount any filesystems marked with _netdev.
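For example, the /etc/fstab entry might look like this (the filesystem type and mount point here are placeholders):

/dev/mapper/vgNAME-lvNAME  /mnt/data  ext4  defaults,_netdev  0  2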
Another way would be to use the systemd-specific mount option x-systemd.requires=<iSCSI initiator unit name>. That should achieve the same thing, by postponing any attempts to mount that filesystem until the iSCSI initiator has been successfully activated.
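For example, assuming the initiator unit is iscsid.service (check the actual unit name on your system; on Ubuntu it may be open-iscsi.service):

/dev/mapper/vgNAME-lvNAME  /mnt/data  ext4  defaults,x-systemd.requires=iscsid.service  0  2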
When the iSCSI initiator activates, it will automatically make any configured LUNs available, and as they become available, LVM should auto-activate any VGs on them. So, once you get the mount attempt postponed, that should be enough.
The lack of PARTUUID is a clue that the disk/LUN does not have a GPT partition table. Since /dev/sdc is listed as TYPE="LVM2_member", it actually does not have any partition table at all. In theory, this should cause no problems for Linux, but I haven't personally tested an Ubuntu 18.04 system with iSCSI storage, so I cannot be absolutely certain.
The problem with disks/LUNs with no partition table is that other operating systems won't recognize the Linux LVM header as a sign that the disk is in use, and will happily overwrite it with minimal prompting. If your iSCSI storage administrator has accidentally presented the storage LUN corresponding to your /dev/sdc to another system, this might have happened.
You should find the LVM configuration backup file in the /etc/lvm/backup directory that corresponds to your missing VG, and read it to find the expected UUID of the missing PV. If it matches what blkid reports, ask your storage administrator to double-check his/her recent work for mistakes like the one described above. If it turns out the PV has been overwritten by some other system, any remaining data on the LUN is likely to be more or less corrupted, and it would be best to restore it from backup... once you get a new, guaranteed-unconflicted LUN from your iSCSI admin.
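A quick way to compare the two UUIDs (the backup filename is a placeholder; it normally matches the VG name):

sudo blkid /dev/sdc                        # the UUID of the PV as it is right now
grep -A 3 'pv0 {' /etc/lvm/backup/vgNAME   # the UUID the VG expects (the id = "..." line)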
If it turns out the actual UUID of /dev/sdc is different from the expected one, someone might have accidentally run pvcreate -f /dev/sdc somehow. If that's the only thing that has been done, it's relatively easy to fix. (NOTE: check the REPLACING PHYSICAL VOLUMES section of man vgcfgrestore for updated instructions; your LVM tools may be newer than mine.) First restore the UUID:
pvcreate --restorefile /etc/lvm/backup/<your VG backup file> --uuid <the old UUID of /dev/sdc from the backup file> /dev/sdc
Then restore the VG configuration:
vgcfgrestore --file /etc/lvm/backup/<your VG backup file> <name of the missing VG>
After this, it should be possible to activate the VG, and if no other damage has been done, mount the filesystem after that.
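That is (using placeholder names; substitute your actual VG and LV names):

sudo vgchange -ay vgNAME                    # activate the restored VG
sudo mount /dev/mapper/vgNAME-lvNAME /mnt   # then mount the filesystem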
Can you add the links you are referring to? Just mirroring does not need a log. A log (on the same or an extra device) is usually involved when you use a journaling filesystem, whether or not you use mirroring on the layer below (i.e. the block layer).
Update: OK, with the links things are clearer now. LVM mirroring seems to be quite different from Linux md (RAID 1) mirroring.
According to the lvcreate man page, the default mirror log type is disk, which is persistent and usually lives on a separate device from the data being mirrored, while a core (in-memory) log means the mirror is regenerated by copying from the first device each time the machine reboots.
Thus, with a memory-based log you get a significant performance hit at every startup, and with a disk-based log you get a performance hit when the log physical volume is on the same hardware disk as the data.
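To illustrate the two options (vg0, lvmirror, and the size are placeholders; run one or the other, not both):

sudo lvcreate -m 1 --mirrorlog core -L 10G -n lvmirror vg0   # in-memory log, resynced at every boot
sudo lvcreate -m 1 --mirrorlog disk -L 10G -n lvmirror vg0   # persistent on-disk log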
Googling around, mirroring using Linux mdadm seems to be the better approach at the moment. (You can use the md device as a physical volume for an LVM setup, as sketched below.) First, it does not need an extra log (and does not do an expensive resync at every startup). Second, LVM mirrors do not seem to support parallel reading, i.e. md mirrors should have better read performance:
https://serverfault.com/questions/97845/lvm-mirroring-vs-raid1
https://serverfault.com/questions/126851/linux-lvm-mirror-vs-md-mirror
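A minimal sketch of the md-under-LVM stack (device and VG/LV names are placeholders):

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # md RAID 1 mirror
sudo pvcreate /dev/md0                                                        # md device as LVM PV
sudo vgcreate vg0 /dev/md0
sudo lvcreate -L 10G -n lv0 vg0                                               # plain LV, no mirror log needed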