Unfortunately the stripe cache only applies to RAID5 and 6 - there's no equivalent for RAID 0/1/10.
Performance of your individual drives (as per hdparm) looks fine - they're all performing as expected for drives of that class.
My suggestions:
- Check that AHCI is enabled in the BIOS and that the internally-installed drives aren't using legacy IDE mode. There is a hacked BIOS for the MicroServer that enables AHCI for the eSATA port too (see this link for more info) - it may be worth investigating for the drives in the external enclosure, although they'll still be limited by sitting behind a port multiplier.
- Enable NCQ for all drives and see if that makes a difference (it might, it might not); the sketch after this list shows one way to set it.
- Make sure the filesystem settings are optimised (mount with noatime and nodiratime). You could also disable write barriers, but that may be too risky.
- Check whether you see any benefit from switching the I/O scheduler (noop may help here).
- Adjust the read-ahead buffer for both the md and LVM devices: blockdev --setra <size> /dev/md1, for example (where <size> is in 512-byte sectors). That will only help reads, though.
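To make those concrete, here's a rough sketch of setting each knob from a shell (the device names and values are assumptions - substitute your own disks and volumes):

    # NCQ: set the queue depth per drive (1 disables NCQ, 31 is the usual maximum)
    echo 31 | sudo tee /sys/block/sda/device/queue_depth

    # I/O scheduler: switch a drive to noop
    echo noop | sudo tee /sys/block/sda/queue/scheduler

    # Filesystem: remount with noatime/nodiratime
    sudo mount -o remount,noatime,nodiratime /mnt/data

    # Read-ahead: 4096 sectors x 512 bytes = 2MiB, on both the md and LVM devices
    sudo blockdev --setra 4096 /dev/md1
    sudo blockdev --setra 4096 /dev/mapper/vg0-data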
Two other things that can impact performance are partition alignment and filesystem creation parameters (stride, etc.), but as you're using modern tools, that shouldn't be an issue.
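For reference, the stride maths works like this (the array geometry below is a made-up example - substitute your own chunk size and disk count):

    # Hypothetical 2-disk RAID0, 512KiB chunk, 4KiB filesystem blocks:
    #   stride       = chunk / block size  = 512 / 4  = 128
    #   stripe-width = stride * data disks = 128 * 2  = 256
    sudo mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/md1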
Ok, I found the solution and can answer my own questions.
1) Can I use LVM over RAID1 on a UEFI machine?
Yes, definitely. And it will be able to boot even if one of the two disks fails.
2) How do you do this?
There seems to be a bug in the installer, so just using it as-is results in a failure to boot (you end up at a grub shell).
Here is a working procedure:
1) manually create the following partitions on each of the two disks:
- a 512MB partition with type UEFI at the beginning of the disk
- a partition of type RAID after that
2) create your RAID1 array with the two RAID partitions, then create your LVM volume group on that array, and your logical volumes (I created one for root, one for home and one for swap) - the sketch after this procedure shows the equivalent commands.
3) let the installation go on, and reboot. FAILURE! You should get a grub shell.
4) it might be possible to boot from the grub shell, but I chose to boot from a rescue USB disk. In rescue mode, I opened a shell on my target root fs (the one on the root LVM logical volume).
5) get the UUID of this target root partition with 'blkid'. Note it down or take a picture with your phone; you'll need it in the next step.
6) mount the EFI system partition ('mount /boot/efi') and edit the grub.cfg file: vi /boot/efi/EFI/ubuntu/grub.cfg
Here, replace the erroneous UUID with the one you got at step 5.
Save.
7) to be able to boot from the second disk, copy the EFI partition to this second disk:
dd if=/dev/sda1 of=/dev/sdb1 (replace sda and sdb with whatever suits your configuration).
8) Reboot. In your UEFI settings screen, set the two EFI partitions as bootable, and set a boot order.
You're done. You can test it: unplug one disk or the other, and it should still work!
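For reference, here's roughly what steps 1 and 2 look like from a shell (the disk names, sizes and volume names are assumptions - the installer's manual partitioner achieves the same thing):

    # Step 1: partition each disk the same way (repeat for /dev/sdb)
    sudo sgdisk -n 1:0:+512M -t 1:ef00 /dev/sda   # EFI System Partition
    sudo sgdisk -n 2:0:0     -t 2:fd00 /dev/sda   # Linux RAID partition

    # Step 2: RAID1 across the RAID partitions, then LVM on top
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    sudo pvcreate /dev/md0
    sudo vgcreate vg0 /dev/md0
    sudo lvcreate -L 30G -n root vg0
    sudo lvcreate -L 4G -n swap vg0
    sudo lvcreate -l 100%FREE -n home vg0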
Disk Utility (sitting in System -> Administration) will give you the serial numbers for all your disks.
Here's what I see (look at the top-right for the serial). You'll notice that this drive is part of an mdadm RAID array. Disk Utility can penetrate the array for raw disk access.
I have 6 of the same model of disk in my PC, so I drew a little diagram showing their position in the case and their serial numbers so I can locate them quickly by serial in an emergency.
The opposite is also true: if a disk dies, I just need to see which disks are still showing up and eliminate them until I know which serial is missing.
Edit: I'm trying to improve my bash-fu, so I wrote this command-line version to give you a list of the disk serial numbers currently in your machine. fdisk may chuck out some errors, but that doesn't taint the list. (And you can crumble it into one line if you need to - I've broken it up for readability.)
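Something along these lines should do it (a sketch only - the exact grep pattern and the use of hdparm here are my assumptions):

    for disk in $(sudo fdisk -l | grep '^Disk /dev' | cut -d' ' -f2 | tr -d ':')
    do
        echo -n "$disk  "
        sudo hdparm -I "$disk" | grep 'Serial Number'
    done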
Edit 2: ls /dev/disk/by-id/ is somewhat easier ;)
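Those names embed the bus, model and serial of each disk as symlinks to the kernel devices (the serial below is made up):

    # Each entry looks something like:
    #   ata-WDC_WD20EARS-00MVWB0_WD-WCAZA1234567 -> ../../sda
    ls -l /dev/disk/by-id/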