Finally some progress! dmraid indeed was the culprit, as mdadm's Wikipedia entry suggested. I tried removing the dmraid packages (and ran update-initramfs, though I'm not sure if that was relevant).
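For the record, the removal step was roughly the following (a sketch; the package name and commands assume a Debian/Ubuntu system, so adjust for your distribution):

```shell
# Remove the dmraid (fakeraid) layer so it stops claiming the disks.
sudo apt-get remove --purge dmraid
# Rebuild the initramfs so dmraid is no longer activated at boot.
sudo update-initramfs -u
```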
After that, and a reboot, the devices under /dev/mapper are gone (which is fine; I don't need to access the Windows NTFS partitions from Linux):
$ ls /dev/mapper/
control
And, most importantly, mdadm --create works!
$ sudo mdadm -Cv -l1 -n2 /dev/md0 /dev/sda4 /dev/sdb4
mdadm: size set to 241095104K
mdadm: array /dev/md0 started.
I checked /proc/mdstat and mdadm --detail /dev/md0, and both show that everything is fine with the newly created array.
$ cat /proc/mdstat
[..]
md0 : active raid1 sdb4[1] sda4[0]
241095104 blocks [2/2] [UU]
[==========>..........] resync = 53.1% (128205632/241095104)
finish=251.2min speed=7488K/sec
Then I created a filesystem on the new partition:
$ sudo mkfs.ext4 /dev/md0
And finally I just mounted the thing under /opt (and updated /etc/fstab). (I could of course have used LVM here too, but frankly I didn't see any point in that in this case, and I'd already wasted enough time trying to get this working...)
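For reference, the fstab entry would look something like the line below (a sketch; the mount options and fsck pass number are assumptions, and using the filesystem UUID instead of /dev/md0 is generally more robust):

```
# /etc/fstab entry for the new array (sketch)
/dev/md0  /opt  ext4  defaults  0  2
```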
So now the RAID partition is ready to use, and I've got plenty of disk space. :-)
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdc5 70G 52G 15G 79% /
/dev/md0 227G 188M 215G 1% /opt
Update: there are still some issues with this RAID device of mine. Upon reboot, it fails to mount even though I have it in fstab, and sometimes (after reboot) it appears to be in an inactive state and cannot be mounted even manually. See the follow-up question I posted.
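(For what it's worth, one common cause of an array coming up inactive after a reboot is that it isn't recorded in mdadm's config inside the initramfs. A sketch of the usual remedy, assuming a Debian/Ubuntu layout where the config lives at /etc/mdadm/mdadm.conf:)

```shell
# Append the array definition to mdadm's config so it is assembled at boot
# (path assumes Debian/Ubuntu; elsewhere it may be /etc/mdadm.conf).
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# Rebuild the initramfs so the early boot environment knows about the array.
sudo update-initramfs -u
```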
If you rebooted now, it would just try booting into an empty RAID, which of course you can't do.
The MD data is stored on disk in a metadata sector, which contains everything the disk needs to tell the OS that the array exists.
If you have created this through the live CD, then just using the Desktop installer will work after you have formatted the RAID using mkfs; but from what I remember you will need separate partitions for /boot and swap.
mke2fs -t ext4 /dev/<md device>
Best Answer
It is generally true that you need a separate /boot, unless you want to boot the system from just one of the two RAID1 disks and then remount it as md after the system is running, or set up an appropriate initramfs.
From mdadm wiki:
Although it isn't your question, it may be useful to consult RAID Boot for more information on using initramfs to start a system booting from md volumes.