I have an NVidia ION board with 4 SATA ports and want to use it to run a Linux server (CentOS 5.4). I first hooked up 3 HDs (that will become a RAID5 array) and a fourth small boot HD.

I first tried to use the onboard RAID capability, but that does not work correctly under Linux: the RAID capability is not a real RAID but uses lvm to define some arrays.

After setting the BIOS back to normal SATA mode and wiping the HDs, the first boot hard disk is seen as /dev/sda BEFORE installation, but afterwards as /dev/mapper/nvidia_. CentOS is unable to install on it (and grub is not installable on it either).

So somehow the hard disk is still seen as if it belongs to some LVM volume. I tried to clean out the HD by issuing a few `dd if=/dev/zero of=/dev/sda` commands to wipe the starting cylinders and final cylinders, but to no avail.
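Something like the following, shown here against a scratch image file so the snippet is safe to run as-is. On the real disk the target would be /dev/sda, with the sector count taken from `blockdev --getsz`; the 1 MiB figure is an assumption, based on the fact that fakeRAID metadata normally sits in the very last sectors of the disk:

```shell
# Demonstrated on a scratch image file instead of /dev/sda, since this
# overwrites data. On a real disk you would use DISK=/dev/sda and
# SECTORS=$(blockdev --getsz "$DISK") -- triple-check the device first.
DISK=$(mktemp)
dd if=/dev/urandom of="$DISK" bs=512 count=8192 2>/dev/null
SECTORS=8192

# Zero the FIRST 1 MiB (partition table / LVM label area)...
dd if=/dev/zero of="$DISK" bs=512 count=2048 conv=notrunc 2>/dev/null

# ...and the LAST 1 MiB, where fakeRAID metadata is normally stored.
# (conv=notrunc matters for the image file; on a block device it is harmless.)
dd if=/dev/zero of="$DISK" bs=512 \
   seek=$((SECTORS - 2048)) count=2048 conv=notrunc 2>/dev/null
```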
Has anyone seen this problem, and has anyone found a solution?
UPDATE
When I create only a single ext3 partition on the first HD (/dev/mapper/nvidia_…), no LVM partitions are seen and I can boot from /dev/mapper/nvidia_…. The next step is to see how I can get rid of this folly.
Best Answer
I think your problem has more to do with `dmraid` than LVM (see this note about a similar problem). `dmraid` is the Linux fakeRAID facility. It and LVM (and MD RAID, Linux's software RAID facility) use /dev/mapper devices, but as far as I know, LVM requires a standard partition on the disk to use as a physical volume (PV). /dev/mapper/nvidia_* probably refers to a fakeRAID set on an NVidia chipset (using the sata_nv kernel module).

Under this theory, what's happening is that your kernel is detecting the presence of that old RAID metadata on the drives and auto-configuring the device mapper (via `dmraid`) to use them. If it was LVM, I think you'd be able to tell with `fdisk -l /dev/sda`.

If your goal is to get back to a plain-jane /dev/sda style disk access, you'll need to:
1. Verify that dmraid or LVM are in use:
   - `dmraid -s` or `dmraid -r`
   - `pvscan` or `vgscan` (?)
   - `dmsetup ls` to query the device mapper directly.
2. If one or the other is in use, use those configuration tools to remove them:
   - `dmraid -an`, but this may not be enough. The manpage suggests `dmraid -r -E` can erase metadata, so that might be necessary.
   - `pvremove` or `vgremove` (or both)
   - `dmsetup remove` or `dmsetup remove_all` to delete devices from the device mapper driver.
3. ??
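The two steps above can be sketched as one script. This is a hedged sketch, not the answer's own procedure: the DRY_RUN guard, the /dev/sda target for `pvremove`, and the assumption that the dmraid and lvm2 packages are installed are all mine.

```shell
# Sketch of steps 1-2, assuming a CentOS 5 box with dmraid and lvm2
# installed. Step 1 is read-only; step 2 is DESTRUCTIVE, so it only
# echoes the commands unless you set DRY_RUN=0.

# --- Step 1: verify what is in use -------------------------------------
for cmd in "dmraid -s" "dmraid -r" "pvscan" "vgscan" "dmsetup ls"; do
    if command -v "${cmd%% *}" >/dev/null 2>&1; then
        echo "== $cmd"
        $cmd 2>&1 || true    # non-zero exit often just means "none found"
    else
        echo "== $cmd (not installed, skipping)"
    fi
done

# --- Step 2: remove the stale configuration ----------------------------
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run dmraid -an             # deactivate all fakeRAID sets
run dmraid -r -E           # erase the on-disk RAID metadata (see dmraid(8))
run pvremove /dev/sda      # wipe an LVM PV label, if one exists
run dmsetup remove_all     # drop any remaining device-mapper maps
```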
In short, you may have to play with `dmraid`, `dmsetup`, and the various LVM commands to see why your system insists on activating the device mapper. `lsmod` might be helpful to identify kernel modules in use so you can shut them down if necessary.

See also: the dmraid(8) and dmsetup(8) manpages.
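A footnote on the `lsmod` suggestion: a quick filter for the relevant modules. The name patterns (`dm_`, `raid`, `sata_nv`) are guesses of mine and vary by kernel build:

```shell
# Filter `lsmod` output down to device-mapper / RAID related modules.
# The patterns are assumptions -- adjust to what your kernel actually loads.
dm_modules() { awk 'NR==1 || /^(dm_|raid|sata_nv)/'; }
lsmod 2>/dev/null | dm_modules
```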