Linux – Partition is missing in /dev


I've been having a strange problem since I moved from CentOS 5 to CentOS 6. I have three disks; the first two are used as a RAID 1 array, and the third is a stand-alone backup disk that is not listed in /etc/fstab (it is mounted when needed and then unmounted).

My problem: after a boot, /dev/sdc exists but /dev/sdc1 does not. The links in /dev/disk for the first partition of sdc are also absent. The disk itself is fine, and if I hot-remove it and plug it back in, /dev/sdc1 appears and everything works.
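
To illustrate, checking right after boot looks roughly like this (output paraphrased from memory, not a verbatim capture):

ls /dev/sdc*               # shows only /dev/sdc, no /dev/sdc1
ls -l /dev/disk/by-label/  # no link pointing at ../../sdc1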

My question: what subsystem manages auto-discovery of disks, partitions, etc. during the boot process (e.g. what creates /dev/disk/by-label)? How do I configure it to scan /dev/sdc too and create all the relevant files and links in /dev?

Edit: here's the relevant part of the dmesg output (the only place sdc appears). It does list sdc1, but it's not in /dev!

sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 3:0:0:0: [sdc] 976773168 512-byte logical blocks: (500 GB/465 GiB)
sd 1:0:0:0: [sdb] Write Protect is off
sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 3:0:0:0: [sdc] Write Protect is off
sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sdb:
 sdc:
sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sda:
DMAR:[DMA Read] Request device [00:1e.0] fault addr 361bc000 
DMAR:[fault reason 06] PTE Read access is not set
 sdb1 sdb2 sdb3
 sdc1
 sda1
sd 1:0:0:0: [sdb] Attached SCSI disk
sd 3:0:0:0: [sdc] Attached SCSI disk
 sda2 sda3
sd 0:0:0:0: [sda] Attached SCSI disk

Best Answer

I finally found the reason for this issue. The disk had been a member of an Intel RAID array, and Intel's RAID signature survived re-partitioning and re-formatting in another computer:

mdadm -Evvv /dev/sdc

          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
.................................................
[Archive Volume]:
           UUID : xxxx
     RAID Level : 1
        Members : 2
          Slots : [UU]

mdadm figured out that this disk belonged to a foreign RAID array, and even read Intel's metadata: volume name, RAID level, etc. Of course, all this data is stale and no longer true.

The fact that the disk was considered a member of a foreign RAID array was the reason it was not getting its partitions assigned in /dev. (The subsystem in question is udev: it creates the device nodes and the /dev/disk/by-* links, and it evidently skips partitions on a disk it sees as a RAID member.)
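
If you want to confirm what the probing layer sees for yourself, blkid can do a low-level probe of the whole disk. The output below is what an Intel Matrix (isw) signature typically looks like, not a capture from my machine:

blkid -p /dev/sdc
# /dev/sdc: TYPE="isw_raid_member" USAGE="raid"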

How to fix

mdadm --zero-superblock /dev/sdc

Substitute your own device for /dev/sdc, of course. This should be non-destructive to the filesystem already on the disk; at least mine survived without any issues. The RAID superblock usually lives in the last sector(s) of the disk.
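
After zeroing the superblock you shouldn't need to hot-plug the disk again; asking the kernel and udev to re-scan it should be enough. A minimal sketch, assuming the parted and udev packages are installed (adjust the device name):

partprobe /dev/sdc                        # tell the kernel to re-read the partition table
udevadm trigger --sysname-match='sdc*'    # re-run udev rules so nodes and links get created
ls /dev/sdc* /dev/disk/by-label/          # /dev/sdc1 and its links should now be present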

Moral of this story

Always, always clean up a disk before taking it out of a RAID array and re-using it somewhere else! The Internet abounds with stories of a foreign disk being assembled into a live array and ruining it in the process. I was lucky and got away with this very minor problem.

Usually, zeroing out the first and last few sectors is enough. Do it in the old system where the disk was originally used, or somewhere else while booted from a rescue CD (this applies to software RAID only!).
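
A sketch of that clean-up with dd. Warning: this deliberately destroys the partition table and any RAID metadata on the disk, so only run it on a disk you're pulling out for re-use; /dev/sdX is a placeholder:

SECTORS=$(blockdev --getsz /dev/sdX)            # disk size in 512-byte sectors
dd if=/dev/zero of=/dev/sdX bs=512 count=8192   # wipe the first 4 MiB (partition table, MD/LVM headers)
dd if=/dev/zero of=/dev/sdX bs=512 count=8192 seek=$((SECTORS - 8192))   # wipe the last 4 MiB (IMSM/DDF metadata)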
