Command ‘pvs’ says PV device not found, but LVs are mapped and mounted


I had a problem with my system (a faulty internal power cable). When I got the system back up and running, with the arrays rebuilding and so on, I found myself in a situation where the pvs command (and vgs and lvs) reports "No device found for PV <UUID>", yet the logical volumes on the supposedly missing physical volume can still be mounted successfully, as their DM devices exist and are mapped in /dev/mapper.

The PV device is an md-raid RAID10 array, which itself seems fine; the confusing part is that it doesn't appear in the pvs output.

I assume this is a problem with some internal tables being out of sync. How do I get things mapped correctly (without a reboot, which I assume would fix it)?


Update:

A reboot did NOT fix the problem. I believe the issue stems from the configuration of the 'missing' PV (/dev/md99), which is a RAID10 far-2 array built from a 750GB disk (/dev/sdk) and a RAID0 array (/dev/md90), the latter built from a 250GB disk (/dev/sdh) and a 500GB disk (/dev/sdl). It seems from the output of pvscan -vvv that the lvm2 signature is found on /dev/sdh, but not on /dev/md99.

    Asking lvmetad for VG f1bpcw-oavs-1SlJ-0Gxf-4YZI-AiMD-WGAErL (name unknown)
  Setting response to OK
  Setting response to OK
  Setting name to b
  Setting metadata/format to lvm2
    Metadata cache has no info for vgname: "b"
  Setting id to AzKyTe-5Ut4-dxgq-txEc-7V9v-Bkm5-mOeMBN
  Setting format to lvm2
  Setting device to 2160
  Setting dev_size to 1464383488
  Setting label_sector to 1
    Opened /dev/sdh RO O_DIRECT
  /dev/sdh: size is 488397168 sectors
    /dev/sdh: block size is 4096 bytes
    /dev/sdh: physical block size is 512 bytes
    Closed /dev/sdh
  /dev/sdh: size is 488397168 sectors
    Opened /dev/sdh RO O_DIRECT
    /dev/sdh: block size is 4096 bytes
    /dev/sdh: physical block size is 512 bytes
    Closed /dev/sdh
    /dev/sdh: Skipping md component device
  No device found for PV AzKyTe-5Ut4-dxgq-txEc-7V9v-Bkm5-mOeMBN.
    Allocated VG b at 0x7fdeb00419f0.
  Couldn't find device with uuid AzKyTe-5Ut4-dxgq-txEc-7V9v-Bkm5-mOeMBN.
    Freeing VG b at 0x7fdeb00419f0.

The only reference to /dev/md99, which should be the PV, is when it's added to the device cache.
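A quick way to double-check where the lvm2 label is actually visible (the device names are the ones from my layout, and the --config override only matters while lvmetad is in use):

    # Show RAID/LVM signatures as seen on the devices in question.
    blkid /dev/sdh /dev/md99

    # Ask LVM itself, bypassing lvmetad for this one command so the
    # on-disk labels are read directly rather than the cached view.
    pvs -o pv_name,pv_uuid,vg_name --config 'global { use_lvmetad = 0 }'

    # Confirm how the nested arrays hang together.
    mdadm --detail /dev/md99 /dev/md90
    mdadm --examine /dev/sdh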


Update 2:

Stopping lvm2-lvmetad and repeating the pvscan confirms that the issue is that the system is confused about which PV to use, as it finds two devices with the same UUID.
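Roughly, the check amounted to the following (the unit name assumes a systemd-based distribution; adjust for other init systems):

    # Stop the metadata caching daemon so pvscan scans the disks itself
    # rather than relying on the cached view.
    systemctl stop lvm2-lvmetad.service

    # Repeat the verbose scan to see which device wins the duplicate.
    pvscan -vvv

The relevant part of the output: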

    Using /dev/sdh
    Opened /dev/sdh RO O_DIRECT
    /dev/sdh: block size is 4096 bytes
    /dev/sdh: physical block size is 512 bytes
  /dev/sdh: lvm2 label detected at sector 1
  Found duplicate PV AzKyTe5Ut4dxgqtxEc7V9vBkm5mOeMBN: using /dev/sdh not /dev/md99
    /dev/sdh: PV header extension version 1 found
  Incorrect metadata area header checksum on /dev/sdh at offset 4096
    Closed /dev/sdh
    Opened /dev/sdh RO O_DIRECT
    /dev/sdh: block size is 4096 bytes
    /dev/sdh: physical block size is 512 bytes
  Incorrect metadata area header checksum on /dev/sdh at offset 4096
    Closed /dev/sdh
    Opened /dev/sdh RO O_DIRECT
    /dev/sdh: block size is 4096 bytes
    /dev/sdh: physical block size is 512 bytes
    Closed /dev/sdh
  Incorrect metadata area header checksum on /dev/sdh at offset 4096
    Telling lvmetad to store PV /dev/sdh (AzKyTe-5Ut4-dxgq-txEc-7V9v-Bkm5-mOeMBN)
  Setting response to OK

Since this configuration was only meant to be temporary, I think I'd do better to rearrange my disk usage.

Unless anyone can tell me how to explicitly override the order in which pvscan views devices?

Best Answer

The first things to check are the filter and global_filter options in /etc/lvm/lvm.conf. Make sure you aren't filtering out the devices your PVs reside on.
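For example, using the device names from the question, a filter that accepts the md array holding the PV and rejects its component devices might look something like this; as far as I know, when lvmetad is in use it is global_filter rather than filter that controls what gets cached:

    devices {
        # Accept the RAID10 array that actually carries the PV, reject its
        # component devices so the label leaked onto /dev/sdh is ignored,
        # and leave everything else accepted by default.
        global_filter = [ "a|^/dev/md99$|", "r|^/dev/md90$|", "r|^/dev/sd[hkl]$|", "a|.*|" ]
    }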

The cache is set with the cache_dir option in the same file; on my Debian box it defaults to /run/lvm. The cache (if any) should be in that directory. If obtain_device_list_from_udev is set, I believe no cache is used.
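To see what the running configuration resolves these options to, and to clear a stale cache file if one exists (the .cache file name and /run/lvm path are just the defaults mentioned above, so adjust if yours differ):

    # Print the effective settings ("lvm dumpconfig" on older releases).
    lvmconfig devices/cache_dir devices/obtain_device_list_from_udev

    # If a persistent device cache file exists, removing it forces LVM to
    # rescan the block devices on the next command.
    rm -f /run/lvm/.cache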

Finally, check if use_lvmetad is set. If so, you may need to restart the LVM metadata daemon.
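Something along these lines (the unit name assumes a systemd-based distribution):

    # Check whether commands are expected to talk to lvmetad.
    lvmconfig global/use_lvmetad

    # If so, restart the daemon and have pvscan repopulate its cache.
    systemctl restart lvm2-lvmetad.service
    pvscan --cache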
