Though it was downvoted ... possibly because someone thought it was not answering the question ... I think @Rony's answer is a good start at explaining what the boot flag is about. (I was actually planning to begin my answer with an example similar to the one he provided.)
I was all set to ramble off an answer about how the boot flag is, at this point in time, an often ignored (as @Rony's example shows) historical remnant from a period when hard drives were smaller and bootloaders were much less sophisticated.
But then I discovered this had already been said in this answer to this question: What is the "Bootable flag" option when installing a distro?
What's more, there was also a link to a short article about the Boot flag, which says: "Its primary function is to indicate to a MS-DOS/MS Windows-type boot loader which partition to boot. In some cases it is used by Windows XP/2000 to assign the active partition the letter "C:"."
Well, this is embarrassing ...
When I claimed that the boot flag was a "historical remnant", I was assuming this was the case because clearly GRUB had no need to use it. Surely Microsoft would also have "moved on".
The well-known quote usually attributed to Oscar Wilde turned out to be too true in this instance.
It appears that the MBR and PBR (Partition Boot Record) loaders used by the Windows operating systems DO expect the boot flag to be set correctly.
To test this I cleared the boot flag from all the partitions of a Windows 8 VM. (See below. If you're curious, here's a link to the pastebin of the complete BootInfo Script result)
Drive: sda
Disk /dev/sda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
Partition    Boot    Start Sector    End Sector    # of Sectors   Id  System
/dev/sda1                   2,048       718,847         716,800    7  NTFS / exFAT / HPFS
/dev/sda2                 718,848    52,426,751      51,707,904    7  NTFS / exFAT / HPFS
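The answer doesn't say which tool was used to toggle the flag; one way to reproduce the experiment (from a live/rescue environment, assuming the disk is /dev/sda with an MBR/msdos partition table) is with parted:

# Clear the boot flag from both partitions:
parted /dev/sda set 1 boot off
parted /dev/sda set 2 boot off

# For the second test, set the flag on the "wrong" partition:
parted /dev/sda set 2 boot on

# Check which partition (if any) currently carries the flag:
parted /dev/sda print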
When I cleared the flag from both partitions, I got the error message FATAL: INT18: BOOT FAILURE when I attempted to boot. (I am not sure if that is from the Windows MBR bootloader or the VM's equivalent of a BIOS.)
Just to see what would happen, I also set the boot flag on the "wrong" partition, /dev/sda2 instead of /dev/sda1. Doing that resulted in the window shown in the image below.
<sigh/>
This experience makes me wonder whether Microsoft is still using the same MBR boot sector loader that they used for MS-DOS and Windows 3.0/3.1.
The first one reports the UUID of the ext4 filesystem on the md block device. It lets the system identify that filesystem uniquely among the filesystems available on the system. It is stored in the structure of the filesystem itself, that is, in the data stored on the md device.
The second one is the UUID of the RAID device. It lets the md subsystem identify that particular RAID device uniquely. In particular, it makes it possible to identify all the block devices that belong to the RAID array, and it is stored in the metadata of the array (on each member). Array members also have their own UUID within the md system; they may additionally have partition UUIDs if they are GPT partitions (stored in the GPT partition table itself), or be LVM volumes, and so on.
blkid is a bit misleading, as what it returns is the ID of the structure stored on the device (for those kinds of structures it knows about, like most filesystems, LVM members and swap devices). Also note that it's not uncommon to have block devices with structures with identical UUIDs (for instance LVM snapshots). And a block device can contain anything, including things whose structure doesn't include a UUID.
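As a concrete illustration of that distinction (assuming the array is /dev/md0 and /dev/sda1 is one of its members), you could compare:

# UUID of the structure stored *on* the array (e.g. an ext4 filesystem):
blkid /dev/md0

# UUID of the RAID array itself, kept in the md metadata:
mdadm --detail /dev/md0

# The same array metadata as recorded on an individual member:
mdadm --examine /dev/sda1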
So, as an example, you could have a system with 3 drives, with GPT partitioning. Those drives could have a World Wide Name which identifies each of them uniquely. Let's say the 3 drives are partitioned with one partition each (/dev/sd[abc]1). Each partition will have a GPT UUID stored in the GPT partition table.
If those partitions make up an md RAID5 array, each will get an md UUID as a RAID member, and the array itself will get a UUID as an md RAID device.
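If you wanted to build that hypothetical stack yourself, the first layers might be created roughly like this (the device names are just the ones used in the example; the exact parted and mdadm options are one possible choice):

# One GPT partition on each of the three drives:
for d in /dev/sda /dev/sdb /dev/sdc; do
    parted -s "$d" mklabel gpt mkpart primary 1MiB 100%
done

# Combine the three partitions into a RAID5 array:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[abc]1

# The array UUID, and the role of each member, are recorded in the md metadata:
mdadm --detail /dev/md0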
That /dev/md0 can be further partitioned with MSDOS or GPT-type partitioning. For instance, we could have a /dev/md0p1 partition with a GPT UUID (stored in the GPT partition table that is stored in the data of /dev/md0).
That could in turn be a physical volume for LVM. As such it will get a PV UUID. The volume group will also have a VG UUID.
In that volume group, you would create logical volumes, each getting an LV UUID.
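Continuing the example, the LVM layers on top of the GPT-partitioned array might look like this (VG and LV are just placeholder names, and the sizes are arbitrary):

# A GPT label and one partition on the array itself:
parted -s /dev/md0 mklabel gpt mkpart primary 1MiB 100%

# Turn that partition into an LVM physical volume (it gets a PV UUID):
pvcreate /dev/md0p1

# Create a volume group on it (it gets a VG UUID):
vgcreate VG /dev/md0p1

# Create a logical volume inside the group (it gets an LV UUID):
lvcreate -n LV -L 10G VG

# Show the UUIDs of each LVM layer:
pvs -o pv_name,pv_uuid
vgs -o vg_name,vg_uuid
lvs -o vg_name,lv_name,lv_uuid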
On one of those LVs (like /dev/VG/LV), you could make an ext4 filesystem. That filesystem would get an ext4 UUID.
blkid /dev/VG/LV would get you the (ext4) UUID of that filesystem. But as a partition inside the VG volume, it would also get a partition UUID (some partitioning schemes like MSDOS/MBR don't have UUIDs). That volume group is made of member PVs which are themselves other block devices. blkid /dev/md0p1 would give you the PV UUID. It also has a partition UUID in the GPT table on /dev/md0. /dev/md0 itself is made of other block devices. blkid /dev/sda1 will return the RAID-member UUID. It also has a partition UUID in the GPT table on /dev/sda.
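With the whole stack in place (and an ext4 filesystem created on /dev/VG/LV with, say, mkfs.ext4), the different UUIDs mentioned above can be read back layer by layer:

# ext4 filesystem UUID on the logical volume:
blkid /dev/VG/LV

# PV UUID of the LVM member sitting in the GPT partition on the array:
blkid /dev/md0p1

# md RAID metadata as seen on one of the underlying member partitions:
blkid /dev/sda1

# One consolidated view of filesystem UUIDs and partition UUIDs for the stack:
lsblk -o NAME,TYPE,FSTYPE,UUID,PARTUUID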
Best Answer
There are two separate things:
the filesystem, a data structure that provides a way to store distinct named files, and
the block device (disk, partition, LVM volume) inside which the filesystem lies
resize2fs resizes the filesystem, i.e. it modifies the data structures there to make use of new space, or to fit them into a smaller space. It doesn't affect the size of the underlying device.
lvresize resizes an LVM volume, but it doesn't care at all what lies within it.
So, to reduce a volume, you have to first shrink the filesystem to the new size (resize2fs), and only after that resize the volume to the new size (lvresize). Doing it the other way around would trash the filesystem when the device was shrunk.
But to increase the size of a volume, you first resize the volume, and then the filesystem. Doing it the other way around, you couldn't make the filesystem larger, since there would be no new space for it to use (yet).
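As an illustration of that ordering (the names and sizes are only placeholders; this assumes an ext4 filesystem on a logical volume /dev/VG/LV):

# Shrinking: filesystem first, then the volume.
umount /dev/VG/LV            # ext4 can only be shrunk while unmounted
e2fsck -f /dev/VG/LV         # resize2fs insists on a freshly checked filesystem
resize2fs /dev/VG/LV 20G     # shrink the filesystem to 20 GiB
lvresize -L 20G /dev/VG/LV   # then shrink the volume to the same size

# Growing: volume first, then the filesystem.
lvresize -L +5G /dev/VG/LV   # grow the volume
resize2fs /dev/VG/LV         # then grow the filesystem to fill the new space

Recent versions of LVM also offer lvresize --resizefs (-r), which uses fsadm to run both steps in the correct order for you.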