I need to write a directory tree (specifically, a Linux chroot) into a file containing an LVM image. The background of the task is silly, but for now I just want to understand what is going on.
I calculate the size of the directory with du:
# du -s --block-size=1 chroot
3762733056 chroot
I round it up and create a file large enough to encompass it:
# fallocate -l 4294967296 image.lvm
# ls -lah
drwxr-xr-x 23 root root 4.0K May 27 20:59 chroot
-rw-r--r-- 1 root root 4.0G May 28 09:59 image.lvm
I attach (sorry, not sure of the right term) the file as a loop device and create an LVM setup on it. I will use an ext4 filesystem on it; I know that ext4 reserves 5% of space for root (which I can tune) plus some space for the inode table, so I create a logical volume about 10% bigger than my actual directory (4139006362 bytes), rounded up to a multiple of 512 (4139006464 bytes) as LVM requires:
# losetup -f --show image.lvm
/dev/loop0
# pvcreate /dev/loop0
Physical volume "/dev/loop0" successfully created.
# vgcreate IMAGE /dev/loop0
Volume group "IMAGE" successfully created
# lvcreate --size 4139006464B -n CHROOT IMAGE
Rounding up size to full physical extent <3.86 GiB
Logical volume "CHROOT" created.
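The sizing arithmetic above can be sketched in plain shell (the 10% overhead figure is my own rough guess for ext4 bookkeeping, not a guarantee):

```shell
# Start from the byte count reported by `du -s --block-size=1 chroot`.
du_bytes=3762733056
# Add ~10% for ext4 overhead (reserved blocks, inode tables) -- a rough guess.
with_overhead=$(( du_bytes + du_bytes / 10 ))
# Round up to the next multiple of 512 for LVM.
lv_size=$(( (with_overhead + 511) / 512 * 512 ))
echo "$lv_size"    # 4139006464, the value passed to lvcreate
```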
I then create an ext4 filesystem on this LV:
# mkfs.ext4 /dev/IMAGE/CHROOT
mke2fs 1.45.6 (20-Mar-2020)
Discarding device blocks: done
Creating filesystem with 1010688 4k blocks and 252960 inodes
Filesystem UUID: fb3775ff-8380-4f97-920d-6092ae0cd454
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
# mount /dev/IMAGE/CHROOT mnt
# df --block-size=1 mnt
Filesystem 1B-blocks Used Available Use% Mounted on
/dev/mapper/IMAGE-CHROOT 4007591936 16179200 3767648256 1% /mnt
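It is worth noting where the gap between the raw filesystem size and df's total went. Assuming the ext4 default of 256-byte inodes, the mkfs output above accounts for almost all of it:

```shell
raw=$(( 1010688 * 4096 ))          # fs size from mkfs: 4139778048 bytes
df_total=4007591936                # total reported by df
inode_tables=$(( 252960 * 256 ))   # 252960 inodes at the ext4 default 256 B each
journal=$(( 16384 * 4096 ))        # journal size from the mkfs output
# What's left: superblock copies, bitmaps, group descriptors.
echo $(( raw - df_total - inode_tables - journal ))   # 319488 bytes (78 blocks)
```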
While 3767648256 is greater than the 3762733056 I got from du, I still tune it up a notch:
# tune2fs -m 0 /dev/IMAGE/CHROOT
tune2fs 1.45.6 (20-Mar-2020)
Setting reserved blocks percentage to 0% (0 blocks)
# df --block-size=1 mnt
Filesystem 1B-blocks Used Available Use% Mounted on
/dev/mapper/IMAGE-CHROOT 4007591936 16179200 3974635520 1% /mnt
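The jump in "Available" matches the default 5% root reserve exactly:

```shell
blocks=1010688                       # fs blocks from the mkfs output
reserved=$(( blocks * 5 / 100 ))     # default 5% root reserve: 50534 blocks
echo $(( reserved * 4096 ))          # 206987264 bytes
echo $(( 3974635520 - 3767648256 ))  # delta in df "Available": also 206987264
```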
So far so good, let's write some data to it:
# cp -a chroot/. mnt/
...
cp: cannot create regular file 'mnt/./usr/portage/profiles/hardened/linux/powerpc/ppc64/32bit-userland/use.mask': No space left on device
Bang. Let's see what df shows:
# df --block-size=1 mnt
Filesystem 1B-blocks Used Available Use% Mounted on
/dev/mapper/IMAGE-CHROOT 4007591936 3587997696 402817024 90% /mnt
So there is actually space available. After googling a bit, I found out that a partition can run out of inodes, which looks exactly like my case:
# df -i mnt
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/IMAGE-CHROOT 248K 248K 0 100% /mnt
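Every file, directory, and symlink consumes one inode, so the inode budget is simply the total entry count of the tree. A self-contained sketch on a throwaway tree (substitute the real chroot directory):

```shell
tmp=$(mktemp -d)                  # stand-in for the chroot directory
mkdir -p "$tmp/a/b"
touch "$tmp/a/f1" "$tmp/a/b/f2"
# Count everything -- no -type filter, so directories are included too.
find "$tmp" | wc -l               # 5: the root, a, b, f1, f2
rm -rf "$tmp"
```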
And now, the question! I could easily use a bigger file, create a 1.5x larger partition, write my files there, and it would work. But being the pedantic developer who wants to conserve space: how do I calculate precisely how many bytes and inodes I will need for my directory? I am also fairly certain I screwed up with --block-size=1 somewhere along the way.
The "why LVM" context: it is used for its snapshot capability. Basically, other scripts create a 20G snapshot of this 4G chroot, do their work in the snapshot, and then remove it, leaving the original contents of the chroot untouched. So the base filesystem can be considered read-only. Simple, stupid "containers" invented long before Docker, which cannot easily be replaced with Docker itself or its overlayfs.
Best Answer
mkfs.ext4 gives you three interesting options (see the man page for full details):

-i bytes-per-inode
-I inode-size
-N number-of-inodes

Using a combination of these, you can precisely shape the filesystem. If you are confident that you'll never need to create any additional files, or are mounting the filesystem read-only, then you could theoretically pass -N ${number-of-entities}. Remember that directories take an inode too.
You can get a count of entities using find or similar, remembering to count directories too! (i.e.: don't use -type f or -not -type d.) Now that you know (or can specify) the inode size as well, you can determine much more precisely how much headroom you'll need to allocate, and you can avoid "wasting" space on unused inodes.
If you are using the filesystem read-only, then another option could be to look into squashfs instead of ext4, which will allocate a contiguous (and compressed) block based specifically on the input files... rather than creating a container that you hope is big enough and filling it.
And unless you're really after something from LVM, you can easily get away without it, as shown above (and I'd recommend not using it). You might like or want an MBR, depending on how you'll be deploying the image.