Linux – How to calculate exact size of partition and number of inodes to write a directory

Tags: du, ext4, filesystems, linux, partitioning

I need to write a directory of files (specifically, a Linux chroot) to a file containing an LVM image. The background of the task is silly, but for now I want to understand what is going on.
I calculate the size of the directory with du:

# du -s --block-size=1 chroot
3762733056  chroot
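Note that du reports blocks allocated on the *source* filesystem, not the sum of file sizes; sparse files and the source filesystem's block size make the two differ, and the same files may occupy yet another amount on the target ext4. A quick illustration with a temporary sparse file:

```shell
# Create a sparse file: 1 MiB apparent size, (almost) no allocated blocks.
f=$(mktemp)
truncate -s 1M "$f"
du --block-size=1 "$f"                  # allocated bytes: 0 on most filesystems
du --block-size=1 --apparent-size "$f"  # apparent size: 1048576
rm -f "$f"
```

So the figure you start the calculation from depends on which of the two you ask du for.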

I round it up and create a file large enough to encompass it:

# fallocate -l 4294967296 image.lvm
# ls -lah
drwxr-xr-x 23 root    root    4.0K May 27 20:59 chroot
-rw-r--r--  1 root    root    4.0G May 28 09:59 image.lvm

I attach (mount? not sure of the right term) the file as a loop device and create an LVM physical volume on it. I will use an ext4 filesystem on it, and I know that ext4 reserves 5% of the space for root (which I can tune) plus some space for the inode table, so I make the logical volume about 10% bigger than my actual directory (4139006362 bytes) and round that up to a multiple of 512 (4139006464 bytes) for LVM's sake:
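The padding and rounding above reduce to a few lines of shell arithmetic; this is just a sketch of that calculation (the 10% headroom is a guess, not a derived figure):

```shell
size=3762733056                            # bytes reported by du
padded=$(( size + size / 10 ))             # add ~10% headroom for metadata
lv_size=$(( (padded + 511) / 512 * 512 ))  # round up to a multiple of 512
echo "$lv_size"                            # 4139006464
```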

# losetup -f --show image.lvm
/dev/loop0
# pvcreate /dev/loop0
  Physical volume "/dev/loop0" successfully created.
# vgcreate IMAGE /dev/loop0
  Volume group "IMAGE" successfully created
# lvcreate --size 4139006464B -n CHROOT IMAGE
  Rounding up size to full physical extent <3.86 GiB
  Logical volume "CHROOT" created.

I then create an ext4 filesystem on this LV:

# mkfs.ext4 /dev/IMAGE/CHROOT
mke2fs 1.45.6 (20-Mar-2020)
Discarding device blocks: done                            
Creating filesystem with 1010688 4k blocks and 252960 inodes
Filesystem UUID: fb3775ff-8380-4f97-920d-6092ae0cd454
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

# mount /dev/IMAGE/CHROOT mnt
# df --block-size=1 mnt
Filesystem                   1B-blocks         Used    Available Use% Mounted on
/dev/mapper/IMAGE-CHROOT    4007591936     16179200   3767648256   1% /mnt

While 3767648256 is greater than the 3762733056 I got from du, I still tune it up a notch:

# tune2fs -m 0 /dev/IMAGE/CHROOT
tune2fs 1.45.6 (20-Mar-2020)
Setting reserved blocks percentage to 0% (0 blocks)
# df --block-size=1 mnt
Filesystem                1B-blocks     Used  Available Use% Mounted on
/dev/mapper/IMAGE-CHROOT 4007591936 16179200 3974635520   1% /mnt
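The jump in "Available" matches the default 5% root reservation exactly: 5% of the 1010688 blocks that mkfs reported, in 4 KiB blocks:

```shell
blocks=1010688                     # 4k blocks, from the mkfs.ext4 output
reserved=$(( blocks * 5 / 100 ))   # default 5% reservation, in blocks
echo $(( reserved * 4096 ))        # 206987264 = 3974635520 - 3767648256
```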

So far so good, let's write some data to it:

# cp -a chroot/. mnt/
...
cp: cannot create regular file 'mnt/./usr/portage/profiles/hardened/linux/powerpc/ppc64/32bit-userland/use.mask': No space left on device

Bang. Let's see what df shows:

# df --block-size=1 mnt
Filesystem                1B-blocks       Used Available Use% Mounted on
/dev/mapper/IMAGE-CHROOT 4007591936 3587997696 402817024  90% /mnt

So there is actually space available. After a bit of googling, I found that you can run out of inodes on a partition, which seems to be exactly my case:

# df -i mnt
Filesystem               Inodes IUsed IFree IUse% Mounted on
/dev/mapper/IMAGE-CHROOT   248K  248K     0  100% /mnt
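Those ~248K inodes follow from the mkfs defaults: with the stock mke2fs.conf bytes-per-inode ratio of 16384, a filesystem of 1010688 4k blocks gets roughly one inode per 16 KiB (mkfs then rounds up slightly so every block group holds the same number of inodes, hence the 252960 it actually reported):

```shell
blocks=1010688     # 4k blocks, from the mkfs.ext4 output
ratio=16384        # default inode_ratio shipped in /etc/mke2fs.conf
echo $(( blocks * 4096 / ratio ))   # 252672, close to the 252960 mkfs created
```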

And now, the question! I could easily use a bigger file, create a 1.5x larger partition, write my files there, and it would work. But being the pedantic developer who wants to conserve space: how do I calculate precisely how many bytes and inodes I will need to write my directory? I am also fairly certain I screwed up with --block-size=1 somewhere along the way.

The "why LVM" context: it is used for its snapshot capabilities. Other scripts create a 20G snapshot of said 4G chroot, do their work in the snapshot, and then remove it, leaving the original contents of the chroot untouched. So the base filesystem can be considered read-only. Simple, stupid "docker containers" invented long before Docker, which cannot easily be replaced by Docker itself or its overlayfs.

Best Answer

mkfs.ext4 gives you three interesting options (see the man page for full details).

  • -i bytes-per-inode

    • Specify the bytes/inode ratio.
    • mke2fs creates an inode for every bytes-per-inode bytes of space on the disk.
    • The larger the bytes-per-inode ratio, the fewer inodes will be created.
  • -I inode-size

    • Specify the size of each inode in bytes.
    • The default inode size is controlled by the mke2fs.conf(5) file. In the mke2fs.conf file shipped with e2fsprogs
      • The default inode size is 256 bytes for most file systems,
      • Except for small file systems where the inode size will be 128 bytes.
  • -N number-of-inodes

    • Overrides the default calculation of the number of inodes that should be reserved for the filesystem (which is based on the number of blocks and the bytes-per-inode ratio).
    • This allows the user to specify the number of desired inodes directly.

Using a combination of these, you can precisely shape the filesystem. If you are confident that you'll never need to create any additional files or are mounting the filesystem as read-only, then you could theoretically give -N ${number-of-entities}.

$ truncate -s 10M ino.img
$ mkfs.ext4 -N 5 ino.img
mke2fs 1.44.1 (24-Mar-2018)
Discarding device blocks: done
Creating filesystem with 10240 1k blocks and 16 inodes
Filesystem UUID: 164876f1-bbfa-405f-8b2d-704830d7c165
Superblock backups stored on blocks:
        8193

Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

$ mount -o loop ino.img ./mnt
$ df -i mnt
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/loop0         16    11     5   69% /home/attie/box/mnt
$ touch ./mnt/1
$ touch ./mnt/2
$ touch ./mnt/3
$ touch ./mnt/4
$ touch ./mnt/5
$ touch ./mnt/6
touch: cannot touch './mnt/6': No space left on device
$ df -B1 mnt
Filesystem     1B-blocks   Used Available Use% Mounted on
/dev/loop0       9425920 176128   8516608   3% /home/attie/box/mnt
$ df -i mnt
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/loop0         16    16     0  100% /home/attie/box/mnt

Remember that directories will take an inode too:

$ mkfs.ext4 -N 5 ino.img
mke2fs 1.44.1 (24-Mar-2018)
ino.img contains a ext4 filesystem
        last mounted on /home/attie/box/mnt on Thu May 28 09:08:41 2020
Proceed anyway? (y/N) y
Discarding device blocks: done
Creating filesystem with 10240 1k blocks and 16 inodes
Filesystem UUID: a36efc6c-8638-4750-ae6f-a900ada4330f
Superblock backups stored on blocks:
        8193

Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

$ mount -o loop ino.img ./mnt
$ mkdir mnt/1
$ mkdir mnt/2
$ touch mnt/a
$ touch mnt/b
$ touch mnt/1/c
$ touch mnt/2/d
touch: cannot touch 'mnt/2/d': No space left on device

You can get a count of entities using find or similar, remembering to count directories too! (i.e. don't use -type f or -not -type d).

find "${source_dir}" | wc -l

Now that you know (or can specify) the inode size as well, you can determine much more precisely how much headroom you'll need to allocate, and you can avoid "wasting" space on unused inodes.
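Putting the counting rule together, here is a small self-contained check of the find-based count on a sample tree (the tree itself is made up for illustration):

```shell
# Build a small sample tree: a directory, a subdirectory, and two files.
src=$(mktemp -d)
mkdir -p "$src/a/b"
touch "$src/a/f1" "$src/a/b/f2"

# find lists the top directory itself plus everything beneath it, so every
# directory, file, and symlink that will need an inode gets counted once.
inodes=$(find "$src" | wc -l)
echo "$inodes"   # 5
rm -rf "$src"
```

You could then pass that count (plus a little slack for lost+found and any files created later) to mkfs.ext4 -N.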


If you are using the filesystem read-only, then another option could be to look into squashfs instead of ext4, which builds a contiguous (and compressed) image sized exactly to the input files, rather than creating a container that you hope is big enough and filling it.

And unless you're really after something from LVM, you can easily get away without it, as shown above (and I'd recommend avoiding it here). You might like/want an MBR, depending on how you'll be deploying the image.