Well, first, what is an inode? In the Unix world, an inode is the filesystem entry that holds a file's metadata. A filename in a directory is just a label (a link!) pointing to an inode. An inode can be referenced from multiple places (hard links!).
-i bytes-per-inode (aka inode_ratio)
This parameter is sometimes documented as bytes-per-inode and sometimes as inode_ratio. According to the documentation, it is the bytes/inode ratio. Most people will understand it better when stated as either:
- 1 inode for every X bytes of storage (where X is bytes-per-inode).
- the lowest average file size you can fit (with a smaller average, you run out of inodes before you run out of space).
The formula (taken from the mke2fs source code):
inode_count = (blocks_count * blocksize) / inode_ratio
Or even simplified (assuming "partition size" is roughly equivalent to blocks_count * blocksize; I haven't checked the exact allocation):
inode_count = (partition_size_in_bytes) / inode_ratio
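For instance, with an inode_ratio of 16384 (a common default from /etc/mke2fs.conf; your distribution may differ), a 100 GiB partition gets about 6.5 million inodes:

```shell
# inode_count = partition_size_in_bytes / inode_ratio
# Assumes the common default inode_ratio of 16384 (check /etc/mke2fs.conf).
partition_size=$(( 100 * 1024 * 1024 * 1024 ))   # 100 GiB
inode_ratio=16384
echo $(( partition_size / inode_ratio ))          # → 6553600
```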
Note 1: Even if you provide a fixed number of inodes at FS creation time (mkfs -N ...), the value is converted into a ratio, so you can fit more inodes as you extend the size of the filesystem.
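A quick way to see the resulting count (a sketch using a throwaway image file; paths and the requested count are illustrative, and no root is needed for an image):

```shell
# Ask mke2fs for an explicit inode count; internally it is stored as a
# ratio, which is why growing the filesystem later also adds inodes.
truncate -s 64M /tmp/inode-test.img
mkfs.ext4 -q -N 32768 /tmp/inode-test.img
tune2fs -l /tmp/inode-test.img | grep 'Inode count'   # reports ~32768
rm -f /tmp/inode-test.img
```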
Note 2: If you tune this ratio, make sure to allocate significantly more inodes than you plan to use... you really don't want to have to reformat your filesystem.
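When deciding how much headroom to leave, it can help to look at inode consumption on an existing filesystem (a sketch; the mount point and directory are just examples):

```shell
# Current inode usage (Inodes/IUsed/IFree/IUse%) on the filesystem holding /:
df -i /
# Estimate how many inodes a directory tree would need: roughly one per
# file, directory and symlink (hard links share one inode).
find /etc -xdev 2>/dev/null | wc -l
```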
-I inode-size
This is the number of bytes the filesystem will allocate/reserve for each inode the filesystem may have. The space is used to store the attributes of the inode (read Intro to Inodes). In ext3, the default size was 128. In ext4, the default size is 256 (to store extra_isize and provide space for inline extended attributes). Read Linux: Why change inode size?
Note: X bytes of disk space are allocated for every inode the filesystem reserves, whether it is free or used, where X=inode-size.
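So the inode table itself consumes space up front. A rough sketch of that overhead, using the same assumed defaults as above (-i 16384, ext4's default -I 256):

```shell
# Space reserved for the inode table: inode_count * inode_size bytes.
# Assumes inode_ratio=16384 and inode_size=256 (ext4 defaults; check
# /etc/mke2fs.conf on your system).
inodes=$(( (100 * 1024 * 1024 * 1024) / 16384 ))   # 6553600 inodes on 100 GiB
echo $(( inodes * 256 / 1024 / 1024 ))              # overhead in MiB → 1600
```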
Possibly the simplest solution is to heavily overprovision the space initially, copy all the files, then use resize2fs -M to reduce the size to the minimum this utility can manage. Here's an example:
dir=/home/meuh/some/dir
rm -f /tmp/image
# start with an image twice the size of the data, to be safe
size=$(du -sb "$dir/" | awk '{print $1*2}')
truncate -s "$size" /tmp/image
# -m 0: no reserved blocks; -O ^64bit: 32-bit layout, shrinks better
mkfs.ext4 -m 0 -O ^64bit /tmp/image
sudo mount /tmp/image /mnt/loop
sudo chown "$USER" /mnt/loop
rsync -a "$dir/" /mnt/loop
sync
df /mnt/loop
sudo umount /mnt/loop
# resize2fs requires a clean filesystem check first
e2fsck -f /tmp/image
resize2fs -M /tmp/image
# parse the new block count from e2fsck's summary line (assumes 1k blocks)
newsize=$(e2fsck -n /tmp/image | awk -F/ '/blocks$/{print $NF*1024}')
truncate -s "$newsize" /tmp/image
# remount and verify the shrunken image still matches the source
sudo mount /tmp/image /mnt/loop
df /mnt/loop
diff -r "$dir/" /mnt/loop
sudo umount /mnt/loop
Some excerpts from the output for an example directory:
+ size=13354874
Creating filesystem with 13040 1k blocks and 3264 inodes
+ df /mnt/loop
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/loop1 11599 7124 4215 63% /mnt/loop
+ resize2fs -M /tmp/image
Resizing the filesystem on /tmp/image to 8832 (1k) blocks.
+ newsize=9043968
+ truncate -s 9043968 /tmp/image
+ df /mnt/loop
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/loop1 7391 7124 91 99% /mnt/loop
Best Answer
ext4 has a max_dir_size_kb mount option to limit the size of directories, but no similar option for regular files.

A process can, however, be prevented from creating a file bigger than a given limit using resource limits as set by setrlimit() or the ulimit or limit builtins of some shells. Most systems will also let you set those limits system-wide, per user.

When a process exceeds that limit, it receives a SIGXFSZ signal. And when it ignores that signal, the operation that would have caused the file size to be exceeded (like a write() or truncate() system call) fails with an EFBIG error.

To move that limit into the file system, one trick you could use is a FUSE (file system in user space) file system, where the user-space handler is started with that limit set.
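As an illustration (not from the answer; assumes bash, dd, and a throwaway /tmp path), the SIGXFSZ/EFBIG behaviour can be observed directly:

```shell
# Sketch: cap file size at 512 KiB with RLIMIT_FSIZE, ignore SIGXFSZ,
# then try to write 1 MiB; the write stops at the limit and dd fails
# with "File too large" (EFBIG).
(
  ulimit -f 1024                  # ulimit -f counts 512-byte blocks: 512 KiB
  trap '' XFSZ                    # ignore SIGXFSZ so write() returns EFBIG
  dd if=/dev/zero of=/tmp/limited.bin bs=1M count=1
)
stat -c '%s' /tmp/limited.bin     # size is capped at 524288 bytes
rm -f /tmp/limited.bin
```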
bindfs is a good candidate for that. If you run bindfs dir dir (that is, bind dir over itself), with bindfs itself started with that limit already set (the command was given in zsh syntax), then any attempt to create a file bigger than 1M in that dir will fail.
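The zsh command itself is not shown above; a plausible reconstruction (an assumption: a subshell that sets RLIMIT_FSIZE to 1 MiB, i.e. 2048 blocks of 512 bytes, before launching bindfs so it inherits the limit):

```shell
# Hypothetical reconstruction (the original command is missing above):
# set the file-size limit in a subshell, then start bindfs under it.
(ulimit -f 2048; bindfs dir dir)    # 2048 * 512 bytes = 1 MiB
```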
bindfs forwards the EFBIG error to the process writing the file.

Note that this limit only applies to regular files; it won't stop directories from growing past that limit (for instance, by creating a large number of files in them).