Ext4: where did the free space go

ext4, filesystems, mageia

I just upgraded my Mageia 2 64-bit VM to Mageia 3, and now the free space on my virtual disk is almost nil, even though the used space is less than the total size.

I am not an ext4 filesystem expert, but I have gathered below all the relevant information I could think of:

Output of the df -h / command:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        11G  9.9G     0 100% /

Just in case, same output without the -h option:

Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda1       10645080 10353564         0 100% /

Now with the -i option:

Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/sda1        670K  338K  332K   51% /

In my understanding, I should have either:
- 11G - 9.9G = 1.1G of free space, give or take rounding (up to a 0.1G error margin), or
- 11G - 9.9G - 0.05 * 11G = 0.55G, to take the reserved blocks percentage into account (5% in my case, the default distribution option).

I am not even able to create any file.

Also, I have deleted between 300 and 500 MB of files since I first ran into this "no free space" problem, but the free space still shows 0, and I have been denied creating any file since (even logged in as root)!
I have also rebooted the system several times, in case some deleted files were still held open by a process (see the check below), but that did not change the reported free space.
Finally, since the first occurrence of the problem, uptime has been at most 12 hours, and du -h /var/log gives 38M, so no log madness here.
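For reference, a check along these lines should list any files that are deleted but still held open by a running process (assuming lsof is installed; after a reboot nothing should show up anyway, but it rules that cause out):

lsof +L1                     # open files whose link count is 0, i.e. deleted but still held open
# or, restricted to the filesystem mounted at /:
lsof / | grep -i deleted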

The first lines of dumpe2fs output give:

dumpe2fs 1.42.7 (21-Jan-2013)
Filesystem volume name:   <none>
Last mounted on:          /sysroot
Filesystem UUID:          45b03008-87fa-4684-a98a-bd5177e2b475
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              685440
Block count:              2737066
Reserved block count:     136853
Free blocks:              88348
Free inodes:              339326
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      668
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8160
Inode blocks per group:   510
Flex block group size:    16
Filesystem created:       Mon Aug 20 12:34:41 2012
Last mount time:          Wed May 29 14:29:19 2013
Last write time:          Wed May 29 14:09:03 2013
Mount count:              44
Maximum mount count:      -1
Last checked:             Mon Aug 20 12:34:41 2012
Check interval:           0 (<none>)
Lifetime writes:          66 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       46157
Default directory hash:   half_md4
Directory Hash Seed:      bc061f51-9d12-4851-a428-6fb59984118b
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x00015d7c
Journal start:            17427

Finally, just in case, I tried toying with the reserved block count; here are the command and its output:
Command: tune2fs -m 0 /dev/sda1 && /usr/bin/df / && tune2fs -m 5 /dev/sda1 && /usr/bin/df /

tune2fs 1.42.7 (21-Jan-2013)
Setting reserved blocks percentage to 0% (0 blocks)
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda1       10645080 10353576    291504  98% /
tune2fs 1.42.7 (21-Jan-2013)
Setting reserved blocks percentage to 5% (136853 blocks)
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda1       10645080 10353576         0 100% /

So apparently, by setting the reserved blocks percentage to 0 I can recover about 285M, but this doesn't match the worst-case math including the reserved blocks percentage as I understand it: 11G - 9.9G - 0.05 * 11G = 0.55G. It also doesn't tell me why, after deleting 300-500M of files, I still can't create any file and the free space still shows as 0. Remember that in any case I'm logged in as root, so I understand I should be able to use the reserved space (not a good idea, but as a matter of principle).

Any clue what's happening here? Could it be related (and how) to the huge amount of disk activity, both in data size and in file creations/deletions, that occurred during the distribution upgrade, or to the /usr migration in Mageia 3? (I can't see why it would be, as /usr is on the same filesystem as /, not a separate partition.)

Best Answer

You're playing with the reserved blocks count and trying to check it against the free space that df reports with rounded values.

10645080 - 10645080 * 0.05 = 10112826 1K-blocks are available to normal users,

and you have 10353576 1K-blocks used, so an Available figure of 0 is normal.
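To spell that out with the exact figures from your df and dumpe2fs output (my own arithmetic, using the 4 KiB block size that dumpe2fs reports):

# free space on the filesystem, in 1K-blocks: total - used, from df
echo $(( 10645080 - 10353576 ))   # 291504, i.e. ~285 MiB
# space reserved for root, in 1K-blocks: 136853 blocks of 4 KiB, from dumpe2fs
echo $(( 136853 * 4 ))            # 547412, i.e. ~535 MiB

Available for non-root users is free minus reserved, which is negative here, so df clamps it to 0. That is also why tune2fs -m 0 made exactly 291504 1K-blocks reappear in the Available column.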

Also, df rounds values: you have 10645080 1K-blocks, and df -h rounds that up to 11G.

Really it is 10645080 / 1024 / 1024 = 10.15G, not 11G.
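If you want to reproduce that conversion (plain shell arithmetic, nothing filesystem-specific):

awk 'BEGIN { printf "%.2fG\n", 10645080/1024/1024 }'   # prints 10.15G; df -h rounds this up to 11G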

You can run df -BM to see the sizes in MB; at that granularity the rounding error is much smaller.
