There are two reasons this is true.
First, for one reason or another, OS writers still report free space in terms of a base 2 system, while hard drive manufacturers report free space in terms of a base 10 system. For example, an OS writer will call 1024 bytes (2^10 bytes) a kilobyte, and a hard drive manufacturer will call 1000 bytes a kilobyte. This difference is pretty minor for kilobytes, but once you get up to terabytes, it's pretty significant. An OS writer will call 1099511627776 bytes (2^40 bytes) a terabyte, and a hard drive manufacturer will call 1000000000000 bytes a terabyte.
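The gap compounds with each prefix step. As a quick sketch in shell arithmetic (nothing here touches a real disk):

```shell
# A "1 TB" drive as sold (base 10) vs. the OS's terabyte (base 2).
tb_decimal=$(( 10 ** 12 ))   # 1000000000000 bytes, the manufacturer's TB
tb_binary=$(( 1 << 40 ))     # 1099511627776 bytes, 2^40
# Integer percentage of a binary terabyte that the decimal terabyte covers:
echo $(( tb_decimal * 100 / tb_binary ))   # prints 90
```

So a drive sold as 1 TB shows up as roughly 0.91 TiB, about a 9% apparent shortfall before the filesystem writes a single byte.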
These two different ways of talking about sizes frequently lead to a lot of confusion.
There is a spottily supported IEC standard for binary prefixes. User interfaces designed with the new prefixes in mind will show TiB, GiB (or more generally XiB) when showing sizes with a base 2 prefix system.
Secondly, df -h reports how much space is available for your use. All filesystems have to write housekeeping information to keep track of things for you. This information takes up some of the space on your drive. Not generally very much, but some. That also accounts for some of the seeming loss you're seeing.
Now that you've edited your post to make it clear that none of the above actually answers your question, I'll take a stab at answering it...
Different filesystems use different amounts of space for housekeeping information and report that space usage in different ways.
For example, ext2 divides the disk up into cylinder groups, then pre-allocates space in each cylinder group for inodes and free space maps. ext3 does the same thing since it's basically ext2 + journaling. And ext4 also does the exact same thing since it's a fairly straightforward (and almost backwards compatible) modification of ext3. Since this metadata overhead is fixed at filesystem creation or resize, it's not reported as 'used' space. I suspect this is also because the cylinder group metadata is at fixed places on the disk, and so is simply implied as being used, and hence not marked off or accounted for in the free space maps.
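This pre-allocation is easy to see on a fresh filesystem: a loopback image formatted as ext4 already has inodes and consumed blocks before any file is written. A sketch, assuming mkfs.ext4 and dumpe2fs (from e2fsprogs) are installed; the /tmp path is arbitrary:

```shell
# Create a 64 MiB sparse file and format it as ext4 (no root needed).
truncate -s 64M /tmp/ext4-demo.img
mkfs.ext4 -F -q /tmp/ext4-demo.img
# The superblock already shows pre-allocated inodes and used blocks:
dumpe2fs -h /tmp/ext4-demo.img 2>/dev/null \
    | grep -E 'Inode count|Block count|Free blocks'
rm /tmp/ext4-demo.img
```

Free blocks will come out noticeably lower than the block count, even though the filesystem is empty.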
But reiserfs does not pre-allocate any metadata of any kind. It has no inode limit that's fixed on filesystem creation because it allocates all of its inodes on-the-fly like it does with data blocks. It, at most, needs some structures describing the root directory and a free space map of some sort. So it uses much less space when it has nothing in it.
But this means that reiserfs will take up more space as you add files because it will be allocating meta-data (like inodes) as well as the actual data space for the file.
I do not know exactly how jfs and btrfs track metadata space usage, but I suspect they track it more like reiserfs does. vfat in particular has no inode concept at all. Its free space map (the infamous FAT table, whose size is fixed at filesystem creation) stores much of the data an inode would, and the directory entry (which is dynamically allocated) stores the rest.
From man df:
Display values are in units of the first available SIZE from --block-size, and the DF_BLOCK_SIZE, BLOCK_SIZE and BLOCKSIZE environment variables. Otherwise, units default to 1024 bytes (or 512 if POSIXLY_CORRECT is set).
SIZE may be (or may be an integer optionally followed by) one of following: KB 1000, K 1024, MB 1000*1000, M 1024*1024, and so on for G, T, P, E, Z, Y.
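You can see both unit systems from the same filesystem, following the SIZE rules quoted above (GNU coreutils df):

```shell
# Same filesystem, four unit systems (GNU coreutils df):
df --block-size=MB /   # multiples of 1000*1000 bytes
df --block-size=M /    # multiples of 1024*1024 bytes
df -h /                # human-readable, powers of 1024
df -H /                # human-readable, powers of 1000
```

The -h and -H numbers will disagree by the same factor discussed in the first answer: the larger the drive, the bigger the apparent gap.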
The -k switch is equal to --block-size=1K, which means that the numbers are in multiples of 1024 bytes.
Best Answer
97675600 / 1825182752 = 5%
This is the default space reserved for the root user.
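The 5% figure can be checked directly from the two numbers above with integer shell arithmetic:

```shell
# Numbers from the df output above: reserved blocks vs. total blocks.
reserved=97675600
total=1825182752
echo $(( reserved * 100 / total ))   # prints 5
```

On ext filesystems this reservation can be inspected with tune2fs -l and changed with tune2fs -m <percent> (e.g. tune2fs -m 1 on a large data partition where root does not need a safety margin).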