Linux – Do tmpfs and devtmpfs share the same memory region

devices, filesystems, linux, linux-kernel, tmpfs

My system disk usage is like this:

# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   50G   39G   12G  77% /
devtmpfs               5.8G     0  5.8G   0% /dev
tmpfs                  5.8G  240K  5.8G   1% /dev/shm
tmpfs                  5.8G   50M  5.8G   1% /run
tmpfs                  5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/mapper/rhel-home  1.3T  5.4G  1.3T   1% /home
/dev/sda2              497M  212M  285M  43% /boot
/dev/sda1              200M  9.5M  191M   5% /boot/efi
tmpfs                  1.2G   16K  1.2G   1% /run/user/1200
tmpfs                  1.2G   16K  1.2G   1% /run/user/1000
tmpfs                  1.2G     0  1.2G   0% /run/user/0

I have two questions about devtmpfs and tmpfs:
(1)

devtmpfs               5.8G     0  5.8G   0% /dev
tmpfs                  5.8G  240K  5.8G   1% /dev/shm
tmpfs                  5.8G   50M  5.8G   1% /run
tmpfs                  5.8G     0  5.8G   0% /sys/fs/cgroup

All of the above mounts show a size of 5.8G. Do they share the same memory space?

(2)

tmpfs                  1.2G   16K  1.2G   1% /run/user/1200
tmpfs                  1.2G   16K  1.2G   1% /run/user/1000
tmpfs                  1.2G     0  1.2G   0% /run/user/0

Does each user have his own dedicated memory space in /run/user, rather than a shared one?

Best Answer

For all the tmpfs mounts, "Avail" is an artificial limit. The default size for a tmpfs mount is half your RAM, and it can be adjusted at mount time (see man mount, and scroll to the tmpfs section).
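
For example, the limit can be set explicitly at mount time and adjusted later with a remount. A minimal sketch, assuming a made-up mount point /mnt/scratch:

# mount -t tmpfs -o size=512M tmpfs /mnt/scratch
# mount -o remount,size=256M /mnt/scratch

The remount changes the limit in place, without disturbing files already stored on the mount.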

The mounts don't share the same space, in the sense that if you filled the /dev/shm mount, /dev would not show any more "Used", and it would not necessarily stop you from writing data to /dev.
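
You can watch the independent accounting directly. A sketch (the 100M figure is arbitrary):

# head -c 100M /dev/zero > /dev/shm/fill
# df -h /dev/shm /dev
# rm /dev/shm/fill

"Used" rises to 100M on /dev/shm only; /dev still reports 0.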

(Someone could contrive tmpfs mounts that share space by bind-mounting from a single tmpfs, but that's not how any of these mounts are set up by default.)
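
Such a contrived setup could look like this. A sketch with made-up paths:

# mount -t tmpfs -o size=1G tmpfs /mnt/pool
# mkdir /mnt/pool/a /mnt/pool/b /mnt/a /mnt/b
# mount --bind /mnt/pool/a /mnt/a
# mount --bind /mnt/pool/b /mnt/b

/mnt/a and /mnt/b now draw on the same 1G pool, so filling one reduces "Avail" on the other.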

They do share the same space, in that they're both backed by the same system memory. If you tried to fill both /dev/shm and /dev, you would be allocating space equal to your physical RAM. Assuming you have swap space, this is entirely possible, but it's generally not a good idea and would end poorly.


This doesn't fit well with the idea of having multiple user-accessible tmpfs mounts, e.g. /dev/shm + /tmp on many systems. It would arguably be better if the two large mounts shared the same space. (POSIX SHM is literally an interface for opening files on a user-accessible tmpfs.)
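
On typical Linux systems, glibc implements shm_open() as file operations under /dev/shm, so shared-memory objects show up there as ordinary files. A sketch (the object name "demo" is made up):

# head -c 1M /dev/zero > /dev/shm/demo
# df -h /dev/shm

The file /dev/shm/demo is the same backing store that shm_open("/demo", ...) would open, and "Used" grows accordingly.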

/dev, /run, and /sys/fs/cgroup are system directories. They should stay tiny and not be used for sizeable data, so they shouldn't cause a problem. Debian (8) seems to be a bit better at setting limits for them; on a 500MB system I see them limited to 10, 100, and 250 MB respectively, plus another 5 MB for /run/lock.
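
To see what limits your own distribution applies, findmnt from util-linux can summarize the tmpfs mounts:

# findmnt --df --types tmpfs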

/run has about 2MB used on my systems. systemd-journald accounts for a substantial part of it, and by default the journal may grow to 10% of "Avail" (the RuntimeMaxUse option), which doesn't fit my model.

I would bet that's why you've got 50MB there. Allowing the equivalent of 5% of physical RAM for log files... it's not a big problem in itself, but it's not pretty, and I'd call it a mistake or an oversight. It would be better if the cap were set on the same order as that 2MB mark.
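
For what it's worth, journald accepts an explicit cap in /etc/systemd/journald.conf. A sketch, using the 2MB figure from above:

[Journal]
RuntimeMaxUse=2M

Restart with systemctl restart systemd-journald to apply it.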

At the moment this suggests the size of /run should be set manually on every system, if you want to prevent death by a thousand bloats. Even 2% (from my Debian example) seems presumptuous.
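
On a running system the resize itself is a one-liner (64M here is an arbitrary figure); making it persist across boots depends on your init setup:

# mount -o remount,size=64M /run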
