I am repeating tens of thousands of similar operations in /dev/shm, each of which creates a directory, writes files into it, and then removes it. My assumption used to be that since I was creating directories and removing them in place, memory consumption had to stay low. However, usage turned out to be rather high and eventually caused the system to run out of memory. So my question is: with operations like
mkdir /dev/shm/foo
touch /dev/shm/foo/bar
[edit] /dev/shm/foo/bar
....
rm -rf /dev/shm/foo
Will this eventually exhaust memory? And if it does, why, given that the files appear to be removed in place?
Note: these operations are repeated tens of thousands of times.
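Put concretely, the workload amounts to something like the following loop (a sketch only: the directory and file names come from the commands above, the iteration count here is scaled down from the tens of thousands described, and the [edit] step is stood in for by a plain write):

```shell
# Sketch of the workload from the question. Names (foo, bar) are taken
# from the commands above; the iteration count is scaled down here, and
# the [edit] step is stood in for by a simple write.
for i in $(seq 1 1000); do
    mkdir /dev/shm/foo
    touch /dev/shm/foo/bar
    echo "iteration $i" > /dev/shm/foo/bar
    rm -rf /dev/shm/foo
done
```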
Best Answer
Curious, as you're running this application, what does
df -h /dev/shm
show your RAM usage to be?

tmpfs
By default it's typically set up with 50% of whatever amount of RAM the system physically has. This is documented on kernel.org, under the filesystem documentation for tmpfs. It's also mentioned in the mount man page.

excerpt from mount man page
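The 50% figure is just the default for the tmpfs size mount option; assuming a Linux system, the current cap can be inspected with df, and with root privileges it can be changed on the fly (the 1G value below is purely illustrative):

```shell
# Show the current cap on /dev/shm (by default, 50% of physical RAM):
df -h /dev/shm

# The cap is a tmpfs mount option; with root it can be changed via a
# remount (the size value here is purely illustrative):
#   mount -o remount,size=1G /dev/shm
```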
confirmation
On my laptop with 8GB RAM I have the following setup for /dev/shm:

What's going on?
I think what's happening is that in addition to being allocated 50% of your RAM to start, you're essentially consuming the entire 50% over time and are pushing your /dev/shm space into swap, along with the other 50% of RAM.

Note that one other characteristic of tmpfs vs. ramfs is that tmpfs can be pushed into swap if needed:

excerpt from geekstuff.com
At the end of the day it's a filesystem implemented in RAM, so I would expect it to act a little like both. What I mean by this is that as files/directories are deleted, you're using some of the physical pages of memory for the inode table, and some for the actual space consumed by those files/directories.
Typically on a HDD, deleting a file doesn't actually free up the physical space; it only updates the entries in the inode table, marking the space that was consumed by that file as available again.
So from the RAM's perspective the space consumed by the files is just dirty pages in memory. So it will dutifully swap them out over time.
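You can watch this from the RAM side via the Shmem counter in /proc/meminfo, which accounts for tmpfs pages (assuming a reasonably recent Linux kernel; the file name below is made up for illustration):

```shell
# Shmem in /proc/meminfo counts pages backing tmpfs (and SysV shared
# memory). It should rise while a tmpfs file exists and fall again
# once the file is deleted. The file name "blob" is illustrative.
grep '^Shmem:' /proc/meminfo
dd if=/dev/zero of=/dev/shm/blob bs=1M count=10 2>/dev/null
grep '^Shmem:' /proc/meminfo
rm /dev/shm/blob
grep '^Shmem:' /proc/meminfo
```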
It's unclear if tmpfs does anything special to clean up the actual RAM used by the filesystem it provides. I saw mention in several forums that people found it was taking upwards of 15 minutes for their system to "reclaim" space for files they had deleted in /dev/shm.

Perhaps this paper I found on tmpfs, titled tmpfs: A Virtual Memory File System, will shed more light on how it's implemented at the lower level and how it functions with respect to the VMM. The paper was written specifically for SunOS but might hold some clues.

experimentation
The following contrived tests seem to indicate that /dev/shm is able to clean itself up.

experiment #1
Create a directory with a single file inside it, and then delete the directory 1000 times.
initial state of /dev/shm
fill it with files
final state of /dev/shm
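The commands behind this test were roughly as follows (a reconstruction, not the exact script; the directory and file names are made up):

```shell
# Reconstruction of experiment #1: create a directory with a single
# file inside it, then delete the directory, 1000 times, comparing
# /dev/shm usage before and after. Names (sample, file) are
# illustrative.
df -k /dev/shm
for i in $(seq 1 1000); do
    mkdir /dev/shm/sample
    touch /dev/shm/sample/file
    rm -rf /dev/shm/sample
done
df -k /dev/shm
```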
experiment #2
Create a directory with a single 50MB file inside it, and then delete the directory 300 times.
fill it with 50MB files of random garbage
final state of /dev/shm
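Likewise, experiment #2 can be reconstructed along these lines (again a sketch, not the exact commands; each 50MB file is deleted before the next iteration, so peak usage stays around 50MB):

```shell
# Reconstruction of experiment #2: create a directory holding one
# 50MB file of random data, delete the directory, and repeat 300
# times. Names are illustrative.
df -k /dev/shm
for i in $(seq 1 300); do
    mkdir /dev/shm/sample
    dd if=/dev/urandom of=/dev/shm/sample/file bs=1M count=50 2>/dev/null
    rm -rf /dev/shm/sample
done
df -k /dev/shm
```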
Again there was no noticeable increase in the space consumed by /dev/shm.

conclusion
I didn't notice any discernible effects from adding files and directories in my /dev/shm. Running the above multiple times didn't seem to have any effect on it either. So I don't see any issue with using /dev/shm in the manner you've described.