It's perfectly okay to use some directory in /run as long as you have the appropriate rights on it. In some modern distros, /tmp is already a virtual file system in memory, or a symlink to a directory inside /run. If this is your case (you can check in /etc/fstab, or by running mount), you could use /tmp as your temporary directory.
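A quick way to check from the shell (the paths here are the usual ones; verify on your own system):

```shell
# Show the filesystem type backing /tmp; "tmpfs" means it lives in RAM
df -T /tmp

# Or query the kernel's mount table directly; no output means /tmp is
# not a separate mount point on this machine
grep ' /tmp ' /proc/mounts || true
```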
Also, don't get confused by the article from Debian. The shm_* functions are used to create shared memory segments for inter-process communication (IPC). With those functions, you can share a fragment of memory between two or more processes so they can communicate or collaborate using the same data. The processes have the segment of memory attached in their own address space and can read and write there as usual; the kernel deals with the complexity. Those functions are not available as shell commands (and wouldn't be very useful in a shell context). For further information, have a look at man 7 shm_overview. The point of the article is that no program should directly manage the pseudo-files representing shared segments, but should instead use the appropriate functions to create, attach and delete them.
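You can actually see this from the shell: POSIX shared memory objects created with shm_open() show up as files under /dev/shm (the names depend on whatever applications happen to be running), which is exactly why the article warns against manipulating those pseudo-files by hand:

```shell
# List the current shared memory objects; each entry is a segment some
# process created with shm_open() (or a plain file someone put there)
ls -l /dev/shm
```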
Curious: as you're running this application, what does df -h /dev/shm show your RAM usage to be?
tmpfs
By default it's typically set up with 50% of whatever amount of RAM the system physically has. This is documented on kernel.org, under the filesystem documentation for tmpfs. It's also mentioned in the mount man page.
excerpt from mount man page
The maximum number of inodes for this instance. The default is half of
the number of your physical RAM pages, or (on a machine with
highmem) the number of lowmem RAM pages, whichever is the lower.
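The 50% figure is only a default; it can be changed with the size= mount option. A sketch (the 6G value is arbitrary, the remount needs root, and it only lasts until reboot unless also recorded in /etc/fstab):

```shell
# Show the current tmpfs size and usage for /dev/shm
df -h /dev/shm

# Resize it on the fly; size= accepts k/m/g suffixes or a percentage of RAM.
# Commented out here since it requires root:
#   mount -o remount,size=6G /dev/shm
```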
confirmation
On my laptop with 8GB of RAM I have the following setup for /dev/shm:
$ df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 4.4M 3.9G 1% /dev/shm
What's going on?
I think what's happening is that, in addition to being allocated 50% of your RAM to start, you're essentially consuming the entire 50% over time and pushing your /dev/shm space into swap, along with the other 50% of RAM.
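One rough way to watch for this while the application runs (Linux-specific /proc/meminfo fields): Shmem counts tmpfs and shared-memory pages still resident in RAM, so tmpfs data pushed out to swap shows up as a drop in SwapFree rather than in Shmem:

```shell
# Shmem: tmpfs + shared-memory pages currently resident in RAM
# SwapTotal/SwapFree: how much room is left before/after overflow to disk
grep -E '^(Shmem|SwapTotal|SwapFree):' /proc/meminfo

# And the tmpfs mount's own view of its usage
df -h /dev/shm
```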
Note that one other characteristic of tmpfs vs. ramfs is that tmpfs can be pushed into swap if needed:
excerpt from geekstuff.com
Table: Comparison of ramfs and tmpfs
Experimentation                          Tmpfs                Ramfs
---------------                          -----                -----
Fill maximum space and continue writing  Will display error   Will continue writing
Fixed Size                               Yes                  No
Uses Swap                                Yes                  No
Volatile Storage                         Yes                  Yes
At the end of the day it's a filesystem implemented in RAM, so I would expect it to act a little like both. What I mean by this is that as files and directories are created, some of the physical pages of memory are used for the inode table, and some for the actual space consumed by those files and directories.
Typically when you delete a file on a HDD, you don't actually erase the physical space; you just clear the entries in the inode table, marking the space consumed by that file as available.
So from the RAM's perspective, the space consumed by the files is just dirty pages in memory, and the kernel will dutifully swap them out over time.
It's unclear if tmpfs does anything special to clean up the actual RAM used by the filesystem it's providing. I saw mentions in several forums of people finding that it took upwards of 15 minutes for their system to "reclaim" space for files they had deleted from /dev/shm.
Perhaps this paper I found on tmpfs, titled "tmpfs: A Virtual Memory File System", will shed more light on how it's implemented at the lower level and how it functions with respect to the VMM. The paper was written specifically for SunOS but might hold some clues.
experimentation
The following contrived tests seem to indicate that /dev/shm is able to clean itself up.
experiment #1
Create a directory with a single file inside it, and then delete the directory 1000 times.
initial state of /dev/shm
$ df -k /dev/shm
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 3993744 5500 3988244 1% /dev/shm
fill it with files
$ for i in `seq 1 1000`;do mkdir /dev/shm/sam; echo "$i" \
> /dev/shm/sam/file$i; rm -fr /dev/shm/sam;done
final state of /dev/shm
$ df -k /dev/shm
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 3993744 5528 3988216 1% /dev/shm
experiment #2
Create a directory with a single 50MB file inside it, and then delete the directory 300 times.
fill it with 50MB files of random garbage
Note that /dev/random blocks once the kernel's entropy pool runs dry, so, as the dd output below shows, only a handful of bytes were actually written to each file; use /dev/urandom if you really want the full 50MB.
$ start_time=`date +%s`
$ for i in `seq 1 300`;do mkdir /dev/shm/sam; \
dd if=/dev/random of=/dev/shm/sam/file$i bs=52428800 count=1 > \
/dev/shm/sam/file$i.log; rm -fr /dev/shm/sam;done \
&& echo run time is $(expr `date +%s` - $start_time) s
...
8 bytes (8 B) copied, 0.247272 s, 0.0 kB/s
0+1 records in
0+1 records out
9 bytes (9 B) copied, 1.49836 s, 0.0 kB/s
run time is 213 s
final state of /dev/shm
Again there was no noticeable increase in the space consumed by /dev/shm.
$ df -k /dev/shm
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 3993744 5500 3988244 1% /dev/shm
conclusion
I didn't notice any discernible effects from adding files and directories to my /dev/shm. Running the above multiple times didn't seem to have any effect on it either. So I don't see any issue with using /dev/shm in the manner you've described.
Best Answer
Although I don't think it's causing the problem here, your fstab entry is not 100% complete: you're missing the defaults in the mount options field. It should read:
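The original entry isn't reproduced here; purely as an illustration, a standard /dev/shm line with the defaults option in the fourth (options) field looks like this:

```
tmpfs    /dev/shm    tmpfs    defaults    0 0
```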
That said, you will also need to change an init script for the fstab entry to take effect. See this bug report for more information, but basically you need to change /etc/rc.d/rc.sysinit from

to

or add

mount -o remount tmpfs

to /etc/rc.local.

Note: Based on the age of the question, I'm assuming RHEL 6.x-aged Oracle Linux is the distribution in use.