Linux – How safe is it to increase tmpfs to more than physical memory

Tags: linux, swap, tmpfs

My server has 2GB RAM and 120GB SSD, plus some RAID arrays for storage. OS is Debian 8 (Linux 3.16).

I have a MySQL intense application that has tmpdir = /run/mysqld, which is a tmpfs configured by Debian through /etc/default/tmpfs:

# Size limits.  Please see tmpfs(5) for details on how to configure
# tmpfs size limits.
TMPFS_SIZE=40%VM

This used to be 20%VM, which is about 384M. I ran into several "No space left on device" errors, so I increased it to 40%VM, but even at about 763M it is still too small.
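Before changing anything, the current size and usage of the tmpfs can be checked directly with standard tools; the /proc/mounts lookup also shows the effective size= option:

```shell
# Current size, usage and free space of the tmpfs on /run
df -h /run

# Effective mount options, including the size= limit
grep -w /run /proc/mounts || echo "/run is not a separate mount here"
```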

Now I know I should add more RAM, but out of curiosity, I'd like to know the limits here.

  • /dev/sdd1 is mounted on / and has about 50GB free; it is fairly fast (Samsung 850 EVO SSD)
  • /dev/sdd5 is my swap partition, it is 3.7G (fdisk type ID is 82)
  • TMPFS_SIZE is set to 40%VM, meaning /run is 763M

Now I know that tmpfs can swap, which is fine with me. I want MySQL to write to RAM whenever possible, but if it needs more memory, I can allow the system to swap it on the SSD.
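Whether tmpfs pages are actually being swapped out can be observed without any special tooling; /proc/swaps exists on any Linux kernel:

```shell
# List active swap areas; the Used column grows as tmpfs pages
# get pushed out to swap under memory pressure
cat /proc/swaps
```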

So with my setup, can I push /run to be:

  • 300M big? Yes. That was the default.
  • 1.5GB big? Yes, tried, MySQL used up to 1.3GB on it and the system worked like a charm. But that's still less than half physical memory + swap partition.
  • 2.5GB big? This is more than physical memory, but less than half the physical memory + my swap partition.
  • 4GB big? This would just fit within half the physical memory plus swap.
  • More? like 10GB? can it use free space on / to swap more?

I'm guessing the rule of thumb for safety is to have TMPFS_SIZE not larger than swap + half physical memory. Can I go beyond this without increasing the swap partition?
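That rule of thumb is easy to compute; here is a tiny helper (my own naming, not anything standard) that prints swap + half of RAM, both given in MiB:

```shell
# Hypothetical helper: conservative tmpfs ceiling = swap + RAM/2 (MiB)
tmpfs_ceiling_mib() {
    local ram_mib=$1 swap_mib=$2
    echo $(( swap_mib + ram_mib / 2 ))
}

# 2 GB RAM and a 3.7 GB (~3789 MiB) swap partition, as on this server
tmpfs_ceiling_mib 2048 3789    # prints 4813
```

So on this machine the conservative ceiling comes out to roughly 4.7GB, which matches the "4GB tightly fits" estimate above.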

Also, is it possible to put 200%VM in /etc/default/tmpfs? I've read tmpfs(5) but couldn't tell whether values above 100% are allowed.

Last, should I do it in /etc/fstab instead and don't touch /etc/default/tmpfs?
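For reference, if one goes the fstab route, an entry along these lines would override the size at boot (the size= value here is purely illustrative):

```
# Example /etc/fstab entry; the size= value is illustrative
tmpfs   /run   tmpfs   nosuid,noexec,size=2800M   0   0
```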

(So far I've only done this with mount -o remount; I have not rebooted the server yet.)

Edit: regarding the last question, I know the size can be overridden via /etc/fstab (see the quote from the man page below); however, I wanted to know the best practice, since I've never touched anything in /etc/default before.

More complex mount options may be used by the creation of a suitable entry in /etc/fstab.

Best Answer

I figured I could just test it, so I ran:

sudo mount -o remount,size=2800M /run

Worked like a charm:

Filesystem      Size  Used Avail Use% Mounted on
tmpfs           2.8G   45M  2.7G   2% /run

So I filled it a bit:

fallocate -l 1G /run/test.img
fallocate -l 1G /run/test2.img
fallocate -l 500M /run/test3.img

Result:

Filesystem      Size  Used Avail Use% Mounted on
tmpfs           2.8G  2.6G  208M  93% /run

System is still up and running. Swap availability dropped, which shows the tmpfs pages were being swapped out:

(graph: swap availability dropping over time)

  • 17:10: create 2.5 GB of files in /run
  • 17:20: remove the 500M file

Available swap is reduced by roughly the amount taken up by /run.
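The same drop can be read directly from /proc/meminfo, without a monitoring graph:

```shell
# SwapFree shrinks by roughly the amount of tmpfs data pushed to swap
grep -E '^(SwapTotal|SwapFree):' /proc/meminfo
```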

I'd test 10GB in a VM first, because I don't know whether the kernel would refuse the remount or behave unexpectedly.

I'm still looking for an actual answer, but the pragmatic way showed it works.
