"Kernel" is a bit of a misnomer. The Linux kernel comprises several processes/threads plus its loaded modules (see lsmod), so to get a complete picture you'd need to look at the whole ball of wax and not just a single component.
Incidentally, slabtop on my system shows:
Active / Total Size (% used) : 173428.30K / 204497.61K (84.8%)
The man page for slabtop also had this to say:
The slabtop statistic header is tracking how many bytes of slabs are being used and is not a measure of physical memory. The 'Slab' field in the /proc/meminfo file is tracking information about used slab physical memory.
Dropping caches
Dropping my caches as @derobert suggested in the comments under your question does the following for me:
$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
$
Afterwards, slabtop shows:
Active / Total Size (% used) : 61858.78K / 90524.77K (68.3%)
Sending a 3 does the following: frees the pagecache, dentries, and inodes. I discuss this more in the U&L Q&A titled: "Are there any ways or tools to dump the memory cache and buffer?". So roughly 110MB of my space was being used just to maintain the information regarding the pagecache, dentries, and inodes.
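As a sanity check, that ~110MB figure is just the delta between the two slabtop "Active" readings above; a quick sketch in shell arithmetic (integer division, so it rounds down):

```shell
# The two "Active" sizes from the slabtop output above, in kB.
before=173428   # before drop_caches
after=61858     # after drop_caches
# Integer division rounds down, so this prints 108 MB (roughly 110MB).
echo "$(( (before - after) / 1024 )) MB reclaimed"
```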
Additional Information
So how much RAM is my Kernel using?
This picture is a bit foggier to me, but here are the things that I "think" we know.
Slab
We can get a snapshot of the slab usage by pulling this information out of /proc/meminfo:
$ grep Slab /proc/meminfo
Slab: 100728 kB
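For readability, that kB figure can be converted to MB with a one-liner (a sketch; the field name is as shown in /proc/meminfo):

```shell
# Match the "Slab:" line of /proc/meminfo (value is in kB) and print MB.
awk '/^Slab:/ {printf "%.1f MB of slab\n", $2 / 1024}' /proc/meminfo
```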
Modules
We can also get a size value for kernel modules (it's unclear whether this is their on-disk size or their size in RAM) by pulling these values from /proc/modules:
$ awk '{print $1 " " $2 }' /proc/modules | head -5
cpufreq_powersave 1154
tcp_lp 2111
aesni_intel 12131
cryptd 7111
aes_x86_64 7758
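To reduce that to one aggregate number, here is a sketch that sums the size column (with the same caveat about what that size actually represents):

```shell
# Sum column 2 of /proc/modules (module size in bytes) and report kB.
awk '{sum += $2} END {printf "%.1f kB in modules\n", sum / 1024}' /proc/modules
```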
Slabinfo
Many of the details about the slab caches are accessible in /proc/slabinfo:
$ head -5 /proc/slabinfo
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
nf_conntrack_ffff8801f2b30000 0 0 320 25 2 : tunables 0 0 0 : slabdata 0 0 0
fuse_request 100 125 632 25 4 : tunables 0 0 0 : slabdata 5 5 0
fuse_inode 21 21 768 21 4 : tunables 0 0 0 : slabdata 1 1 0
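As a rough sketch, we can multiply <num_objs> by <objsize> per cache to see which caches are the biggest consumers (reading /proc/slabinfo typically requires root, and this ignores per-slab padding, so it underestimates slightly):

```shell
# Estimate each cache's footprint as <num_objs> * <objsize> (columns 3
# and 4, skipping the two header lines) and list the five largest.
awk 'NR > 2 {printf "%-25s %12d\n", $1, $3 * $4}' /proc/slabinfo 2>/dev/null |
    sort -k2 -rn | head -5
```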
Dmesg
When your system boots there is a line that reports memory usage of the Linux kernel just after it's loaded.
$ dmesg | grep Memory:
[ 0.000000] Memory: 7970012k/9371648k available (4557k kernel code, 1192276k absent, 209360k reserved, 7251k data, 948k init)
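The pieces of that line attributable to the kernel image itself (code, data, and init) can be summed; this is a sketch against the sample line above, and the exact format of this line varies between kernel versions:

```shell
# Sum the resident kernel-image pieces (code + data + init) from the
# boot line quoted above. Uses GNU grep's -o; awk coerces "4557k" to 4557.
line='[    0.000000] Memory: 7970012k/9371648k available (4557k kernel code, 1192276k absent, 209360k reserved, 7251k data, 948k init)'
echo "$line" |
    grep -o '[0-9]*k \(kernel code\|data\|init\)' |
    awk '{sum += $1} END {print sum "k of kernel image"}'
```

On the sample line this comes to 12756k, i.e. about 12.5MB pinned by the kernel image.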
What sets the size of the tmpfs? (On my machine it resides in /dev/shm) I can see its entry in /etc/fstab, but no notation of its size.
The kernel documentation covers this underneath the mount options:
size: The limit of allocated bytes for this tmpfs instance. The default is half of your physical RAM without swap. If you oversize your tmpfs instances the machine will deadlock
(Emphasis mine)
Also, what happens if it gets full?
As referenced above if you've committed too much to tmpfs your machine will deadlock. Otherwise (if it's just reached its hard limit) it returns ENOSPC just like any other filesystem.
Finally, which takes priority in memory: tmpfs or applications? I.e., if tmpfs is sufficiently full (say 40% of physical memory) and I have programs that require 70% of physical memory, which one gets priority?
It's similar to the contention between programs. The pages most used will tend to be in physical memory while the least used pages will tend to be swapped out.
If you need to ensure the pages are always in physical memory you can use ramfs, which is similar but never swaps and does not enforce a size limit.
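To make the contrast concrete, here is a hypothetical /etc/fstab sketch (the mount points and options are assumptions, not from the question):

```
# tmpfs honours size= and its pages may be swapped out under pressure;
# ramfs does not enforce size= and its pages are never swapped.
tmpfs  /mnt/scratch  tmpfs  size=40%,mode=1777  0  0
ramfs  /mnt/pinned   ramfs  defaults            0  0
```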
Best Answer
If you mount a tmpfs instance with a percentage, it will take that percentage of the system's physical RAM. For instance, if you have 2GB of physical RAM and you mount a tmpfs with 50%, your tmpfs will have a size of 1GB. In your scenario, you add physical RAM to your system, let's say another 2GB, so that your system has 4GB of physical RAM. When mounting the tmpfs it will now have a size of 2GB.

When mounting multiple instances of tmpfs, each with 50% set, it will work. If both tmpfs instances were filled completely, the system would swap out the lesser-used pages. If swap space is full too, you will get "No space left on device" errors.

Edit: tmpfs only uses the amount of memory that is actually taken, not the full 50%. So, if only 10MB of that 1GB is taken, your tmpfs instance only occupies those 10MB. The space is not reserved; it is allocated dynamically. With multiple instances of 50%, the first one that needs memory gets memory. The system swaps out the lesser-used pages whether or not the 50% is occupied. The tmpfs instance is not aware of whether it uses physical RAM or swap space. You can mount a tmpfs of 100GB if you want and it will work.

I assume that you shut the system down before adding RAM, so the tmpfs is remounted at startup anyway. If you add RAM while the system runs, you will fry the RAM, the motherboard, and most likely your hand. I can't really recommend that :-)