Kernel is a bit of a misnomer. The Linux kernel is made up of several processes/threads plus the loaded modules (see lsmod), so to get a complete picture you'd need to look at the whole ball of wax, not just a single component.
Incidentally, mine shows this in slabtop:
Active / Total Size (% used) : 173428.30K / 204497.61K (84.8%)
The man page for slabtop also had this to say:
The slabtop statistic header is tracking how many bytes of slabs are being used and is not a measure of physical memory. The 'Slab' field in the /proc/meminfo file is tracking information about used slab physical memory.
Dropping caches
Dropping my caches as @derobert suggested in the comments under your question does the following for me:
$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
$
Active / Total Size (% used) : 61858.78K / 90524.77K (68.3%)
Sending a 3 does the following: frees the pagecache, dentries, and inodes. I discuss this more in the U&L Q&A titled Are there any ways or tools to dump the memory cache and buffer?. So roughly 110 MB of my space was being used just to maintain the info regarding the pagecache, dentries, and inodes.
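As a sanity check, the reclaimed amount can be computed directly from the two slabtop active-size figures above (a quick awk sketch):

```shell
# Active slab size before minus after dropping caches, converted from KiB to MiB
awk 'BEGIN { printf "%.1f MB reclaimed\n", (173428.30 - 61858.78) / 1024 }'
# → 109.0 MB reclaimed
```

which lines up with the ~110 MB figure.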
Additional Information
So how much RAM is my Kernel using?
This picture is a bit foggier to me, but here are the things that I "think" we know.
Slab
We can get a snapshot of the slab usage by pulling this information out of /proc/meminfo:
$ grep Slab /proc/meminfo
Slab: 100728 kB
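If you want that figure in something friendlier than kB, a small awk one-liner works; the sample value below is the one shown above (on a live system, run the awk program against /proc/meminfo directly):

```shell
# Convert the Slab: line of /proc/meminfo from kB to MB
# (sample input shown; on a real system: awk '/^Slab:/ ...' /proc/meminfo)
printf 'Slab: 100728 kB\n' | awk '/^Slab:/ { printf "%.1f MB\n", $2 / 1024 }'
# → 98.4 MB
```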
Modules
We can also get a size value for each kernel module (unclear whether this is its size on disk or in RAM) by pulling these values from /proc/modules:
$ awk '{print $1 " " $2 }' /proc/modules | head -5
cpufreq_powersave 1154
tcp_lp 2111
aesni_intel 12131
cryptd 7111
aes_x86_64 7758
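The second field is a byte count, so summing it gives a rough total for all loaded modules. A sketch over the sample rows above (on a live system, feed the awk program /proc/modules itself):

```shell
# Hypothetical /proc/modules excerpt (name, size in bytes), taken from above
modules='cpufreq_powersave 1154
tcp_lp 2111
aesni_intel 12131
cryptd 7111
aes_x86_64 7758'

# Sum the size column and count the rows
printf '%s\n' "$modules" | awk '{ sum += $2 } END { print sum " bytes across " NR " modules" }'
# → 30265 bytes across 5 modules
```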
Slabinfo
Much of the detail about the slab caches is accessible in this proc structure, /proc/slabinfo:
$ head -5 /proc/slabinfo
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
nf_conntrack_ffff8801f2b30000 0 0 320 25 2 : tunables 0 0 0 : slabdata 0 0 0
fuse_request 100 125 632 25 4 : tunables 0 0 0 : slabdata 5 5 0
fuse_inode 21 21 768 21 4 : tunables 0 0 0 : slabdata 1 1 0
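A rough per-cache memory estimate is num_objs × objsize. Here is a sketch over the sample rows above (on a live system, skip the two header lines with tail -n +3 /proc/slabinfo before piping into awk):

```shell
# Hypothetical slabinfo rows from above: name, active_objs, num_objs, objsize, ...
slabs='nf_conntrack_ffff8801f2b30000 0 0 320 25 2
fuse_request 100 125 632 25 4
fuse_inode 21 21 768 21 4'

# Estimated bytes per cache = num_objs * objsize, largest first
printf '%s\n' "$slabs" | awk '{ printf "%s %d\n", $1, $3 * $4 }' | sort -k2 -rn
```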
Dmesg
When your system boots, a line is printed reporting the Linux kernel's memory usage just after it's loaded:
$ dmesg | grep Memory:
[ 0.000000] Memory: 7970012k/9371648k available (4557k kernel code, 1192276k absent, 209360k reserved, 7251k data, 948k init)
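The static kernel footprint (code + data + init) can be pulled out of that line; a sketch against the sample line above (on a live system, pipe dmesg | grep 'Memory:' into the same filter):

```shell
# The dmesg Memory: line quoted above, used as sample input
line='[    0.000000] Memory: 7970012k/9371648k available (4557k kernel code, 1192276k absent, 209360k reserved, 7251k data, 948k init)'

# Pull out only the code/data/init figures and sum them
printf '%s\n' "$line" |
    grep -oE '[0-9]+k (kernel code|data|init)' |
    awk '{ sum += $1 } END { print sum "k of static kernel code/data/init" }'
# → 12756k of static kernel code/data/init
```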
I believe you would want to use something like cgroups to limit resource usage for an individual process. So you might want to do something like this:
cgcreate -g memory,cpu:chromegroup
cgset -r memory.limit_in_bytes=2G chromegroup
to create chromegroup and restrict the group's memory usage to 2 GiB (the limit takes a raw byte count, so a bare 2048 would mean 2048 bytes, far too small for any real process; K/M/G suffixes are accepted)
cgclassify -g memory,cpu:chromegroup $(pidof chrome)
to move the current chrome processes into the group and restrict their memory usage to the set limit
or just launch chrome within the group like
cgexec -g memory,cpu:chromegroup chrome
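If you prefer to write a raw byte count into memory.limit_in_bytes, numfmt from coreutils does the conversion (a sketch, assuming a 2 GiB cap):

```shell
# Convert a human-readable IEC size into the byte count cgroup v1 expects
limit=$(numfmt --from=iec 2G)
echo "$limit"
# → 2147483648
```

You could then pass the result along as, e.g., cgset -r memory.limit_in_bytes=$limit chromegroup.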
However, it's pretty insane that Chrome is using that much memory in the first place. Try purging and reinstalling (or recompiling) first to see whether that fixes the issue; it really should not be using that much memory to begin with, and this solution is only a band-aid over the real problem.
Best Answer
The short story:
If your mobo POSTs, your system boots, and free/top show your RAM as 16 GB, then it works. Even mobo makers can under-report the capacity of system boards, so the real test is this: if the RAM is installed correctly, matched correctly, and the system boots and runs with stability (i.e., doesn't crash), then it works. You can also test by trying to use all of your memory for something or other and seeing whether the system remains stable. Because you got very good RAM (Crucial), it's quite possible that lower-grade RAM would not have worked at 16 GB; that may be why they don't say it supports 16 GB and instead opt for the more conservative 8 GB.
Your tools like free and top, which report the real memory of the system, are not lying; that is the usable memory the kernel has access to. Tools that read DMI data do lie, because the DMI data lies randomly, depending on the company that filled it out.
No, it is telling you the truth.
It says 8 GB total. You can see it clearly when looking at a sample type 16 (mine, in this case). The capacity refers to the capacity of the array. This is a single memory array. This array has an alleged (though false, in your case) capacity of 8 GB (correct in mine), and in my case it has 4 devices; in your case it has 2. Note that you cannot deduce the overall capacity from the max stick you can use in one slot, unfortunately. That is, you could have 4 slots with an 8 GB capacity but a 4 GB per-slot max, which would mean you could use either 4x2 GB sticks or 2x4 GB, but not 4x4 GB.
No, free is telling you the truth. top will tell you the same truth (though the question of what the kernel considers free or not free is highly arcane, and varies with the implementations of these tools, but that veers far off topic of this question). This is the kernel reporting to userland what ram it has access to, and what is used.
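free and top both get their numbers from /proc/meminfo, so you can cross-check them against the kernel's own figure directly (a sketch; the total column of free -k should match MemTotal exactly, since both come from the same file):

```shell
# MemTotal in /proc/meminfo is the same number free/top report as total RAM
awk '/^MemTotal:/ { printf "%.1f GB total\n", $2 / 1048576 }' /proc/meminfo
```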
It depends on your system. And on how dmidecode is interpreting the data. I'm rusty on this part of the question.
The long story:
Since I had to deal pretty heavily with ram reporting issues, I had to discover the variance in quality of the dmidecode ram data reports. Note that this is NOT the fault of dmidecode, since its job is to report the dmi data, not to interpret it or correct it.
First: dmidecode essentially reports two sets of data: (1) data that someone filled out, i.e., a low-paid drone at the motherboard vendor has a form to fill out and either doesn't bother doing it right, or does it right for one model and then just copies that data over to another; (2) real data, like whether a RAM slot has RAM in it, its size, type, speed, etc.
So in the case of system board ram capacity, dmidecode is NOT telling you the capacities based on any actual technical specifications available to dmidecode when it runs. What it's doing is repeating the data that the aforementioned underpaid person was told to fill out to check some box prior to shipping the hardware.
Some mobo vendors supply this data perfectly, and you can fully trust their statements. Others offer completely nonsensical statements, which leads dmidecode to correctly report 4x2 GB of RAM installed, but a capacity of 4 GB.
For example, dmidecode will, I believe, almost always (if not always) report your installed RAM exactly and accurately, but the DMI data will then often contain wrong data about capacity.
When I had to deal with this issue, I always used the per stick reporting as authoritative, and I always let it override dmidecode data about actual capacity, because the latter is not real data.
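That per-stick override can be sketched in shell: sum the per-device Size lines from dmidecode -t 17 and treat that as the real installed total, regardless of what the array's Maximum Capacity field claims (the excerpt below is hypothetical; real output requires sudo dmidecode -t 17):

```shell
# Hypothetical excerpt of `dmidecode -t 17` Size: lines (4 x 2048 MB sticks)
sticks='Size: 2048 MB
Size: 2048 MB
Size: 2048 MB
Size: 2048 MB'

# Sum the per-stick sizes; this figure is real data, unlike the array capacity field
printf '%s\n' "$sticks" | awk '/Size: [0-9]+ MB/ { sum += $2 } END { print sum " MB installed" }'
# → 8192 MB installed
```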
Basically it depends on the motherboard vendor: did they correctly complete the data fields that DMI types 5 and 16 use? I'll give you an example that clearly shows the fields they didn't feel like filling out.
You see this all through the DMI data and inside /sys: data the vendors didn't fill out, half filled out, or filled out wrong. The items after speed were not filled out right. My personal favorite is this, which is far more common internally than you'd think:
You'd think that in this day and age there would be something that actually tells systems exactly what it is, but that's sadly not the case.
I could show you hundreds of instances of machine dmidecode data that demonstrate this issue, but really you only have to see one or two. I tend to think that the better mobo makers tend to fill out their dmi data sets better, and the lower end ones tend to not do that, but there's no hard and fast rule about it.
As a basic rule, this is the information you can trust from dmidecode and ram:
From Gilles, in comments:
The key is to realize that the max capacity dmidecode reports for the memory array is not calculated; it's just some data someone entered when they created the DMI table for the mobo. I generally trust the vendor's mobo documentation over the DMI data, but as this poster discovered, even that's not reliable.