Seeing another of your posts, I guess you are using zram, so that will be my assumption here.
I ran an experiment: I installed zram, consumed a lot of memory, and got the same `smem` output as you. `smem` does not take zram into account in its bookkeeping; it only uses `/proc/meminfo` to compute its values, and if you read the code you will see that zram's RAM occupation ends up counted in the noncache column of the "kernel dynamic memory" line.
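You can see this blind spot for yourself by comparing what zram actually holds against what `/proc/meminfo` exposes. A sketch, assuming a `zram0` device on a reasonably modern kernel (where the third field of `mm_stat` is `mem_used_total` in bytes; older kernels expose a separate `mem_used_total` file instead):

```shell
# zram's resident memory is visible under /sys, not /proc/meminfo.
# Assumption: a zram0 device exists and the kernel provides mm_stat.
if [ -r /sys/block/zram0/mm_stat ]; then
    awk '{printf "zram0 holds %d KiB of RAM\n", $3 / 1024}' /sys/block/zram0/mm_stat
else
    echo "no zram0 device on this machine"
fi

# None of the entries smem reads mention that allocation:
grep -E '^(MemTotal|MemFree|Buffers|Cached|SwapTotal):' /proc/meminfo
```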
Further investigations
Following my gut feeling that zram was behind this behaviour, I set up a VM with similar specs to your machine: 4 GB RAM and a 2 GB zram swap, no swap file.
I have loaded the VM with heavy weight applications and got the following state:
```
huygens@ubuntu:~$ smem -wt -K ~/vmlinuz-3.2.0-38-generic.unpacked -R 4096M
Area                           Used      Cache   Noncache
firmware/hardware            130717          0     130717
kernel image                  13951          0      13951
kernel dynamic memory       1063520     922172     141348
userspace memory            2534684     257136    2277548
free memory                  451432     451432          0
----------------------------------------------------------
                            4194304    1630740    2563564
huygens@ubuntu:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          3954       3528        426          0         79        858
-/+ buffers/cache:       2589       1365
Swap:         1977          0       1977
```
As you can see, `free` reports 858 MB of cached memory, and that is also roughly what `smem` reports in the Cache column of the kernel dynamic memory line.
Then I further stressed the system using Chromium. At first only 83 MB of swap was in use, but after a few more tabs were opened, swap usage jumped to almost its maximum and I hit the OOM killer! zram has a genuinely dangerous side: wrongly configured (too large a size), it can quickly snap back at you like a trebuchet.
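A simple guard against that trebuchet effect is to size zram more conservatively. A sketch below; the 25% ratio is my own assumption, not a recommendation from zram or smem upstream:

```shell
# Compute a conservative zram disksize: ~25% of RAM instead of 50%.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
zram_kb=$((ram_kb / 4))
echo "would set zram disksize to ${zram_kb} KiB"

# On a real system (as root, with the device not in use) the actual
# steps would be roughly:
#   echo 1 > /sys/block/zram0/reset
#   echo $((zram_kb * 1024)) > /sys/block/zram0/disksize
#   mkswap /dev/zram0 && swapon -p 5 /dev/zram0
```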
At that time I had the following outputs:
```
huygens@ubuntu:~$ smem -wt -K ~/vmlinuz-3.2.0-38-generic.unpacked -R 4096M
Area                           Used      Cache   Noncache
firmware/hardware            130717          0     130717
kernel image                  13951          0      13951
kernel dynamic memory       1355344     124072    1231272
userspace memory             961004      36456     924548
free memory                 1733288    1733288          0
----------------------------------------------------------
                            4194304    1893816    2300488
huygens@ubuntu:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          3954       2256       1698          0          4        132
-/+ buffers/cache:       2118       1835
Swap:         1977       1750        227
```
See how the kernel dynamic memory columns (Cache and Noncache) look inverted? In the first case, the kernel held "cached" memory, as reported by `free`; in the second, it held swap memory inside zram, which `smem` does not know how to account for. Check the smem source code: zram occupation is not reported in `/proc/meminfo`, so smem cannot subtract it. smem simply computes "total kernel memory" minus "the meminfo entries it knows are cache". What it does not know is that the total kernel memory it computed includes the size of the swap, which lives in RAM!
While in this state, I activated a hard-disk swap, turned off the zram swap, and reset the zram device with `echo 1 > /sys/block/zram0/reset`.
After that, the noncache kernel memory melted like snow in summer and returned to a "normal" value.
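The recovery sequence above can be sketched as follows, in dry-run form (on a real system you would run the commands directly, as root; `/swapfile` is a hypothetical pre-made swap file, not something from my setup):

```shell
#!/bin/sh
# Dry-run sketch of the swap migration: print each step instead of
# executing it. Replace "run" with direct execution on a real system.
run() { echo "would run: $*"; }

run swapon /swapfile                    # 1. disk swap online first, so pages have somewhere to go
run swapoff /dev/zram0                  # 2. drain the zram-backed swap
run 'echo 1 > /sys/block/zram0/reset'   # 3. release the RAM zram still holds
```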
Conclusion
`smem` does not know about zram (yet), perhaps because zram is still in staging and therefore not represented in `/proc/meminfo`, which reports global figures (such as (in)active page sizes and total memory) plus a handful of specific entries. `smem` identifies some of those specific entries as "cache", sums them up, and compares the total against total memory. Because of that, zram-used memory ends up counted in the noncache column.
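The arithmetic smem is effectively doing can be reproduced with a few lines of awk. This is a rough reconstruction of the logic as I read it, not smem's actual code:

```shell
# Everything in MemTotal that is neither free nor a recognised cache
# entry gets lumped together -- and zram pages hide in that remainder.
awk '
/^MemTotal:/ { total  = $2 }
/^MemFree:/  { free   = $2 }
/^Buffers:/  { buf    = $2 }
/^Cached:/   { cached = $2 }
END {
    printf "known cache (Buffers + Cached): %d KiB\n", buf + cached
    printf "everything else: %d KiB (zram pages hide in here)\n", total - free - buf - cached
}' /proc/meminfo
```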
Note: by the way, in modern kernels `/proc/meminfo` also reports the shared memory consumed (`Shmem`). `smem` does not yet take that into account either, so even without zram, smem's output should be interpreted carefully, especially if you run applications that make heavy use of shared memory.
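On such a modern kernel you can read the entry directly alongside the classic cache counters:

```shell
# Shmem is reported next to the counters smem already treats as cache:
grep -E '^(Shmem|Cached|Buffers):' /proc/meminfo
```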
I'm not sure everything you need is exposed in `/proc/meminfo`'s output for you to calculate `MemTotal` yourself. From the Linux kernel's documentation file `proc.txt`:
excerpt
```
MemTotal: Total usable ram (i.e. physical ram minus a few reserved
          bits and the kernel binary code)
```
dmesg
If you look through either the output of `dmesg` or the log file `/var/log/dmesg`, you can find the following information:
```
$ grep -E "total|Memory:.*available" /var/log/dmesg
[    0.000000] total RAM covered: 8064M
[    0.000000] On node 0 totalpages: 2044843
[    0.000000] Memory: 7970012k/9371648k available (4557k kernel code, 1192276k absent, 209360k reserved, 7251k data, 948k init)
```
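The sample line is internally consistent, which shows how the pieces relate: "available" is exactly the total page range minus the absent and reserved regions (all values in KiB):

```shell
# 9371648k total range - 1192276k absent - 209360k reserved = available
echo $((9371648 - 1192276 - 209360))   # prints 7970012, matching the dmesg line
```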
I believe this information can be used to determine `MemTotal`. The blog post titled "Understanding “vmalloc region overlap”" covers it in more detail, and the post "Anatomy of a Program in Memory" provides some additional background.
`HardwareCorrupted` shows the amount of memory in "poisoned" pages, i.e. memory which has failed (as flagged, typically, by ECC). ECC stands for "Error-Correcting Code". ECC memory is capable of correcting small errors and detecting larger ones; on typical PCs with non-ECC memory, memory errors go undetected. If an uncorrectable error is detected using ECC (in memory or cache, depending on the system's hardware support), the Linux kernel marks the corresponding page as poisoned.
`DirectMap` is shown on x86, Book3S PowerPC, and S/390, and gives an indication of TLB load, not memory use: it counts the number of pages mapped using the various page sizes supported on each platform (corresponding to different page-table levels): 4 KiB, 64 KiB, 1 MiB, 2 MiB, 4 MiB, 1 GiB, or 2 GiB pages. The TLB, or "Translation Lookaside Buffer", is a cache used to store mappings between virtual addresses (as seen by software running on your computer) and physical pages in memory (as seen by the hardware); the calculations and memory fetches involved in going from virtual to physical addresses are expensive, so caches are used to avoid doing them too often. But the TLB is small, so accessing a wide variety of addresses (too many to stay in the cache) incurs a performance penalty. This penalty can be reduced by using larger pages: on the x86 architecture the traditional page size is 4 KiB, but larger pages can be used when possible, with sizes of 2 MiB, 4 MiB or 1 GiB.
For more detail you can look up the Wikipedia links I've included, and follow the references from there.
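Both counters can be read straight out of `/proc/meminfo`; the grep below is a sketch (the `DirectMap` rows only appear on the architectures listed above):

```shell
# HardwareCorrupted should normally be 0 kB; DirectMap4k/2M/1G show how
# the kernel's direct mapping of RAM is split across page sizes -- more
# in the larger rows means fewer TLB entries needed to cover memory.
grep -E '^(HardwareCorrupted|DirectMap)' /proc/meminfo \
    || echo "counters not exposed on this kernel/architecture"
```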