First off, a 32 bit system has 2^32
(4'294'967'296
) linear addresses, running from 0x00000000 to 0xffffffff, with which to access physical locations in RAM.
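That count is easy to verify with shell arithmetic:

$ echo $(( 0xffffffff + 1 ))   # highest address + 1 = number of addresses
4294967296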
The kernel divides these addresses into user and kernel space.
User space, the lower part of the range, can be accessed by user processes and, if necessary, also by the kernel.
The address range in hex and dec notation:
0x00000000 - 0xbfffffff
0 - 3'221'225'471
Kernel space, the upper part of the range, can only be accessed by the kernel.
The address range in hex and dec notation:
0xc0000000 - 0xffffffff
3'221'225'472 - 4'294'967'295
Like this:
0x00000000               0xc0000000 0xffffffff
|                        |          |
+------------------------+----------+
|          User          |  Kernel  |
|          space         |  space   |
+------------------------+----------+
Thus, the memory layout you saw in dmesg
corresponds to the mapping of linear addresses in kernel space.
First, the .text, .data and .init sections, which hold the kernel code, its data segments and the initialization of the kernel's own page tables (page tables translate linear addresses into physical addresses).
.text : 0xc0400000 - 0xc071ae6a (3179 kB)
The range where the kernel code resides.
.data : 0xc071ae6a - 0xc08feb78 (1935 kB)
The range where the kernel data segments reside.
.init : 0xc0906000 - 0xc0973000 ( 436 kB)
The range where the kernel's initial page tables reside.
(and another 128 kB for some dynamic data structures.)
This minimal address space is just large enough to install the kernel in the RAM and to initialize its core data structures.
The size of each range is shown in parentheses. Take the kernel code, for example:
0xc071ae6a - 0xc0400000 = 0x31ae6a
In decimal notation, that's 3'255'914 bytes
(3179 kB).
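The same calculation can be reproduced in a shell, which handles the hex-to-decimal conversion for you:

$ printf '%d\n' $(( 0xc071ae6a - 0xc0400000 ))
3255914
$ echo $(( (0xc071ae6a - 0xc0400000) / 1024 ))   # in kB
3179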
Second, the usage of kernel space after initialization:
lowmem : 0xc0000000 - 0xf77fe000 ( 887 MB)
The lowmem range can be used by the kernel to directly access physical addresses.
This is not the full 1 GB, because the kernel always reserves at least 128 MB of linear addresses for noncontiguous memory allocation and fix-mapped linear addresses.
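Both figures can be checked against the dmesg addresses above:

$ echo $(( (0xf77fe000 - 0xc0000000) >> 20 ))      # lowmem size in MB
887
$ echo $(( (0xffffffff - 0xf77fe000 + 1) >> 20 ))  # rest of the 1 GB, in MB
136

Those remaining 136 MB are what the vmalloc, pkmap and fixmap ranges below are carved out of, which satisfies the 128 MB minimum.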
vmalloc : 0xf7ffe000 - 0xff7fe000 ( 120 MB)
Virtual memory allocation can allocate page frames through a noncontiguous scheme. The main advantage of this scheme is that it avoids external fragmentation; it is used for swap areas, kernel modules, or the allocation of buffers for some I/O devices.
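On a running kernel you can inspect this window through /proc/meminfo; on the machine above, VmallocTotal should match the 120 MB computed from the range:

$ grep -i vmalloc /proc/meminfo   # VmallocTotal, VmallocUsed, VmallocChunk (kB)
$ echo $(( (0xff7fe000 - 0xf7ffe000) >> 20 ))   # window size in MB
120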
pkmap : 0xff800000 - 0xffa00000 (2048 kB)
The permanent kernel mapping allows the kernel to establish long-lasting mappings of high-memory page frames into the kernel address space. When a HIGHMEM page is mapped using kmap(), virtual addresses are assigned from here.
fixmap : 0xffc57000 - 0xfffff000 (3744 kB)
These are fix-mapped linear addresses which can refer to any physical address in RAM, not just the first ~896 MB that the lowmem addresses map. Fix-mapped linear addresses are a bit more efficient than their lowmem and pkmap colleagues.
There are dedicated page table descriptors assigned for fixed mapping, and mappings of HIGHMEM pages using kmap_atomic are allocated from here.
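The sizes of these two windows check out as well; dividing the pkmap window by the 4 kB page size also gives the number of slots available to kmap():

$ echo $(( (0xffa00000 - 0xff800000) / 4096 ))   # pkmap slots of 4 kB each
512
$ echo $(( (0xfffff000 - 0xffc57000) / 1024 ))   # fixmap size in kB
3744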
If you want to dive deeper into the rabbit hole:
Understanding the Linux Kernel
The "memory used by a process" is not a clear cut concept in modern operating systems. What can be measured is the size of the address space of the process (SIZE) and resident set size (RSS, how many of the pages in the address space are currently in memory). Part of RSS is shared (most processes in memory share one copy of glibc, and so for assorted other shared libraries; several processes running the same executable share it, processes forked share read-only data and possibly a chunk of not-yet-modified read-write data with the parent). On the other hand, memory used for the process by the kernel isn't accounted for, like page tables, kernel buffers, and kernel stack. In the overall picture you have to account for the memory reserved for the graphics card, the kernel's use, and assorted "holes" reserved for DOS and other prehistoric systems (that isn't much, anyway).
The only way of getting an overall picture is what the kernel reports as such. Adding up numbers with unknown overlaps and unknown left outs is a nice exercise in arithmetic, nothing more.
Best Answer
Seeing another of your posts, I guess you are using zram, so that will be my assumption here.
I ran an experiment: I installed zram, consumed a lot of memory, and got the same output from smem as you. smem does not take zram into account in its counting; it only uses /proc/meminfo to compute its values, and if you look at and try to understand the code, you will see that the zram RAM occupation ends up counted under the noncache column of the kernel dynamic memory line.

Further investigations
Following my gut feeling that zram was behind this behaviour, I set up a VM with similar specs to your machine: 4 GB RAM and 2 GB zram swap, no swap file.
I loaded the VM with heavyweight applications and got the following state:
As you can see, free reports 858 MB of cache memory, and that is also what smem seems to report within the cached kernel dynamic memory.

Then I further stressed the system using Chromium Browser. At the beginning, only 83 MB of swap were used. But after a few more tabs were opened, swap usage quickly climbed to almost its maximum and I experienced OOM! zram really has a dangerous side: wrongly configured (sizes too big), it can quickly hit you back like a trebuchet.

At that time I had the following outputs:
See how the kernel dynamic memory columns (cache and noncache) look inverted? That is because in the first case the kernel had "cached" memory, as reported by free, while in the second it held swap memory in zram, which smem does not know how to account for. (Check the smem source code: zram occupation is not reported in /proc/meminfo, so it is not computed by smem, which simply does "total kernel memory" minus "the types of memory reported by meminfo that it knows are cache". What it does not know is that the total kernel memory it computed includes the size of the swap, which lives in RAM!)

When I was in this state, I activated a hard-disk swap, turned off the zram swap and reset the zram devices:
echo 1 > /sys/block/zram0/reset
After that, the noncache kernel memory melted like snow in summer and returned to a "normal" value.
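For reference, the whole migration looks roughly like this (the disk swap device name is a placeholder; use your own):

$ swapon /dev/sdaX                  # hypothetical disk swap partition: enable it first
$ swapoff /dev/zram0                # migrate swapped pages out of zram
$ echo 1 > /sys/block/zram0/reset   # free the memory the device still holds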
Conclusion
smem does not know about zram (yet), maybe because zram is still in staging and thus not part of /proc/meminfo, which reports global parameters (like (in)active page sizes and total memory) and only a few specific ones. smem identifies a few of these specific parameters as "cache", sums them up and compares the sum to total memory. Because of that, the memory used by zram gets counted in the noncache column.
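A rough sketch of the accounting gap, using real /proc/meminfo fields and the zram sysfs interface (the formula paraphrases smem's approach; a device at /sys/block/zram0 is assumed):

$ grep -E '^(MemTotal|MemFree|Buffers|Cached|SwapCached)' /proc/meminfo
$ cat /sys/block/zram0/mm_stat   # 3rd field is mem_used_total, in bytes
# noncache ≈ (total kernel memory) - (the fields smem recognises as cache);
# zram's mem_used_total appears in none of those fields, so it ends up
# in the noncache column.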
Note: by the way, in modern kernels, meminfo also reports the shared memory consumed. smem does not take that into account yet, so even without zram the output of smem should be considered carefully, especially if you use applications that make heavy use of shared memory.