The Mem: total figure is the total amount of RAM that can be used by applications. This is the total RAM installed on the system, minus:
- memory reserved by hardware devices (often video memory if the graphics card doesn't have its own RAM);
- memory used by the kernel itself.
That total includes:
free
: memory that is not currently used for any purpose;
shared
: a concept that, in its original sense, no longer exists. The column is kept in the output for backward compatibility (there are scripts that parse the output of free). On current systems you'll typically see nonzero values, because shared has been repurposed to show memory that is explicitly shared via a shared-memory mechanism. On older systems, it included files mapped by more than one process and shareable memory that remained shared after fork().
buffers
: memory that is backed by files, and that can be written out to disk if needed;
cache
: memory that is backed by files, and that can be reclaimed at any time (the difference with buffers is that buffers must be saved to disk before they're reused, whereas cache consists of things that can be reloaded from disk);
used - buffers/cache
: memory used by applications (and not paged out to swap).
In a pinch, the system could run without buffers and cache, reserving RAM for applications and systematically performing disk reads and writes without any caching. The -/+ buffers/cache figures indicate the amount of RAM used directly by applications (the used column) and the amount of RAM not used by applications (the free column).
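As a rough illustration (a sketch assuming a Linux /proc/meminfo; exact field names can vary slightly across kernel versions), the free column of the -/+ buffers/cache line can be approximated by summing MemFree, Buffers and Cached:

```shell
# Approximate the "free" column of the -/+ buffers/cache line by summing
# MemFree, Buffers and Cached from /proc/meminfo (values are in kB).
awk '/^(MemFree|Buffers|Cached):/ { sum += $2 }
     END { print sum " kB not used by applications" }' /proc/meminfo
```

This is the same figure free computes for you; reading it from /proc/meminfo just makes explicit where the number comes from.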
Although this can vary a lot, a healthy system typically has around half its RAM devoted to applications and half devoted to buffers and cache. Unless you're running a dedicated file server, your system then has more RAM than it needs for what you're currently doing. If the free figure on the -/+ buffers/cache line was low, that would indicate a system that doesn't have enough RAM (contrary to a widespread belief, having a lot of memory devoted to buffers and cache is important for system performance, and trying to reserve more memory for applications would make 99.99% of systems slower).
The swap line is straightforward: it shows the amount of swap that's in use (either by applications or for tmpfs storage), and the amount that isn't.
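The per-device detail behind that line can be read directly from /proc/swaps (on a system with no swap configured, only the header line is printed):

```shell
# Each configured swap area (partition, file or zram device) with its
# size and current usage in kB, plus its priority.
cat /proc/swaps
```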
The problem
You have 4 GB of RAM (physical memory) and two zram devices of maximum 2,025,976 kB each (roughly 2 GB each). zram uses the available memory; I don't know the internals exactly, but whatever the mechanism, I can clearly imagine a scenario where Linux pages out (i.e. moves some memory from RAM to zram) to get some more free space, but then the zram usage in memory grows, so it pages out further, which results in a further increase in zram usage, and so on until zram is consuming all your physical memory.
I guess there is a threshold on any system under which paging out won't stress the kernel to the point I describe above, so that zram improves performance.
Insights
When your system wants to swap out 100 MB, what happens is that it puts this 100 MB in zram. Let's say it gets compressed by 50%, to 50 MB. That means your system wanted to free 100 MB but only 50 MB got freed. Now Linux is clever: when it has paged out chunks of memory (put them in the swap) but needs them again, it can do an "optimisation": it can page this memory back in but keep it in the swap as well, so that if it quickly needed to page these parts of memory out again, it could avoid an expensive write to the swap device. So in your case, it could be that Linux keeps the 100 MB in zram and puts it back in normal RAM, so the system consumes 150 MB for a while. If this is repeated for bigger programs with less compressible data, it can quickly become a nightmare: imagine a 300 MB chunk of RAM that gets paged out and uses 120 MB in each zram swap. Linux wanted to free 300 MB of RAM for other purposes, but has only freed 300 - 120 - 120 = 60 MB; it might then try to page out further pages, and so on, with the problem that you have two zram devices that can each use up to 2 GB of RAM, thus eating all your memory.
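The worst-case arithmetic above can be sketched directly (the numbers come from the example; the 120 MB-per-device compressed size is just the illustrative assumption):

```shell
# Paging out 300 MB that compresses to 120 MB in each of two zram devices
# only frees 300 - 2*120 = 60 MB of physical RAM.
wanted=300          # MB the kernel wanted to free
compressed=120      # MB the compressed copy occupies in each zram device
devices=2
freed=$(( wanted - compressed * devices ))
echo "actually freed: ${freed} MB"
```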
Conclusion and solution
So is zram crap? No, not at all. The problem is that you configured zram with a total size equal to your entire physical RAM. IMHO you should not configure zram to use more than 25% of your physical RAM, which means you would still have to rely on a hard-disk swap solution once the zram swap fills up.
A simple solution would be to reduce both zram devices to 500 MB max each and add a swap file of roughly 2-3 GB, to allow the kernel to evict really unused pages from zram to this swap file. The swap file won't use RAM and will diminish the pressure on it.
Some information on how to set your zram disk size.
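A minimal sketch of that layout, assuming util-linux's zramctl and root privileges (device names, sizes and priorities are illustrative, and an already-active zram device must be swapoff'd and reset before it can be resized):

```shell
# Shrink the two zram devices to 500 MB each and give them a higher swap
# priority than a 2 GB disk-backed swap file, so zram is used first.
zramctl /dev/zram0 --size 500M
zramctl /dev/zram1 --size 500M
mkswap /dev/zram0 && swapon --priority 10 /dev/zram0
mkswap /dev/zram1 && swapon --priority 10 /dev/zram1

dd if=/dev/zero of=/swapfile bs=1M count=2048   # 2 GB swap file on disk
chmod 600 /swapfile
mkswap /swapfile && swapon --priority 5 /swapfile
```

With the higher priority on the zram devices, the kernel fills the fast compressed swap first and only spills the overflow to the slower swap file.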
Best Answer
lsmem
lists memory blocks and their state; these reflect physical memory and are counted in units of memory blocks, i.e. 128 MiB on your system. To do this, lsmem reads information made available by the kernel in /sys/devices/system/memory. On your system, the kernel tracks 64 memory blocks for a total of 8 GiB.
free
lists memory that’s usable by the system; “total” is the amount of physical memory, minus memory reserved by the system (mostly for the firmware’s purposes) and the kernel’s executable code. free reads this information from /proc/meminfo.
The difference in output is explained by this difference in what is measured. In all cases, free’s total memory will be smaller than lsmem’s total online memory.
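The free side of that comparison can be checked directly; a sketch assuming a Linux /proc/meminfo (on the system in the question, this would print a figure somewhat below the 8 GiB that lsmem reports):

```shell
# MemTotal is what free(1) reports as "total": physical RAM minus
# firmware reservations and the kernel's own executable code.
awk '/^MemTotal:/ { printf "%.2f GiB usable\n", $2 / 1024 / 1024 }' /proc/meminfo
```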