The major disadvantage of using zram is LRU inversion: older pages go into the higher-priority zram and quickly fill it, while newer pages are swapped in and out of the slower […] swap.
The zswap documentation says that zswap does not suffer from this:
> Zswap receives pages for compression through the Frontswap API and is able to evict pages from its own compressed pool on an LRU basis and write them back to the backing swap device in the case that the compressed pool is full.
Could I have all the benefits of zram and a completely compressed RAM by setting `max_pool_percent` to `100`?
> Zswap seeks to be simple in its policies. Sysfs attributes allow for one user controlled policy:
>
> * max_pool_percent - The maximum percentage of memory that the compressed pool can occupy.
No default `max_pool_percent` is specified here, but the Arch Wiki page says that it is `20`.
Apart from the performance implications of decompressing, is there any danger / downside in setting `max_pool_percent` to `100`?
Would it operate like using an improved swap-backed zram?
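For context, these knobs are plain sysfs files; below is a minimal sketch of inspecting them and estimating the resulting pool cap (paths per the kernel's zswap documentation; the 20% figure is just an example, not a measurement from this post):

```shell
# Inspect zswap's runtime parameters (present only on kernels built with zswap):
grep -H . /sys/module/zswap/parameters/* 2>/dev/null || true
# Raising the cap (as root) would be:
#   echo 100 > /sys/module/zswap/parameters/max_pool_percent
# max_pool_percent is a percentage of MemTotal; e.g. a 20% cap on this machine:
awk '/MemTotal/ { printf "pool cap: %.0f MiB\n", $2 * 20 / 100 / 1024 }' /proc/meminfo
```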
Best Answer
To answer your question, I first ran a series of experiments. The final answers are in bold at the end.
Experiments performed:
Setup before the experiment:
- default `swappiness` value (60)
- created a swap file (`dd`) but didn't `swapon` it yet
- ran `watch "killall -9 dnf"` to be more sure that dnf wouldn't try to auto-update during the experiment or something and throw the results off too far

State before the experiment:
The subsequent `swapon` operations, etc., leading to the different settings during the experiments resulted in variances within about 2% of these values.
Experiment operation consisted of:
State after the experiment:
1) swap file, zswap disabled
2) swap file, zswap enabled, max_pool_percent = 20
3) swap file, zswap enabled, max_pool_percent = 70
4) swap file, zswap enabled, max_pool_percent = 100
5) zram swap, zswap disabled
6) zram swap, zswap enabled, max_pool_percent = 20
7) no swap
Note that Firefox is not running in this experiment at the time of recording these stats.
8) swap file, zswap enabled, max_pool_percent = 1
9) swap file (300 M), zswap enabled, max_pool_percent = 100
Firefox was stuck and the system still read from disk furiously. The baseline for this experiment is different since a new swap file has been written:
Specifically, an extra 649384 sectors were written as a result of this change.
State after the experiment:
Subtracting the extra 649384 written sectors from 2022272 gives 1372888. This is less than 1433000 (see later), probably because Firefox did not load fully.
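As a quick check of the arithmetic above (assuming the usual 512-byte sectors for these counters):

```shell
# The subtraction from the text:
echo $(( 2022272 - 649384 ))    # prints 1372888
# Converted to MiB, assuming 512-byte sectors:
awk 'BEGIN { printf "%.0f MiB\n", (2022272 - 649384) * 512 / 1048576 }'    # prints 670 MiB
```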
I also ran a few experiments with low `swappiness` values (10 and 1), and they all got stuck in a frozen state with excessive disk reads, preventing me from recording the final memory stats.

Observations:
- Very high `max_pool_percent` values resulted in sluggishness.
- High `max_pool_percent` values result in the least amount of writes, whereas a very low `max_pool_percent` value results in the most.

Written sectors as a direct consequence of swapping (approx.):
Extra read sectors as a direct consequence of swapping (approx.):
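The sector counts above were presumably sampled from the block layer's per-device counters; here is a hypothetical sketch (field positions per the kernel's block-layer `stat` documentation; the device name and sample line are made up):

```shell
# In /sys/block/<dev>/stat, field 3 is sectors read and field 7 is sectors
# written; snapshot these before and after a run and subtract.
stat_line='1 2 300 4 5 6 700 8 9 10 11'    # stand-in for /sys/block/sda/stat
echo "$stat_line" | awk '{ print "sectors read:", $3, "sectors written:", $7 }'
```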
Interpretation of results:
It might be tempting to get an entirely compressed RAM via `zswap`, but it is evidently not suited for this task.

Personal opinions and anecdotes:
- `zswap` with the default values of `swappiness` and `max_pool_percent` always behaves better than any `swappiness` value and no `zswap`, or than `zswap` with high values of `max_pool_percent`.
- Lower `swappiness` values seem to make the system behave better until the amount of page cache left is so small as to render the system unusable due to excessive disk reads. Similar with too high `max_pool_percent`.
- Either use solely `zram` swap and limit the amount of anonymous pages you need to hold in memory, or use disk-backed swap with `zswap` and approximately default values for `swappiness` and `max_pool_percent`.

EDIT: Possible future work to answer the finer points of your question would be to find out, for your particular use case, how the `zsmalloc` allocator used in `zram` compares compression-wise with the `zbud` allocator used in `zswap`. I'm not going to do that, though, just pointing out things to search for in docs/on the internet.

EDIT 2:
`echo "zsmalloc" > /sys/module/zswap/parameters/zpool` switches zswap's allocator from `zbud` to `zsmalloc`. Continuing with my test fixture for the above experiments and comparing `zram` with `zswap` + `zsmalloc`, it seems that as long as the swap memory needed is the same as either a `zram` swap or as `zswap`'s `max_pool_percent`, the amount of reads and writes to disk is very similar between the two. In my personal opinion based on the facts: as long as the amount of `zram` swap I need is smaller than the amount of `zram` swap I can afford to actually keep in RAM, it is best to use solely `zram`; and once I need more swap than I can actually keep in memory, it is best to either change my workload to avoid that, or to disable `zram` swap and use `zswap` with `zsmalloc`, setting `max_pool_percent` to the equivalent of what `zram` previously took in memory (size of `zram` * compression ratio). I currently don't have the time for a proper write-up of these additional tests, though.
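That sizing rule can be illustrated with made-up numbers (all values invented for illustration, not measured here): a hypothetical 4096 MiB zram device compressing to about a third of its size occupies roughly 1352 MiB, which on an 8192 MiB machine corresponds to a `max_pool_percent` of about 16.5:

```shell
# Invented figures: disksize 4096 MiB, compression ratio 0.33, RAM 8192 MiB.
# Equivalent max_pool_percent = disksize * ratio / RAM * 100.
awk 'BEGIN { printf "%.1f\n", 4096 * 0.33 / 8192 * 100 }'    # prints 16.5
```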