I'm fairly sure that the kernel reserves some memory for itself, e.g. so that the OOM killer can still do its work.
(What use would an OOM killer be if it failed to run due to lack of memory?)
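You can take a look at the size of that reserve via sysctl (a quick check; vm.min_free_kbytes is the usual knob, though the exact reservation logic varies by kernel version):
# Amount of memory the kernel tries to keep free for critical allocations
sysctl vm.min_free_kbytes
# The same value through /proc
cat /proc/sys/vm/min_free_kbytes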
Just disable the OOM Killer
for the particular process with:
# -17 (OOM_DISABLE) exempts the process from the OOM killer entirely
for p in $(pidof kvm qemu-system-x86_64); do
    echo -17 > /proc/$p/oom_adj
done
or, with the newer interface, via oom_score_adj.
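The oom_score_adj variant might look like this (a sketch; -1000 is OOM_SCORE_ADJ_MIN, which disables OOM killing for the task, as the documentation quoted below explains):
for p in $(pidof kvm qemu-system-x86_64); do
    # -1000 (OOM_SCORE_ADJ_MIN) disables OOM killing for this task
    echo -1000 > /proc/$p/oom_score_adj
done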
However, note the score in your log line:
Out of memory: Kill process 25086 (kvm) score 192 or sacrifice child
In your case the badness score was 192.
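You can inspect the current badness score of a running process yourself (a quick check; pidof -s returns a single PID):
# Badness score the OOM killer currently assigns to the kvm process
cat /proc/$(pidof -s kvm)/oom_score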
See also Taming the OOM Killer
In any case, you should also investigate what is causing the memory exhaustion, since otherwise the OOM Killer will go on to kill other important processes.
Often the underlying phenomenon is memory overcommitment. In that case you can tune overcommit_memory
as described here.
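For instance, to make the kernel refuse allocations beyond a fixed limit instead of overcommitting (a sketch; the values are illustrative, not a recommendation):
# Mode 2 = no overcommit; allocations are capped at swap + 80% of RAM
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80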
Source: the kernel's proc filesystem documentation:
oom_adj:
For backwards compatibility with previous kernels, /proc/<pid>/oom_adj may also
be used to tune the badness score. Its acceptable values range from -16
(OOM_ADJUST_MIN) to +15 (OOM_ADJUST_MAX) and a special value of -17
(OOM_DISABLE) to disable oom killing entirely for that task. Its value is
scaled linearly with /proc/<pid>/oom_score_adj.
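That linear scaling is easy to observe (a sketch against the current shell's own /proc entry; lowering the value needs root):
# Writing the legacy oom_adj updates oom_score_adj proportionally
echo -8 > /proc/$$/oom_adj
cat /proc/$$/oom_score_adj    # about -470, i.e. -8 * 1000 / 17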
oom_score_adj:
The value of /proc/<pid>/oom_score_adj is added to the badness score before it
is used to determine which task to kill. Acceptable values range from -1000
(OOM_SCORE_ADJ_MIN) to +1000 (OOM_SCORE_ADJ_MAX). This allows userspace to
polarize the preference for oom killing either by always preferring a certain
task or completely disabling it. The lowest possible value, -1000, is
equivalent to disabling oom killing entirely for that task since it will always
report a badness score of 0.
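The same mechanism also works in the opposite direction: you can mark an expendable process as the preferred victim (a sketch; the process name my-batch-job is hypothetical):
for p in $(pidof my-batch-job); do
    # +1000 (OOM_SCORE_ADJ_MAX) makes this task the first candidate to be killed
    echo 1000 > /proc/$p/oom_score_adj
done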
Consider why the victim is chosen by badness score rather than by a naive rule. If the kernel simply killed the last process to request memory, then the task manager you opened to deal with a runaway process would itself get killed, since it would be the last one asking for memory.
Or suppose it killed the first process whose allocation failed: now your X server gets killed. It didn't cause the problem; it was just "in the wrong place at the wrong time". It happened to be the first process to allocate more memory when there was none left, but it wasn't the process that used up all the memory in the first place.
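To see which processes the OOM killer would favor right now, you can rank everything by its current badness score (a sketch; output details may vary across kernels):
# List PID, oom_score and command name, highest badness first
for d in /proc/[0-9]*; do
    printf '%s %s %s\n' "${d#/proc/}" "$(cat "$d/oom_score" 2>/dev/null)" "$(cat "$d/comm" 2>/dev/null)"
done | sort -k2 -rn | head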