When I start two CPU-eating processes with different nice levels, e.g.
Process 1:
nice -19 sh -c 'while true; do :; done'
Process 2:
sh -c 'while :; do true; done'
(I swapped : and true just to tell the processes apart in the output of ps or top), the nice level seems to be ignored and both use the same amount of CPU.
The output of top looks like this:
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
8187 <user> 39 19 21.9m 3.6m 45.8 0.0 0:20.62 R sh -c while true; do :; done
8188 <user> 20 0 21.9m 3.5m 45.6 0.0 0:20.23 R sh -c while :; do true; done
[...]
(Of course, the %CPU values vary slightly from sample to sample, but on average they seem to be equal.)
top shows that the two processes run with different nice values, yet they seem to get the same amount of CPU time.
Both commands were run by the same user from different terminals (both are login shells).
If they are run from the same terminal, they behave as expected: The nicer process makes way for the not-so-nice one.
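For reference, this is roughly how I reproduce and measure it in one go (a sketch; the busy loops are the same ones as above, and ps is used instead of top so the output is non-interactive):

```shell
# Start both busy loops in the background and remember their PIDs.
nice -n 19 sh -c 'while true; do :; done' &
p1=$!
sh -c 'while :; do true; done' &
p2=$!

sleep 2   # let them accumulate some CPU time

# NI should show 19 for the first process and 0 for the second,
# while pcpu shows the share of CPU each one actually got.
ps -o pid,ni,pcpu,args -p "$p1" -p "$p2"

kill "$p1" "$p2"
```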
What is the reason, and how can I make nice work globally for the whole machine?
It used to be different on this very machine: some time ago, nice values seemed to be honoured.
It is a single-processor, single-core machine.
For information:
- Kernel: version 4.4.5 (Arch Linux stock kernel); uname -r: 4.4.5-1-ARCH
- /proc/cpuinfo is:

  processor       : 0
  vendor_id       : GenuineIntel
  cpu family      : 6
  model           : 23
  model name      : Intel(R) Core(TM)2 Solo CPU U3500 @ 1.40GHz
  stepping        : 10
  microcode       : 0xa0c
  cpu MHz         : 1400.000
  cache size      : 3072 KB
  physical id     : 0
  siblings        : 1
  core id         : 0
  cpu cores       : 1
  apicid          : 0
  initial apicid  : 0
  fpu             : yes
  fpu_exception   : yes
  cpuid level     : 13
  wp              : yes
  flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dtherm tpr_shadow vnmi flexpriority
  bugs            :
  bogomips        : 2794.46
  clflush size    : 64
  cache_alignment : 64
  address sizes   : 36 bits physical, 48 bits virtual
  power management:
Best Answer
Ah, it's not the systemd-logind feature where each user gets its own cgroup. I think the change responsible here is older; the two are just confusingly similar. (I searched for "process group fair scheduling", thinking it might be based on Unix "process groups", which I never really understood.) Wikipedia:
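Assuming the feature meant here is the kernel's session-based autogrouping (CONFIG_SCHED_AUTOGROUP, merged in 2.6.38 — an assumption on my part, since the quote is truncated), here is a sketch of how to check for it and how one could turn it off so that nice works machine-wide again:

```shell
# Autogrouping is toggled via this sysctl (the file only exists when
# the kernel was built with CONFIG_SCHED_AUTOGROUP):
f=/proc/sys/kernel/sched_autogroup_enabled
if [ -r "$f" ]; then
    printf 'autogroup enabled: %s\n' "$(cat "$f")"
else
    echo 'autogroup not available on this kernel'
fi

# A process's autogroup (and that group's own nice value) can be read
# from /proc/<pid>/autogroup, e.g. for the current shell:
cat "/proc/$$/autogroup" 2>/dev/null || true

# To disable autogrouping system-wide one could run, as root:
#   echo 0 > /proc/sys/kernel/sched_autogroup_enabled
# or set kernel.sched_autogroup_enabled=0 via sysctl, or boot with
# "noautogroup" on the kernel command line.
```

With autogrouping enabled, the CPU is first split fairly between sessions, and nice values are only weighed against other processes in the same session — which would explain why the two loops compete as expected from one terminal but not from two.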