According to cpulimit --help:
-i, --include-children  limit also the children processes
I have not tested whether this applies to children of children, nor looked into how this is implemented.
Alternatively, you could use cgroups, which are a kernel feature. Cgroups don't natively provide a means to limit child processes as well, but you can use the cgroup rules engine daemon (cgred) provided by libcgroup; the cgexec and cgclassify commands that come with the libcgroup package provide a --sticky flag to make rules apply to child processes as well.
Be aware that there is a race condition involved which could (at least theoretically) result in some child processes not being restricted correctly. However, as you're currently using cpulimit, which runs in userspace anyway, you already don't have 100% reliable CPU limiting, so this race condition shouldn't be a deal-breaker for you.
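As an illustration (the user name "alice" and the cgroup name "limited" are assumptions, not anything from the question), a cgred rule in /etc/cgrules.conf might look like:

```
# <user>   <controllers>   <destination cgroup>
alice      cpu             limited/
```

With cgred running, alice's processes get classified into the cpu:limited cgroup; to launch a specific process tree there yourself, something like cgexec -g cpu:limited --sticky somecommand should work, with --sticky telling cgred not to reclassify the children.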
I wrote rather extensively about the cg rules engine daemon in my self-answer here:
The setrlimit(2) syscall is relevant for limiting resources: CPU time with RLIMIT_CPU (an integral number of seconds, so at least 1 second), file size with RLIMIT_FSIZE, address space with RLIMIT_AS, etc. You could also set up disk quotas. The wait4(2) syscall reports some resource usage of a terminated child, proc(5) tells you a lot more, and so does getrusage(2). You might code some monitor which periodically stops the entire process group with SIGSTOP, calls getrusage or queries /proc/$PID/, then sends SIGCONT (to continue) or SIGTERM (to terminate) to that process group.
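As a sketch of the setrlimit(2) idea (in Python, whose resource module wraps the syscall; the busy-loop command is just an illustration so the limit actually triggers):

```python
import resource
import signal
import subprocess

def limit_cpu():
    # Runs in the child just before exec: cap CPU time at 1 second
    # (soft limit; SIGXCPU is delivered when it is exceeded).
    resource.setrlimit(resource.RLIMIT_CPU, (1, 2))

# A deliberate busy loop, so RLIMIT_CPU is reached quickly.
proc = subprocess.run(["sh", "-c", "while :; do :; done"],
                      preexec_fn=limit_cpu)

# The child is terminated by SIGXCPU after about 1s of CPU time.
print(proc.returncode == -signal.SIGXCPU)
```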
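The SIGSTOP/SIGCONT monitor described above might look like this sketch (the RSS ceiling and the sleep command are arbitrary assumptions):

```python
import os
import signal
import subprocess
import time

# Run the command in its own session/process group so we can signal
# the whole group at once.
proc = subprocess.Popen(["sleep", "2"], start_new_session=True)
limit_kb = 500_000  # assumed RSS ceiling in kB; purely illustrative

while proc.poll() is None:
    try:
        os.killpg(proc.pid, signal.SIGSTOP)       # freeze the group
        with open(f"/proc/{proc.pid}/status") as f:
            fields = dict(line.split(":", 1) for line in f if ":" in line)
        rss_kb = int(fields.get("VmRSS", "0 kB").split()[0])
        if rss_kb > limit_kb:
            os.killpg(proc.pid, signal.SIGTERM)   # over budget: terminate
        else:
            os.killpg(proc.pid, signal.SIGCONT)   # under budget: resume
    except ProcessLookupError:
        break  # the group exited between checks
    time.sleep(0.5)

print("exit status:", proc.wait())
```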
The valgrind tool is very useful on Linux to help find memory leaks. And strace(1) should be helpful too.
If you can recompile the faulty software, you could consider passing -fsanitize=address, -fsanitize=undefined, and other -fsanitize=... options to a recent version of the GCC compiler.
Perhaps you have some batch processing. Look for batch monitors, or simply code your own thing in C, Python, OCaml, Perl, ... (which forks the command and loops monitoring it...). Maybe you want some process accounting (see acct(5) & sa(8)).
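A minimal sketch of the fork-and-monitor idea in Python (the command being run is an arbitrary placeholder): fork a child, exec the command, and collect its resource usage via os.wait4, which wraps wait4(2):

```python
import os

pid = os.fork()
if pid == 0:
    # Child: exec the command to be monitored (placeholder here).
    os.execvp("sh", ["sh", "-c", "true"])

# Parent: wait4(2) returns the exit status plus an rusage struct.
_, status, rusage = os.wait4(pid, 0)
print("exit code:", os.waitstatus_to_exitcode(status))
print(f"user CPU: {rusage.ru_utime:.3f}s, max RSS: {rusage.ru_maxrss} kB")
```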
Notice that "amount of memory used" (a running program generally allocates memory from the kernel with mmap and releases it with munmap) and "CPU time" (see time(7), and think of multi-threaded programs...) are very fuzzy concepts.
See also PAM and configure things under /etc/security/; perhaps inotify(7) might also be helpful (but probably not). Read also Advanced Linux Programming and syscalls(2).
Best Answer
While it can be an abuse for memory, it isn't for CPU: when a CPU is otherwise idle, a running process (by "running", I mean that the process isn't waiting for I/O or something else) will take 100% CPU time by default, and there's no reason to enforce a limit.
Now, you can set up priorities thanks to nice. If you want them to apply to all processes for a given user, you just need to make sure that his login shell is run with nice: the child processes will inherit the nice value. This depends on how the users log in. See Prioritise ssh logins (nice) for instance.

Alternatively, you can set up virtual machines. Indeed, setting a per-process limit doesn't make much sense since the user can start many processes, abusing the system. With a virtual machine, all the limits will be global to the virtual machine.
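A quick sketch of that inheritance (the inner nice invocation, with no arguments, just prints the niceness the child process inherited):

```python
import subprocess

# Launch a command through nice(1); anything it spawns inherits the
# raised niceness. The inner `nice` prints the current niceness,
# demonstrating the inheritance.
proc = subprocess.run(["nice", "-n", "10", "sh", "-c", "nice"],
                      capture_output=True, text=True)
print(proc.stdout.strip())
```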
Another solution is to set limits in /etc/security/limits.conf; see the limits.conf(5) man page. For instance, you can set the maximum CPU time per login and/or the maximum number of processes per login. You can also set maxlogins to 1 for each user.
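For example (a sketch; the user name "alice" and the values are assumptions), entries in /etc/security/limits.conf could look like:

```
# Limit CPU time to 10 minutes per process
alice  hard  cpu        10
# At most 50 processes
alice  hard  nproc      50
# At most one concurrent login
alice  -     maxlogins  1
```

The format is domain, type (soft/hard/-), item, value; note that the cpu item is expressed in minutes.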