Linux – Is setting ulimit -v sufficient to avoid a memory leak


We have a process that in recent weeks had a one-off memory leak that resulted in it consuming all of the memory on our RHEL 7 box.

We now wish to set limits around this so that it can never take more than a certain amount.

We are using the ulimit -v setting to set this amount (as the -m setting does not work).

Therefore, I'm wondering if this is sufficient, or do we need a way to limit physical memory as well? If so, what is the best way to go about this?

If virtual memory always grows along with physical memory, then perhaps -v by itself is sufficient.
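
For reference, this is roughly how we apply the limit today (a simplified wrapper script; the application path is a placeholder, and note that ulimit -v takes its value in kilobytes):

#!/bin/sh
# Cap the process's address space at 2 GiB before starting the application.
# The value passed to ulimit -v is in kilobytes.
ulimit -v $((2 * 1024 * 1024))
exec /path/to/our-app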

Best Answer

Some background on how ulimit works:

ulimit operates via the setrlimit and getrlimit system calls. This is easy to verify by strace-ing the bash process (ulimit is a bash builtin). I set a max memory size of 1024 KB:

$ ulimit -m 1024

In another console:

$ strace -p <my_bash_pid>
. . .
getrlimit(RLIMIT_RSS, {rlim_cur=1024*1024, rlim_max=1024*1024}) = 0
setrlimit(RLIMIT_RSS, {rlim_cur=1024*1024, rlim_max=1024*1024}) = 0
. . . 

The setrlimit man page says the following about RLIMIT_RSS:

RLIMIT_RSS Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit only has effect in Linux 2.4.x, x < 30, and there only affects calls to madvise(2) specifying MADV_WILLNEED.

The madvise syscall is merely advice to the kernel, and the kernel may ignore it. Even the bash man page says the following about ulimit:

-m The maximum resident set size (many systems do not honor this limit)

That is the reason why -m doesn't work.
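
You can see this in practice: with -m set far below an allocation, the allocation still succeeds. A rough demonstration (it assumes python is installed; the numbers are arbitrary):

$ ( ulimit -m 10240; python -c 'x = bytearray(100 * 1024 * 1024); print("allocated fine")' )
allocated fine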

About the -v option:

I set a virtual memory limit of 1024 KB:

$ ulimit -v 1024

In another console:

$ strace -p <my_bash_pid>
. . .
getrlimit(RLIMIT_AS, {rlim_cur=RLIM64_INFINITY, rlim_max=RLIM64_INFINITY}) = 0
setrlimit(RLIMIT_AS, {rlim_cur=1024*1024, rlim_max=1024*1024}) = 0
. . .

The setrlimit man page says the following about RLIMIT_AS:

RLIMIT_AS The maximum size of the process's virtual memory (address space) in bytes. This limit affects calls to brk(2), mmap(2) and mremap(2), which fail with the error ENOMEM upon exceeding this limit. Also automatic stack expansion will fail (and generate a SIGSEGV that kills the process if no alternate stack has been made available via sigaltstack(2)). Since the value is a long, on machines with a 32-bit long either this limit is at most 2 GiB, or this resource is unlimited.
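
Unlike RLIMIT_RSS, this limit is actually enforced. With a tiny address-space limit, even the dynamic loader's mmap calls fail, so the process cannot start at all (a rough demonstration; the exact error text may vary by glibc version):

$ ( ulimit -v 1024; ls )
ls: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory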

A program's virtual memory space is composed of three segments (code, data, and stack):

  • The code segment is constant (read-only) and contains the program instructions.

  • The data segment is controlled by the following (see the strace sketch after this list):

    The brk syscall adjusts the size of the data segment (part of the virtual memory) of the program.

    The mmap syscall maps a file or device into the process's virtual memory.

    Many programs allocate memory (directly or indirectly) by calling the standard C library function malloc, which allocates memory from the heap (part of the data segment). malloc adjusts the size of the data segment by calling the brk syscall.

  • The stack stores function-local variables (a variable takes memory from the stack when it is allocated).
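
As a rough illustration of those calls in action, you can trace just brk and mmap while running a trivial command (a sketch; output abridged, and the addresses will differ on your machine):

$ strace -e trace=brk,mmap ls /tmp > /dev/null
brk(NULL)                               = 0x1c8e000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f92cbbd6000
. . .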

So, that is why the -v option works for you.

If -v is sufficient for your task, then there is no reason to do anything else.


If you want control over a wider range of memory-related features for a process (memory pressure, swap usage, RSS limits, OOM behavior, and so on), I suggest using the memory capabilities of cgroups.
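
A minimal sketch of that approach on RHEL 7, which uses cgroup v1 (the group name myapp and the application path are placeholders):

$ sudo mkdir /sys/fs/cgroup/memory/myapp
$ echo $((512 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memory/myapp/memory.limit_in_bytes
$ echo $$ | sudo tee /sys/fs/cgroup/memory/myapp/cgroup.procs   # move this shell into the group
$ /path/to/our-app   # started from this shell, so it inherits the cgroup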

If your application is a service, I suggest using systemd's slice features, which are the most convenient way to control and limit the resources of a service (or a group of services) managed by systemd, and which are easier to configure than raw cgroups.
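
For example, on RHEL 7 you can cap a service's memory in one command (a sketch; myapp.service is a placeholder unit name, and MemoryLimit= is applied via the same cgroup memory controller shown above):

$ sudo systemctl set-property myapp.service MemoryLimit=512M

The same setting can also be placed directly in the unit file, as MemoryLimit=512M under [Service].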
