I am not sure if this answers your question, but I found this Perl script that claims to do exactly what you are looking for. The script implements its own system for enforcing the limits: it periodically wakes up and checks the resource usage of the process and its children. It seems to be well documented and explained, and has been updated recently.
As slm said in his comment, cgroups can also be used for this. You might have to install the utilities for managing cgroups; assuming you are on Linux, you should look for libcgroups.
sudo cgcreate -t $USER:$USER -a $USER:$USER -g memory:myGroup
Make sure $USER is your user.
Your user should then have access to the cgroup memory settings in /sys/fs/cgroup/memory/myGroup.
You can then set the limit to, let's say, 500 MB by doing this:
echo 500000000 > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
Now let's run Vim:
cgexec -g memory:myGroup vim
The vim process and all its children should now be limited to using 500 MB of RAM. However, I think this limit applies only to RAM and not to swap: once the processes reach the limit, they will start swapping. I am not sure if you can get around this; I cannot find a way to limit swap usage using cgroups.
Specifically for setrlimit
Here are some of the more useful options that you may wish to look into; I pulled them from the man pages.
RLIMIT_NOFILE
Specifies a value one greater than the maximum file descriptor number that can be opened by this process.
RLIMIT_NPROC
The maximum number of processes (or, more precisely on Linux, threads) that can be created for the real user ID of the calling process. Upon encountering this limit, fork(2) fails with the error EAGAIN.
RLIMIT_SIGPENDING
Specifies the limit on the number of signals that may be queued for the real user ID of the calling process. Both standard and real-time signals are counted for the purpose of checking this limit.
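As a concrete illustration of RLIMIT_NOFILE, here's a small Python sketch using the standard resource module (the value 64 is arbitrary, chosen just for the demo, and it assumes the hard limit is at least that high):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Lower only the soft limit; an unprivileged process can raise it
# again later, up to the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

files = []
try:
    # stdin/stdout/stderr already occupy fds 0-2, so this loop fails
    # well before all 128 opens succeed.
    for _ in range(128):
        files.append(open("/dev/null"))
except OSError as exc:
    print("hit the limit:", exc)   # EMFILE: "Too many open files"
finally:
    for f in files:
        f.close()
```

Note that only the soft limit was lowered; a child inheriting this limit can raise it back up to the hard limit on its own.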
There also seem to be other really cool limitations that can be set, so I'm thankful I ran across your question, as it has shown me yet another tool for keeping processes in check.
General Unix/Linux
I believe the general term for the application-limitation tool you are looking for is a Sandbox. For UNIX it looks like Contractor and Passenger are solid options, and for Linux I've seen Docker, KVM & Firejail used on systems as constrained as the Raspberry Pi B+ v2 or dual-core netbooks. For most of the sandboxing action you'll need a system and kernel capable of virtualization. On systems such as Android I've seen SELinux used on the latest CyanogenMod ROMs (a frustrating bit to get around if you want to use a chroot app), but I digress; on some systems where I've run Ubuntu, I've seen AppArmor popping errors when a newly installed program tries to phone home with a persistent connection. Suffice it to say there are lots of options for controlling what a specific program or set of programs may do, see, and/or communicate with, and how much of the CPU's & GPU's resources may be used.
The best of the bunch, if you can get it working (kinda iffy, as I'm still working with the dev to get ARMhf binaries working), for your usage scenario would be Firejail, as the guide hosted on the dev's home page covers a dual-gaming rig that could be modified to suit your needs. It has a low memory footprint in comparison to the others mentioned (from what I've seen, that is) and is highly configurable as to what files a process has access to and whether or not persistence is allowed. This would be good for testing, as you would have a set working environment that is repeatable, customizable, and ultimately deletable if needed.
For systems without full virtualization support, I've seen SELinux used to define stricter rules on top of the user/group permission settings that are already in place for read and write permissions. The term to search for there is Linux namespace permissions; it turns out there are lots of hidden ways to restrict actions. But the biggest hole for all these options is root: even in a well-constructed chroot jail, if there are ways to obtain root permissions within a jail or sandbox, then there are ways to escalate into the ID of the user running the jailed process.
Basically, there should be multiple layers for a process to have to break out of. For example, for a web server I'll be setting up a restrictive set of firewall rules; log readers that dynamically add rules and change firewall settings (fail2ban with custom actions and scripts); then a chroot jail that has only the dependencies required by the web server in its directory structure, bound to a port above 1024 so that it doesn't even request root-level permissions for socket binding; and wrapping those inside a virtualized sandbox (likely with Firejail) on a host running intrusion-detection measures such as tripwire and honeyd within their own respective jails. All so that if .php and similar code that should not be modified on the public server does receive a bad touch, it is ignored, backups are restored, and the offender is banned from future access.
In your example code it doesn't look like you're doing much with networking, but more than likely it will be called from another script or function, and because it is obviously calling up child processes you'll want to figure out how to sanitize input and catch errors at every step (look up the link that killed the Chrome browser for why), and ensure that unsanitized input is not read or interpreted by a privileged user (look up how to add Shellshock to Firefox's browser ID for why). And if there is networking involved with calling or returning output, then the ports the process is bound to should be unprivileged (use iptables/firewall forwarding if it's a web-app kind of thing). While there's a plethora of options to consider for locking a system's services down, there also seem to be many options for testing code's breakability; Metasploit and drone.io are two fairly well-known pentesting and code-testing options that you may wish to look into before someone does it for you.
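On the input-sanitization point, one habit that helps when a script calls up child processes is to pass arguments as a list so that no shell ever parses the untrusted string. A minimal Python sketch (printf here is just a stand-in for whatever command you'd actually run):

```python
import subprocess

# Shell metacharacters in untrusted input are treated as literal data,
# not commands, because no shell is involved in the exec.
untrusted = "file; echo pwned"
result = subprocess.run(
    ["printf", "%s", untrusted],   # argv list, not a shell string
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # the input comes back verbatim; nothing was executed
```

Compare that with `subprocess.run(cmd, shell=True)` on a string built by concatenation, where the same input would run `echo pwned` as a second command.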
Best Answer
The RLIMIT_NOFILE limit is on the maximum file descriptor value you may obtain/allocate, not on how many files may be open at a time.
Child processes inherit the limit, but other than that, there's nothing a child can do to influence the parent here. If the parent has some free fds in the range 0 to limit-1, then it will still be able to open new files (with respect to that limit) regardless of what any of its children do (you may run into other global limits, though).
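A quick Python sketch of that inheritance, using the standard resource and multiprocessing modules (512 is an arbitrary demo value):

```python
import multiprocessing
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Pick a demo soft limit, clamped to the hard limit (which may be
# RLIM_INFINITY, i.e. unlimited).
target = 512 if hard == resource.RLIM_INFINITY else min(512, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))

def report_limit(queue):
    # Runs in the forked child: report the limit it inherited.
    queue.put(resource.getrlimit(resource.RLIMIT_NOFILE))

ctx = multiprocessing.get_context("fork")  # fork inherits limits directly
queue = ctx.Queue()
child = ctx.Process(target=report_limit, args=(queue,))
child.start()
child_soft, child_hard = queue.get()
child.join()

print("child inherited soft limit:", child_soft)  # same as the parent's
```

Whatever the child then does with its own copy of the limit has no effect on the parent's.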
In any case, note that if the limit is, say, 500, you can still have more than 500 file descriptors open if some were opened (including in parent processes) before the limit was lowered.
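To see that point in action, here's a hedged Python sketch: it parks a descriptor at the (arbitrary) number 800 before lowering the limit to 500, assuming the hard limit is above 800, which it normally is:

```python
import os
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Raise the soft limit to the hard limit so we can place a descriptor
# at number 800.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
high_fd = os.dup2(1, 800)            # duplicate stdout onto fd 800

# Now lower the limit to 500. Descriptors that are already open are
# unaffected, even ones numbered above the new limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (500, hard))
os.write(high_fd, b"")               # fd 800 is still open and usable

# But any *new* descriptor must get a number below 500.
new_fd = os.open("/dev/null", os.O_RDONLY)
print("new fd:", new_fd)
os.close(new_fd)
os.close(high_fd)
```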
That process running ls has a limit of 500 there, inherited from its parent (so it can't get a new fd bigger than 499). Still, it does have fd 1023 open.