I plan to run a Java app using nohup ... &. The limit must apply to commands started like this.
How to set a limit on the number of files Linux can have open
Related Solutions
The reason is that the operating system needs memory to manage each open file, and memory is a limited resource - especially on embedded systems.
As the root user you can change the maximum number of open files per process (via ulimit -n) and system-wide (e.g. echo 800000 > /proc/sys/fs/file-max).
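For example, a rough sketch (the values are illustrative, and the system-wide change needs root):

    ulimit -n 4096                        # per-process limit for this shell and its children
    echo 800000 > /proc/sys/fs/file-max   # system-wide ceiling on open files
    cat /proc/sys/fs/file-nr              # allocated, unused, and maximum file handles

Writing into /proc does not survive a reboot; setting fs.file-max in /etc/sysctl.conf is the persistent equivalent.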
Specifically for setrlimit:
Here are some of the more useful resource limits that you may wish to look into; pulled 'em from the man pages (the shell equivalents are shown after the list).
RLIMIT_NOFILE
Specifies a value one greater than the maximum file descriptor number that can be opened by this process.
RLIMIT_NPROC
The maximum number of processes (or, more precisely on Linux, threads) that can be created for the real user ID of the calling process. Upon encountering this limit, fork(2) fails with the error EAGAIN.
RLIMIT_SIGPENDING
Specifies the limit on the number of signals that may be queued for the real user ID of the calling process. Both standard and real-time signals are counted for the purpose of checking this limit.
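From a shell, the bash ulimit built-in exposes the same limits, so you can experiment without writing any code (the values here are just illustrative):

    ulimit -n 4096    # RLIMIT_NOFILE: max open file descriptors
    ulimit -u 2048    # RLIMIT_NPROC: max processes for this user
    ulimit -i 8192    # RLIMIT_SIGPENDING: max queued signals
    ulimit -a         # show all current limits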
There also seem to be other really cool limits that can be set, so I'm thankful I ran across your question, as it has shown me yet another tool for keeping processes in check.
General Unix/Linux
I believe the general term for the kind of application-limiting tool you are looking for is a Sandbox.
For UNIX it looks like Contractor and Passenger are solid options, and for Linux I've seen Docker, KVM & Firejail used on systems as constrained as the Raspberry Pi B+ v2 or dual-core netbooks. For most of the sandboxing action you'll need a system and kernel capable of virtualization. On systems such as Android I've seen SELinux used on the latest CyanogenMod ROMs (a frustrating bit to get around if you want to use a chroot app... but I digress), and on some systems where I've run Ubuntu I've run across AppArmor popping errors when a newly installed program tries to phone home with a persistent connection. Suffice it to say there are lots of options for controlling what a specific program or set of programs may do, see, and/or communicate with, and how much of the CPU's & GPU's resources may be used.
The best of the bunch for your usage scenario, if you can get it working (kinda iffy, as I'm still working with the dev to get ARMhf binaries working), would be Firejail, as the guide hosted on the dev's home page covers a dual-gaming rig that could be modified to suit your needs. It has a low memory footprint in comparison to the others mentioned (from what I've seen, that is) and is highly configurable as to what files a process has access to and whether or not persistence is allowed. This would be good for testing, as you would have a set working environment that is repeatable, customizable, and ultimately deletable if needed.
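As a rough idea of what that could look like for your nohup'd Java app (the jar name is a placeholder, and the --rlimit-nofile option may not exist in older Firejail builds):

    # run in a throwaway home directory with a cap on open file descriptors
    nohup firejail --private --rlimit-nofile=500 java -jar myapp.jar &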
For systems without full virtualization support I've seen SELinux usually used to define stricter rules on top of the user/group permission settings that are already in place for read & write access. The term to search for there is Linux namespaces; it turns out there are lots of hidden ways one can restrict actions, but the biggest hole for all these options is root: even in a well-constructed chroot jail, if there are ways to obtain root permissions within the jail or sandbox, then there are ways to escalate into the ID of the user that is running the jailed process.
Basically there should be multiple layers for a process to have to break out of. For example, for a web server I'll be setting up a restrictive set of firewall rules; log readers that dynamically add rules and change firewall settings (fail2ban with custom actions and scripts); then a chroot jail that only has the required dependencies for a web server in its directory structure, bound to a port above 1024 so that it doesn't even need root-level permissions for socket binding; and wrapping those inside a virtualized sandbox (likely with Firejail), with the host running intrusion-detection measures such as tripwire and honeyd within their own respective jails. All so that if .php and similar code that should not be modified on the public server does receive a bad touch, it is ignored, backups are restored, and the offender is banned from future access.
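For the fail2ban layer, a minimal jail.local sketch (the ban and retry values are just examples; custom actions and scripts would be referenced from here):

    [DEFAULT]
    bantime  = 3600
    findtime = 600
    maxretry = 5

    [sshd]
    enabled = true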
In your example code it doesn't look like you're doing much with networking, but more than likely it will be called from another script or function, and because it is obviously calling up child processes you'll want to figure out how to sanitize input and catch errors at every step (look up the link that killed the Chrome browser for why), and ensure that unsanitized input is not read or interpreted by a privileged user (look up how to add shellshock to Firefox's browser ID for why). If there is networking involved in calling or returning output, then the ports the process is bound to should be unprivileged ports (use iptables/firewall forwarding if it's a web-app kind of thing; see the sketch below). While there is a plethora of options for locking a system's services down, there also seem to be many options for testing code's breakability; Metasploit and drone.io are two fairly well-known pentesting and code-testing options that you may wish to look into before someone does it for you.
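For the forwarding part, a minimal sketch (assuming iptables and a service listening on the unprivileged port 8080; adjust ports to your setup):

    # redirect incoming traffic on privileged port 80 to unprivileged port 8080
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080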
Best Answer
Most systems use PAM, and have the pam_limits module set limits based on /etc/security/limits.conf. The per-user limit for open files is called nofile. You can set it for every user or for a particular user or group, and you can set a limit that the user can override (soft limit) and another that only root can override (hard limit). The documentation and the limits.conf man page have the details. For example, to raise the limit to 50000 for everyone, put a line like this in /etc/security/limits.conf (the setting takes effect when you log in):
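    # <domain> <type> <item> <value>
    *    hard    nofile    50000

Raising only the hard limit lets each user raise their own soft limit up to 50000 with ulimit -n; add a matching soft line if you want the higher value to apply automatically at login.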