Linux – Largest allowed maximum number of open files in Linux

Tags: files, limit, linux, open-files

Is there a (technical or practical) limit to how large you can configure the maximum number of open files in Linux? Are there some adverse effects if you configure it to a very large number (say 1-100M)?

I'm thinking of server usage here, not embedded systems. Programs that use huge numbers of open files can of course eat memory and be slow, but I'm interested in the adverse effects of configuring the limit much larger than necessary (e.g. memory consumed just by the configuration).
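For context, on a typical Linux system the kernel-side ceilings behind this limit can be read straight from /proc: fs/file-max is the system-wide cap on file handles, and fs/nr_open bounds how high a process's nofile hard limit can be set. A minimal C sketch, assuming the usual /proc layout:

```c
/* Minimal sketch: print the kernel-level ceilings that bound how high
 * the open-file limits can be raised. Assumes a typical Linux /proc layout. */
#include <stdio.h>

static void print_value(const char *path)
{
    FILE *f = fopen(path, "r");
    char buf[64];

    if (f == NULL) {
        perror(path);
        return;
    }
    if (fgets(buf, sizeof(buf), f) != NULL)
        printf("%s: %s", path, buf);   /* value already ends with '\n' */
    fclose(f);
}

int main(void)
{
    print_value("/proc/sys/fs/file-max"); /* system-wide max open file handles */
    print_value("/proc/sys/fs/nr_open");  /* ceiling for a process's RLIMIT_NOFILE */
    return 0;
}
```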

Best Answer

I suspect the main reason for the limit is to avoid excess memory consumption (each open file descriptor uses kernel memory). It also serves as a safeguard against buggy applications leaking file descriptors and consuming system resources.
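If you want to see that consumption in practice, /proc/sys/fs/file-nr reports how many file handles the kernel has currently allocated alongside the system-wide maximum. A minimal sketch, assuming the usual three-field format of that file:

```c
/* Minimal sketch: read /proc/sys/fs/file-nr to see how many file handles
 * the kernel has currently allocated versus the system-wide maximum.
 * Assumes the usual three fields: allocated, unused, maximum. */
#include <stdio.h>

int main(void)
{
    unsigned long allocated, unused, max;
    FILE *f = fopen("/proc/sys/fs/file-nr", "r");

    if (f == NULL) {
        perror("/proc/sys/fs/file-nr");
        return 1;
    }
    if (fscanf(f, "%lu %lu %lu", &allocated, &unused, &max) == 3)
        printf("file handles: %lu allocated, %lu unused, %lu max\n",
               allocated, unused, max);
    fclose(f);
    return 0;
}
```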

But given how much more RAM modern systems have than systems of ten years ago, I think today's defaults are quite low.

In 2011 the default hard limit for file descriptors on Linux was increased from 1024 to 4096.

Some software (e.g. MongoDB) uses many more file descriptors than the default limit. The MongoDB folks recommend raising this limit to 64,000. I've used an rlimit_nofile of 300,000 for certain applications.
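To see how many descriptors a given process actually has open (and how close it is getting to its nofile limit), you can count the entries in its /proc/&lt;pid&gt;/fd directory. A small sketch, using /proc/self/fd for the current process:

```c
/* Minimal sketch: count how many file descriptors the current process has
 * open by listing /proc/self/fd. Useful for checking how close a program
 * such as a database actually gets to its nofile limit. */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
    DIR *dir = opendir("/proc/self/fd");
    struct dirent *entry;
    int count = 0;

    if (dir == NULL) {
        perror("/proc/self/fd");
        return 1;
    }
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] != '.')   /* skip "." and ".." */
            count++;
    }
    closedir(dir);

    /* The count includes the descriptor opendir() itself holds. */
    printf("open file descriptors: %d\n", count);
    return 0;
}
```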

As long as you keep the soft limit at the default (1024), it's probably fairly safe to increase the hard limit. Programs have to call setrlimit() in order to raise their limit above the soft limit, and are still capped by the hard limit.
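As a rough sketch of what such a program does, the following raises its own soft RLIMIT_NOFILE up to whatever hard limit it was started with:

```c
/* Minimal sketch: raise this process's soft RLIMIT_NOFILE up to the hard
 * limit, which is what programs needing many descriptors typically do. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %llu, hard limit: %llu\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    rl.rlim_cur = rl.rlim_max;   /* an unprivileged process may raise the soft
                                    limit only as far as the hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    printf("soft limit raised to %llu\n", (unsigned long long)rl.rlim_cur);
    return 0;
}
```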
