What are the dangers of setting a high limit to max File Descriptors per process

file-descriptors ksh solaris ulimit

I'm working on a legacy application, and I commonly come across settings that no one around can explain.

Apparently, at some point, some processes in the application were hitting the maximum number of file descriptors allowed per process, and the team at the time decided to increase the limit by adding the following to their shells' init files (.kshrc):

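# Take the "file descriptor" line from /etc/sysdef, keep its first field (the
# limits printed in hex), extract the third "x"-delimited piece (the maximum),
# and let ksh's 16# prefix convert it from base 16 into a decimal value in nfd.
# The ulimit call below then raises this shell's soft limit to that value.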
((nfd=16#$(/etc/sysdef | grep "file descriptor" | awk '{ print $1 }' | cut -f3 -d "x")))

ulimit -n $nfd

This raises the value reported by ulimit -n from 256 to 65536. Virtually every process on our machines runs with this high soft limit.
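To check what a given shell actually ended up with, the soft and hard values can be displayed separately (ksh's ulimit builtin accepts -S and -H for this):

ulimit -Sn    # soft limit, inherited by processes started from this shell
ulimit -Hn    # hard limit, the ceiling the soft limit can be raised to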

Are there any risks to this brute-force approach? What is the proper way to calibrate ulimit?

Side question: How can I find the number of file descriptors currently in use by a running process?


Environment

  • OS: SunOS …. 5.10 Generic_120011-14 sun4u sparc SUNW,Sun-Fire-V215
  • Shell: ksh Version M-11/16/88i

Best Answer

To see the number of file descriptors in use by a running process, run pfiles on the process id.
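For example, for a hypothetical process id 1234, either of the following gives a quick count (each open descriptor normally appears as one S_IF… line in pfiles output, and as one entry under /proc/<pid>/fd):

pfiles 1234 | grep -c S_IF    # one "S_IF..." line per open descriptor
ls /proc/1234/fd | wc -l      # one /proc entry per open descriptor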

There can be a performance impact from raising the number of file descriptors available to a process, depending on the software and how it is written. Programs may use the maximum number of fds to size data structures such as select(3C) bitmask arrays, or perform operations such as calling close() in a loop over every possible fd (though software written for Solaris can use the fdwalk(3C) function to do that only for the fds that are actually open, instead of up to the maximum possible value).
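If you suspect that a particular program closes every possible descriptor in such a loop, one rough way to check (a sketch; /path/to/suspect_program is a placeholder) is to compare how many close() calls it makes under a low and a high soft limit, using truss to count them:

( ulimit -Sn 256;   truss -c -t close /path/to/suspect_program >/dev/null )
( ulimit -Sn 65536; truss -c -t close /path/to/suspect_program >/dev/null )

truss -c prints per-call counts on stderr when the traced program exits; a program that loops over all possible fds will show a close count that tracks the soft limit.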