I am trying to find the reason why my long-running app sometimes busts the maximum open file descriptor limit (`ulimit -n`). I would like to periodically log how many file descriptors the app has open, so that I can see when the spike occurred. I know that `lsof` includes a bunch of items that are excluded from `/proc/$PID/fd`… Are those items relevant with regard to the open file descriptor limit? I.e., should I be logging info from `lsof` or from `/proc/$PID/fd`?
Best Answer
tl;dr: `ls -U /proc/PID/fd | wc -l` will tell you the number, which should stay below `ulimit -n`.
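If you want to turn that into periodic logging, a small shell loop around the same command is enough. A minimal sketch, assuming the app's PID is in `$PID` and a 60-second interval (both placeholders to adjust); it also reads the process's own soft limit from `/proc/$PID/limits`, since that is the number the count must stay under. Note that listing another process's `/proc/$PID/fd` requires running as the same user or root:

```
#!/bin/sh
PID=1234    # placeholder: PID of your long-running app

# The process's own soft RLIMIT_NOFILE, i.e. the number the
# count has to stay below (4th field of the "Max open files"
# row in /proc/PID/limits).
limit=$(awk '/^Max open files/ {print $4}' "/proc/$PID/limits")

while kill -0 "$PID" 2>/dev/null; do
    # -U skips sorting, which keeps this cheap even with many fds.
    count=$(ls -U "/proc/$PID/fd" | wc -l)
    printf '%s fds=%s limit=%s\n' "$(date '+%F %T')" "$count" "$limit" >> fd-count.log
    sleep 60
done
```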
`/proc/PID/fd` should contain all the file descriptors opened by a process, including but not limited to strange ones like epoll or inotify handles, "opaque" directory handles opened with `O_PATH`, handles opened with `signalfd()` or `memfd_create()`, sockets returned by `accept()`, etc.
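Each entry in `/proc/PID/fd` is a symlink whose target identifies the underlying object, so you can see *what* the descriptors are, not just how many. The listing below is illustrative, not from a real run (the paths and inode numbers are made up):

```
$ ls -l /proc/PID/fd        # owner/mode/date columns trimmed
0 -> /dev/null
1 -> /var/log/myapp.log
3 -> anon_inode:[eventpoll]     # an epoll instance
4 -> anon_inode:inotify         # an inotify instance
5 -> anon_inode:[signalfd]      # from signalfd()
6 -> socket:[48151623]          # e.g. a socket from accept()
```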
I'm not a great `lsof` user, but `lsof` is getting its info from `/proc`, too. I don't think there's another way to get the list of the file descriptors a process has opened on Linux other than procfs, or by attaching to a process with `ptrace`.
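The extra items you see in `lsof` are the pseudo-entries it prints in its FD column: `cwd` (current directory), `rtd` (root directory), `txt` (program text), and `mem` (memory-mapped files). Illustrative output, with all column values made up:

```
$ lsof -p 1234
COMMAND  PID USER  FD   TYPE DEVICE SIZE/OFF   NODE NAME
myapp   1234 user  cwd   DIR  253,0     4096 131074 /home/user
myapp   1234 user  rtd   DIR  253,0     4096      2 /
myapp   1234 user  txt   REG  253,0    51234 655361 /usr/bin/myapp
myapp   1234 user  mem   REG  253,0  2100000 655362 /usr/lib/libc.so.6
myapp   1234 user    0u  CHR  136,3      0t0      6 /dev/pts/3
```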
Anyway, the current and root directory, mmapped files (including the process's own binary and dynamic libraries), and the controlling terminal of a process are not counted against the limit set with `ulimit -n` (`RLIMIT_NOFILE`), and they also don't appear in `/proc/PID/fd` unless the process is explicitly holding open handles to them.
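So for the purposes of `RLIMIT_NOFILE`, counting the entries in `/proc/PID/fd` is what you want to log. If you'd rather use `lsof`, you can exclude those pseudo-entries with its `-d` filter (the `^` prefix negates a selection; `-a` ANDs it with `-p`) so the count matches. A sketch; check `man lsof` for your version:

```
# Count only numeric file descriptors; exclude lsof's
# cwd/rtd/txt/mem pseudo-entries, which occupy no fd slot.
lsof -a -p "$PID" -d '^cwd,^rtd,^txt,^mem' | tail -n +2 | wc -l
```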