The size of the directory itself (as seen with ls -ld /var/lib/php/sessions) can give an indication. If it's small, there aren't many files. If it's large, there may be many entries in there, or there may have been many in the past.
Listing the contents, as long as you don't stat individual files, shouldn't take much longer than reading a file of the same size.
What might happen is that you have an alias for ls that does ls -F or ls --color. Those options cause an lstat system call to be performed on every file, to see for instance whether it is a file or a directory.
You'll also want to make sure that you list dot files and that you leave the file list unsorted. For that, run:
command ls -f /var/lib/php/sessions | wc -l
Provided not too many filenames have newline characters, that should give you a good estimate.
$ ls -lhd 1
drwxr-xr-x 2 chazelas chazelas 69M Aug 15 20:02 1/
$ time ls -f 1 | wc -l
3218992
ls -f 1 0.68s user 1.20s system 99% cpu 1.881 total
wc -l 0.00s user 0.18s system 9% cpu 1.880 total
$ time ls -F 1 | wc -l
<still running...>
You can also deduce the number of files there by subtracting the number of unique files elsewhere on the file system from the number of used inodes in the output of df -i.
For instance, if the file system is mounted on /var, with GNU find:
find /var -xdev -path /var/lib/php/sessions -prune -o \
-printf '%i\n' | sort -u | wc -l
That counts the number of files not in /var/lib/php/sessions. If you subtract it from the IUsed field in the output of df -i /var, you'll get an approximation of the number of files linked in /var/lib/php/sessions that are not otherwise linked anywhere else (an approximation, because some special inodes are not linked to any directory in a typical ext file system). Note that /var/lib/php/sessions could very well contain a huge number of entries for the same file (though the maximum number of links to a file is much lower than one billion on most file systems), so that method is not fool-proof.
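Putting the two together, a sketch (this assumes GNU df with --output= support, and ignores the special-inode caveat above):
used=$(df --output=iused /var | tail -n 1)
other=$(find /var -xdev -path /var/lib/php/sessions -prune -o -printf '%i\n' | sort -u | wc -l)
echo "$((used - other))"   # approximate number of files only linked in /var/lib/php/sessions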
Note that while reading the directory contents should be relatively fast, removing the files can be painfully slow. rm -r, when removing files, first lists the directory contents, and then calls unlink() for every file. And for every file, the system has to look up the name in that huge directory, which, if the directory is not hashed, can be very expensive.
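If you're curious, you can watch rm make those per-file calls with strace (a sketch; testdir is a hypothetical directory, and current GNU rm uses the unlinkat() variant):
strace -e trace=unlinkat rm -r testdir 2>&1 | head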
tl;dr: ls -U /proc/PID/fd | wc -l will tell you the number, which should be less than ulimit -n.
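For instance, for the current shell ($$ expands to its PID):
ls -U "/proc/$$/fd" | wc -l   # how many descriptors the shell has open
ulimit -n                     # the soft limit they are checked against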
/proc/PID/fd should contain all the file descriptors opened by a process, including but not limited to strange ones like epoll or inotify handles, "opaque" directory handles opened with O_PATH, handles opened with signalfd() or memfd_create(), sockets returned by accept(), etc.
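You can see what kind of object each descriptor refers to by listing the symlinks in there; epoll handles show up as anon_inode:[eventpoll] and sockets as socket:[...]. For the current shell:
ls -l "/proc/$$/fd"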
I'm not a great lsof user, but lsof gets its information from /proc, too. I don't think there's another way to get the list of file descriptors a process has opened on Linux other than procfs, or attaching to the process with ptrace.
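For comparison, lsof -p also lists entries that are not file descriptors, such as the current working directory (cwd), the root directory (rtd) and memory-mapped files (txt, mem):
lsof -p "$$"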
Anyway, the current and root directories, mmapped files (including the process's own binary and dynamic libraries) and the controlling terminal of a process are not counted against the limit set with ulimit -n (RLIMIT_NOFILE), and they also don't appear in /proc/PID/fd unless the process is explicitly holding open handles to them.
Best Answer
It is important to know that there are two kinds of limits: the soft limit, which is what is actually enforced for a process and which the process can raise itself up to the hard limit, and the hard limit, which acts as a ceiling for the soft limit and can only be raised by root.
Solution for a single session
In the shell set the soft limit:
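ulimit -Sn 2048   # -S changes only the soft limit; a plain ulimit -n 2048 would set the hard limit too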
This example will raise the limit to 2048, but the command will succeed only if the hard limit (check with ulimit -Hn) is the same or higher. If you need higher values, raise the hard limit using one of the methods below. The limits are set per process and are inherited by newly spawned processes, so anything you run after this command in the same shell will have the new limits.
Changing hard limit in a single session
This is not easy, because only root can change a hard limit, and after switching to root you have to switch back to the original user. Here is the solution with sudo:
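A sketch of one way to do it (16384 is an arbitrary example value; if pam_limits is enabled for su, it may override the inherited limit):
sudo sh -c 'ulimit -Hn 16384 && exec su "$SUDO_USER"'   # raise the hard limit as root, then switch back to the calling user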
System-wide solution
In Debian and many other systems using pam_limits you can set the system-wide limits in /etc/security/limits.conf and in files in /etc/security/limits.d. The conf file contains a description of the format. Example lines (the values shown are illustrative):
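# <domain>   <type>  <item>   <value>
@webadmins   hard    nofile   16384
@webadmins   soft    nofile   8192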
This will set the hard limit and the default soft limit for users in the group webadmins after login.
Other limits
The hard limit value is itself limited by the global limit on the number of open file descriptors in /proc/sys/fs/file-max, which is pretty high by default in modern Linux distributions. That value is in turn limited by the NR_OPEN value used during kernel compilation.
Is there not a better solution?
Maybe you could check whether all the *log files you feed to tail -f are really active files which need to be monitored. It is possible that some of them are already closed for logging, and you could just open a smaller number of files.
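One way to check (a sketch; /var/log/*log stands in for whatever pattern you pass to tail -f):
sudo lsof /var/log/*log   # the FD column ends in w for files a process holds open for writing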