What are the costs of increasing `/proc/sys/fs/inotify/max_user_watches` value


In order to watch my home directory and all subdirectories recursively for 60 seconds:

$ inotifywatch -v -r -t 60 /path

You may get the error `Failed to watch /path; upper limit on inotify watches reached!`, which you can fix by raising the limit, e.g. to 128k:

# echo $[ 128*1024 ] | tee /proc/sys/fs/inotify/max_user_watches
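As a sketch, here is how you might check the current limit and make the change persistent across reboots. The `90-inotify.conf` filename is an arbitrary choice of mine; adjust for your distribution's `sysctl.d` conventions:

```shell
# Current per-user watch limit (Linux; the default varies by distribution)
cat /proc/sys/fs/inotify/max_user_watches

# The same 128k value, written with portable POSIX arithmetic
echo $((128 * 1024))    # 131072

# To survive reboots, a drop-in under /etc/sysctl.d is a common convention:
#   echo 'fs.inotify.max_user_watches=131072' | sudo tee /etc/sysctl.d/90-inotify.conf
#   sudo sysctl --system
```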

This made me wonder:

What exact costs does having n inotify watches incur?

I ask about both concrete costs and asymptotic complexity (I haven't yet dug into which data structures, in which parts of the kernel stack, implement inotify, or how they are hooked in).

I mean computational, memory, and other costs.

I imagine these to be functions (giving concrete numbers in KiB, estimates of CPU load (maybe there are some good benchmarks), or even asymptotic bounds, e.g. "each io …") of:

  • files/directories watched
  • operations on files/directories performed
  • lengths of inotify watches queues

but maybe I've missed something?

I haven't dug into the architecture yet, but I wonder: does it affect operations on non-watched inodes/directories/files/paths?

Similarly, how does this differ for fanotify?

Best Answer

I don't use inotifywatch, I use gidget, so my answer isn't specific to that tool; it's just a hopefully useful observation about inotify (which I use heavily).

Each inotify watch uses 540 bytes of kernel memory on 32-bit architectures, and 1080 bytes on 64-bit architectures. Kernel memory is unswappable. So there is a memory cost, certainly.
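As a back-of-the-envelope check, assuming the ~1080-byte-per-watch figure above, the worst case if every one of the 128k watches from the question were actually in use on a 64-bit kernel:

```shell
# Worst-case unswappable kernel memory for a raised watch limit,
# assuming roughly 1080 bytes per watch on a 64-bit architecture.
watches=$((128 * 1024))
bytes_per_watch=1080
total=$((watches * bytes_per_watch))
echo "$total bytes"                  # 141557760 bytes
echo "$((total / 1024 / 1024)) MiB"  # 135 MiB
```

So even the raised limit costs at most on the order of 135 MiB, and only if every watch is actually consumed.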

But in my experience it's not the number of watches that slows things down - the kernel checks them quickly. What tends to impact system performance, in real use, is whatever you are doing when the inotify events trigger. If you have (for example) ten to forty thousand HIPAA-compliant 835 files arriving at wire speed over a gigabit or ten-gigabit link, the user processes being fired up to deal with each one are going to hammer the system far harder than inotify itself.

To answer at least one of your questions, though: no, setting watches will not have any effect or cost for unwatched files or folders. The kernel always checks whether a watch is set, and when there isn't one, that check takes the same amount of time and resources no matter how many other filesystem objects are being watched. But again, if you are constantly spawning vast numbers of processes (for any reason), that will definitely have an impact on total system performance.

Also, you mentioned that the tool you're using can recursively watch folders and subfolders. That is not something inotify itself does; it's something your tool is doing. It is probably scanning the folder you've targeted for subfolders, setting a watch on each of them, and then setting up another watch whenever a new subfolder is created. So there is some overhead activity going on that is only indirectly related to the inotify system, and its impact on performance is mostly due to the tool's behavior, not the behavior of inotify. If the tool is sloppy and resource-inefficient (I don't know, but I kind of doubt it, since inotify tools are usually written in C) this could be an issue.
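Since one watch is needed per directory, you can estimate how many watches a recursive watch of a tree will consume before raising any limits. A minimal sketch (the choice of `find` is mine, not necessarily what inotifywatch does internally):

```shell
# Estimate the number of inotify watches a recursive watch of "$dir"
# would need: recursive tools typically set one watch per directory.
dir=${1:-.}
find "$dir" -type d | wc -l
```

If that count approaches your `max_user_watches` value, the recursive watch will fail partway through the tree.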