Is it fine to use tail -f on large log files

Tags: logs, monitoring, tail

I would like to monitor a large log file (close to 1 GB) for errors, in close to real time (a delay of a few seconds is fine). My plan is to use tail -f | grep. Are there any performance issues with such a method when running it for a long time, say as the file grows from zero bytes to 1 GB? Are there any standard practices for this kind of monitoring? Note that I would like to do this using standard Unix commands available on Solaris 10.

On top of that, my file rolls over, so I have one more problem to sort out :). Using tail -F (--follow=name) is not an option for me because -F is not supported on the server I want to run this on. My plan is to use a script that starts the tail and polls to detect whether the file has rolled over; if it has, kill the tail and restart it (see the sketch below). Is there a better approach?
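A minimal, untested sketch of that restart loop, assuming a Bourne-compatible /bin/sh and detecting rotation by an inode change reported by ls -i; the log path, the grep pattern, and the 5-second poll interval are placeholders:

#!/bin/sh
# Sketch only: follow LOGFILE for lines matching PATTERN, restarting
# tail -f whenever the file is rotated. Rotation is detected by
# comparing inode numbers with ls -i (stat(1) and tail -F may not be
# available on Solaris 10).

LOGFILE=/var/log/app.log    # placeholder path
PATTERN=ERROR               # placeholder pattern

while :; do
    # Remember the inode of the file we are about to follow.
    old_inode=`ls -i "$LOGFILE" | awk '{print $1}'`

    # The background tail inherits the loop's stdout, i.e. the pipe to
    # grep below, so $! really is the tail process.
    tail -f "$LOGFILE" &
    tail_pid=$!

    # Poll for rotation: a rotated file reappears under the same name
    # with a new inode (or is briefly missing).
    while :; do
        sleep 5
        new_inode=`ls -i "$LOGFILE" 2>/dev/null | awk '{print $1}'`
        if [ "$new_inode" != "$old_inode" ]; then
            kill "$tail_pid" 2>/dev/null
            break               # outer loop reopens the new file
        fi
    done
done | grep "$PATTERN"

Piping the outer loop (rather than the tail itself) into grep keeps $! pointing at tail, so the kill targets the right process; killing the last element of a tail | grep pipeline would leave the old tail holding the rotated file open.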

Best Answer

On my Linux system (GNU coreutils 8.12), I was able to check (using strace) that tail -f¹ uses the lseek system call to skip over most of the file quickly:

lseek(3, 0, SEEK_CUR)                   = 0
lseek(3, 0, SEEK_END)                   = 194086
lseek(3, 188416, SEEK_SET)              = 188416

This means that the size of the tracked file should not matter in any way.

You can check whether the same applies on your system (it should).
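On Solaris 10 there is no strace, but truss can run the same check; a sketch, with a placeholder log path, assuming the relevant calls show up as lseek/llseek:

# Trace only seek-related system calls made by tail; a seek to near the
# end of the file right at startup confirms the whole file is not read.
truss -t lseek,llseek tail -f /var/log/app.log 2>&1 | head -20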


1. I also tried disabling inotify support with the undocumented ---disable-inotify option, just in case.
