Sort command and the most recent log file
Why use -r (reverse sort order) and then reach for the end of the output with tail? Using the normal sort order and taking the first entry is quicker!
tail -f `/bin/ls -1td /path/to/log/file/*| /usr/bin/head -n1`
or
tail -f $(/bin/ls -1t /path/to/log/file/* | /bin/sed q)
Both work fine.
Note: I like to use sed here because it lives in /bin, which may be available before /usr is mounted.
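As an aside, sed q and head -n1 are interchangeable in this role: q tells sed to quit right after auto-printing the first input line. A quick check:

```shell
# "q" makes sed quit after auto-printing the first line of input,
# so it behaves exactly like head -n1 here.
printf 'newest.log\nolder.log\noldest.log\n' | sed q
printf 'newest.log\nolder.log\noldest.log\n' | head -n1
```

Both print only newest.log.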
tail -f `/bin/ls -1tr /path/to/log/file/* | /bin/sed -ne '$p'`
would work but, as already said: reversing the sort order and then dropping the whole output to use only the last entry is not a really good idea ;-)
Warning: in the target directory, * must not match any directory, or else tail won't know how to open it.
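If the glob might match subdirectories, one hedge (a sketch, using a hypothetical /tmp/demo_logs tree) is to filter the matches down to regular files before handing them to tail:

```shell
# Hypothetical layout: two log files plus a subdirectory that the
# glob would also match.
dir=/tmp/demo_logs
mkdir -p "$dir/subdir"
touch "$dir/a.log" "$dir/b.log"

# Keep only regular files, so tail never receives a directory.
for f in "$dir"/*; do
    [ -f "$f" ] && printf '%s\n' "$f"
done
```

Only a.log and b.log survive the [ -f ] test; subdir is skipped.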
The same, but using find to search for the most recent file:
read -a file < <(
find /tmp 2>/dev/null -type f -mmin +-1 -mmin -10 -printf "%Ts %p\n" |
sort -rn)
tail -f ${file[1]}
Notes:
- -mmin +-1 ensures that badly timestamped files (dated in the future) are not listed.
- read is a builtin; it creates an array and avoids a head -n1 | cut -d \  -f2 pipeline.
- -mmin -10 could be changed or dropped, but it keeps the list to sort short.
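To see what read -a stores, feed it a line shaped like find's "%Ts %p" output (the timestamp and path below are made-up example values):

```shell
# read -a splits the line on whitespace into an array: element 0 is
# the epoch timestamp, element 1 the path — which is why the tail
# command above uses ${file[1]}.
read -a file <<< "1700000000 /tmp/newest.log"
echo "time=${file[0]} path=${file[1]}"
```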
But tail can watch more than one file at a time. Open two shell consoles and try this:
In the 1st console:
user@host[pts/1]:~$ touch /tmp/file_{1,2,3}
user@host[pts/1]:~$ tail -f /tmp/file_{1,2,3}
==> /tmp/file_1 <==
==> /tmp/file_2 <==
==> /tmp/file_3 <==
In the second one, while keeping the 1st console visible, run this several times:
user@host[pts/2]:~$ tee -a /tmp/file_$((RANDOM%3+1)) <<<$RANDOM
25285
user@host[pts/2]:~$ tee -a /tmp/file_$((RANDOM%3+1)) <<<$RANDOM
16381
user@host[pts/2]:~$ tee -a /tmp/file_$((RANDOM%3+1)) <<<$RANDOM
19766
user@host[pts/2]:~$ tee -a /tmp/file_$((RANDOM%3+1)) <<<$RANDOM
3053
The 1st console could look like:
==> /tmp/file_2 <==
25285
==> /tmp/file_1 <==
16381
19766
==> /tmp/file_3 <==
3053
...
In the spirit of the SO question, but time-based and multi-file: using the find command, we can watch files modified in the last minutes (-mmin) or the last days (-mtime):
find /path/to/logdir -type f -mmin -10 -exec tail -f {} +
to watch log files modified within the last 10 minutes.
Note: have a look at man tail, in particular at:
- the -F option, for long-running watches
- the -q option, for not printing file names
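For instance, -q drops the ==> file <== headers when several files are tailed; a quick throwaway demo in /tmp:

```shell
# Two throwaway files; -q suppresses the "==> name <==" headers that
# tail normally prints between files.
printf 'one\n' > /tmp/demo_a.log
printf 'two\n' > /tmp/demo_b.log
tail -q -n 1 /tmp/demo_a.log /tmp/demo_b.log
```

This prints just the two lines, one and two, with no headers. -F (follow by name) matters for long watches: unlike -f, it reopens a log after it has been rotated.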
Fancy formatting
find /path/to/logdir -type f -mmin -10 -exec tail -f {} + |
sed -une 's/^==> .path.to.logdir.\(.*\) <==$/\1 /;ta;bb;
:a;s/^\(.\{12\}\) *$/\1: /;h;bc;
:b;G;s/^\(..*\)\n\(.*\)/\2 \1/p;:c;'
where you can modify .path.to.logdir. and change 12 to a more suitable length.
As a sample, keeping our two consoles, stop the command in the 1st one and try:
user@host[pts/1]:~$ find /tmp/ -type f -mtime -1 -name 'file_?' -exec tail -f {} + |
sed -une 's/^==> .tmp.\(.*\) <==$/\1 /;ta;bb;
:a;s/^\(.\{12\}\) *$/\1: /;h;bc;
:b;G;s/^\(..*\)\n\(.*\)/\2 \1/p;:c;'
file_2 : 25285
file_1 : 16381
file_1 : 19766
file_3 : 3053
Then, in the second console, run a few more:
user@host[pts/2]:~$ tee -a /tmp/file_$((RANDOM%3+1)) <<<$RANDOM
It is possible like this, but as others have said, the safest option is to generate a new file and then move it over the original.
The method below loads the lines into bash, so the number of lines kept by tail affects the memory usage of the local shell while it holds the log content.
It also removes any empty lines at the end of the log file (due to how bash evaluates "$(tail -1000 test.log)"), so it does not give a truly 100% accurate truncation in all scenarios, but depending on your situation it may be sufficient.
$ wc -l myscript.log
475494 myscript.log
$ echo "$(tail -1000 myscript.log)" > myscript.log
$ wc -l myscript.log
1000 myscript.log
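The safer variant mentioned at the top (write a new file, then move it over the original) can be sketched like this, using a throwaway /tmp/demo.log:

```shell
# Build a throwaway log, then keep only its last 1000 lines by
# writing them to a temp file and moving that over the original.
log=/tmp/demo.log
seq 1 475494 > "$log"                       # stand-in for myscript.log
tail -n 1000 "$log" > "$log.tmp" && mv "$log.tmp" "$log"
wc -l < "$log"                              # now 1000 lines
```

Note the trade-off: mv replaces the inode, so a process that still has the old log open keeps writing to the now-deleted file, whereas the echo "$(tail ...)" > file form rewrites the file in place.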
Best Answer
This won't (can't) work: the sub-shells you're using to determine the latest file only get executed once. The glob won't work either, because the shell evaluates the wildcard only ONCE.
One way to deal with it would be to use watch to run the whole thing, but that would chop up the output every few seconds.
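A sketch of that re-evaluation idea, using a hypothetical /tmp/demo_pick directory: wrapping the selection in a function (as watch effectively does by re-running the command) makes "the newest file" be computed on every call instead of once:

```shell
# The ls runs again on each call, so the function always returns the
# file that is newest *now*, not at the time the command was typed.
pick_newest() { ls -1t /tmp/demo_pick/* | head -n 1; }

mkdir -p /tmp/demo_pick
touch /tmp/demo_pick/old.log
sleep 1                      # ensure distinct mtimes
touch /tmp/demo_pick/new.log
pick_newest
```

With watch, the equivalent would be watch -n 2 'tail -n 20 "$(ls -1t /path/to/log/file/* | head -n1)"' (single quotes defer the substitution to each interval), at the cost of the screen being redrawn every few seconds.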