If you run `find` with `-exec`, `{}` expands to the filename of each file or directory that `find` finds (so `ls` in your example gets every found filename as an argument; note that it invokes `ls`, or whatever other command you specify, once for each file found).

The semicolon `;` ends the command executed by `-exec`. It needs to be escaped as `\;` so that the shell you run `find` in does not treat it as its own special character, but instead passes it through to `find`.
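Quoting works just as well as backslash-escaping; either way the shell hands a literal `;` to `find`. A small sketch (the `*.jpg` pattern is just for illustration):

```shell
# Both forms pass a literal ';' to find, terminating the -exec command:
find . -iname '*.jpg' -exec ls -l {} \;
find . -iname '*.jpg' -exec ls -l {} ';'
```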
Also, `find` provides an optimization with `-exec cmd {} +`: when run like that, `find` appends the found files to the end of the command rather than invoking it once per file, so the command is run only once, if possible.
The difference in behavior (if not in efficiency) is easily noticeable if run with `ls`, e.g.
find ~ -iname '*.jpg' -exec ls {} \;
# vs
find ~ -iname '*.jpg' -exec ls {} +
Assuming you have some `jpg` files (with short enough paths), the result is one line per file in the first case, and the standard `ls` behavior of displaying files in columns for the latter.
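To make the per-file vs. batched invocation directly visible, you can substitute `echo` for `ls` (the scratch directory and file names here are hypothetical):

```shell
# Set up three sample files in a scratch directory
dir=$(mktemp -d)
touch "$dir/a.jpg" "$dir/b.jpg" "$dir/c.jpg"

# One echo invocation per file: three lines of output
find "$dir" -iname '*.jpg' -exec echo {} \; | wc -l

# One echo invocation for all files: a single line of output
find "$dir" -iname '*.jpg' -exec echo {} + | wc -l
```

The first count is 3 and the second is 1, confirming `+` batches all the arguments into one invocation.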
It's ever so clunky, but you can add every line to an array and, at the end, once you know the length, output everything but the last three lines.
... | awk '{l[NR] = $0} END {for (i=1; i<=NR-3; i++) print l[i]}'
Another (more efficient here) approach is manually stacking in three variables:
... | awk '{if (a) print a; a=b; b=c; c=$0}'
`a` is only printed after a line has moved from `c` to `b` and then into `a`, so the output always lags three lines behind the input. The immediate upsides are that it doesn't store all the content in memory and it shouldn't cause buffering issues (`fflush()` after printing if it does), but the downside is that it doesn't scale easily. If you want to skip the last 100 lines, you need 100 variables and 100 variable juggles.
If awk had `push` and `pop` operators for arrays, it would be easier.
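A scalable alternative (presumably what the "circular buffer" entry in the benchmark below refers to) keeps the last `n` lines in an array indexed modulo `n`, so skipping the last 100 lines only means changing `n`; a sketch:

```shell
# Keep a rolling window of n lines; a line is printed only once n newer
# lines have arrived, so the last n lines are never printed.
seq 6 | awk -v n=3 '{ i = NR % n; if (NR > n) print buf[i]; buf[i] = $0 }'
# prints 1, 2 and 3
```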
Or we could pre-calculate the number of lines we actually want to print with `$(($(wc -l < file) - 3))`. This is relatively useless for streamed content, but on a file it works pretty well:
awk -v n=$(($(wc -l < file) - 3)) 'NR<n' file
Typically speaking, you'd just use `head` though:
$ seq 6 | head -n-3
1
2
3
Using terdon's benchmark we can actually see how these compare. I thought I'd offer a full comparison though:
- `head`: 0.018s (me)
- `awk` + `wc`: 0.169s (me)
- `awk` 3 variables: 0.178s (me)
- `awk` double-file: 0.322s (terdon)
- `awk` circular buffer: 0.355s (Scrutinizer)
- `awk` for-loop: 0.693s (me)
The fastest solutions let a C-optimised utility like `head` or `wc` handle the heavy lifting, but in pure `awk`, the manually rotating stack is king for now.
Best Answer
`awk`: the interpreter for the AWK programming language. The AWK language is useful for manipulating data files, and for text retrieval and processing.

`-F <value>`: tells `awk` what field separator to use. In your case, `-F:` means that the separator is `:` (colon).

`'{print $4}'`: means print the fourth field (the fields being separated by `:`).

Example: let's say there's a file called `test`, and it contains the following:

If we execute the command `awk -F: '{print $4}' test`, the output will be:

Because `is` is the fourth field.
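The contents of the `test` file did not survive above, so here is a minimal sketch with a hypothetical file in which the fourth colon-separated field is the word `is`:

```shell
# Hypothetical contents; the original file from the answer was not preserved.
printf 'this:line:splits:is:colons\n' > test
awk -F: '{print $4}' test
# prints: is
```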