I'd use[1] find with two -exec actions, e.g.:
find . -type f -exec grep -qF SOME_STRING {} \; -exec sed 'COMMAND' {} \;
The second command will run only if the first one evaluates to true, i.e. exits with code 0, so sed will process the file in question only if that file contains SOME_STRING. It's easy to see how it works:
find . -type f -exec grep -qF SOME_STRING {} \; -print
It should list only those files that contain SOME_STRING. Sure, you can always chain more than two expressions and also use operators like ! (negation), e.g.:
find . -type f -exec grep -qF THIS {} \; ! -exec grep -qF THAT {} \; -print
will list only those files that contain THIS but don't contain THAT.
Anyway, in your case:
gfind /tmp/ -type f \( -name "*.h" -o -name "*.cpp" \) \
-exec ggrep -qF LARGE_INTEGER {} \; \
-exec gsed -i '1s/^/#include <stdint.h>\n/' {} \;
[1] I assume your xargs doesn't support the -0 or --null option. If it does, use the following construct:
find . -type f -exec grep -lFZ SOME_STRING {} + | xargs -0 gsed -s -i 'COMMAND'
i.e. in your case:
gfind /tmp/ -type f \( -name "*.h" -o -name "*.cpp" \) \
-exec ggrep -lFZ LARGE_INTEGER {} + | \
xargs -0 gsed -s -i '1s/^/#include <stdint.h>\n/'
It should be more efficient than the first one.
Also, both will work with all kinds of file names. Note that I'm using grep with -F (fixed string) as it is faster; remove it if you're planning to use a regex instead.
-path works exactly like -name, but applies the pattern to the entire pathname of the file being examined, instead of to just the last component.
-prune forbids find from descending below the matched file, in case it is a directory.
Putting it all together, the command
find $HOME -path $HOME/$dir_name -prune -o -name "*$file_suffix" -exec cp {} $HOME/$dir_name/ \;
- Starts looking for files in $HOME.
- If it finds a file matching $HOME/$dir_name, it won't go below it ("prunes" the subdirectory).
- Otherwise (-o), if it finds a file matching *$file_suffix, it copies it into $HOME/$dir_name/.
The idea seems to be to make a backup of some of the contents of $HOME in a subdirectory of $HOME. The part with -prune is obviously necessary in order to avoid making backups of backups...
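To preview which files would be picked up before actually copying anything, the same expressions can be combined with -print instead of the -exec action; this is just a sketch, assuming $dir_name and $file_suffix are set as above:
find $HOME -path "$HOME/$dir_name" -prune -o -name "*$file_suffix" -print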
Best Answer
I'm not sure the command you posted is really what you meant. That would mean grep recursively in all the non-hidden files and dirs in / (but still look inside hidden files and dirs inside those). Assuming you meant to search the whole of / recursively, something like the sketch below.
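Here SOME_STRING is just a stand-in for whatever string you're actually searching for:
grep -r SOME_STRING /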
A few things to note:
- Not all grep implementations support -r. And among those that do, the behaviours differ: some follow symlinks to directories when traversing the directory tree (which means you may end up looking several times in the same file or even run in infinite loops), some will not. Some will look inside device files (and it will take quite some time in /dev/zero for instance) or pipes or binary files..., some will not.
- grep starts looking inside files as soon as it discovers them. But while it looks in a file, it's no longer looking for more files to search in (which is probably just as well in most cases).

Your find ... -exec grep ... {} \; command (with the -r dropped, as it didn't make sense here) is terribly inefficient, because you're running one grep per file. ; should only be used for commands that accept only one argument. Moreover here, because grep looks only in one file, it will not print the file name, so you won't know where the matches are. You're not looking inside device files, pipes, symlinks..., you're not following symlinks, but you're still potentially looking inside things like /proc/mem.
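That per-file form would be something along these lines (a sketch, SOME_STRING again being a placeholder):
find / -type f -exec grep SOME_STRING {} \;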
Batching the files with -exec ... {} + instead would be a lot better, because as few grep commands as possible would be run. You'd get the file name unless the last run has only one file; for that it's better to also pass /dev/null on the command line, or, with GNU grep, to use the -H option (see the sketches below).
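Rough sketches of those variants, still with SOME_STRING as a placeholder:
find / -type f -exec grep SOME_STRING {} +              # batches of files per grep invocation
find / -type f -exec grep SOME_STRING /dev/null {} +    # /dev/null forces the file names to be printed
find / -type f -exec grep -H SOME_STRING {} +           # GNU grep: -H does the same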
Note that grep will not be started until find has found enough files for it to chew on, so there will be some initial delay. And find will not carry on searching for more files until the previous grep has returned. Allocating and passing the big file list has some (probably negligible) impact, so all in all it's probably going to be less efficient than a grep -r that doesn't follow symlinks or look inside devices.

With GNU tools, an alternative is to pipe the file list from find to xargs, as sketched below.
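A sketch of that approach (the exact options are assumptions; -print0 and -0 pass the file names NUL-delimited so odd file names are safe, and -r stops xargs from running grep if nothing was found):
find / -type f -print0 | xargs -r0 grep -H SOME_STRING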
As above, as few grep instances as possible will be run, but find will carry on looking for more files while the first grep invocation is looking inside the first batch. That may or may not be an advantage, though. For instance, with data stored on rotational hard drives, find and grep accessing data stored at different locations on the disk will slow down the disk throughput by causing the disk head to move constantly. In a RAID setup (where find and grep may access different disks) or on SSDs, that might make a positive difference.

In a RAID setup, running several concurrent grep invocations might also improve things. Still with GNU tools, on RAID1 storage with 3 disks, running two greps in parallel (see the sketch below) might increase the performance significantly. Note however that the second grep will only be started once enough files have been found to fill up the first grep command. You can add a -n option to xargs for that to happen sooner (and pass fewer files per grep invocation).
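For instance (again only a sketch; -P2 tells GNU xargs to keep up to two greps running at a time, and -n1000 is an arbitrary batch size so that the second one starts sooner):
find / -type f -print0 | xargs -r0 -n1000 -P2 grep -H SOME_STRING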
Also note that if you're redirecting xargs output to anything but a terminal device, then the greps will start buffering their output, which means that the output of those greps will probably be incorrectly interleaved. You'd have to use stdbuf -oL (where available, like on GNU or FreeBSD) on them to work around that (you may still have problems with very long lines, typically >4KiB), or have each write its output in a separate file and concatenate them all in the end.

Here, the string you're looking for is fixed (not a regexp), so using the -F option might make a difference (unlikely, as grep implementations know how to optimise that already).

Another thing that could make a big difference is fixing the locale to C if you're in a multi-byte locale (see the sketch below).
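For example (a sketch; the LC_ALL=C in front of xargs is inherited by the greps it starts):
find / -type f -print0 | LC_ALL=C xargs -r0 grep -H SOME_STRING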
To avoid looking inside /proc, /sys..., use -xdev and specify the file systems you want to search in, or prune the paths you want to exclude explicitly. Both are sketched below.
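Rough sketches of both approaches (the starting points and the excluded paths are only examples):
find / /home -xdev -type f -exec grep -H SOME_STRING {} +    # stay on the listed file systems
find / \( -path /proc -o -path /sys -o -path /dev \) -prune -o -type f -exec grep -H SOME_STRING {} +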