I'm not sure:
grep -r -i 'the brown dog' /*
is really what you meant. That would mean running grep recursively on all the non-hidden files and directories in / (while still looking inside hidden files and directories within those).
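To see what that glob actually hands to grep, you can expand it yourself; a quick illustration (the output depends on your system):

printf '%s\n' /*

Only the non-hidden top-level entries of / are listed; those are the only starting points a grep -r on /* would get.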
Assuming you meant:
grep -r -i 'the brown dog' /
A few things to note:
- Not all grep implementations support -r, and among those that do, the behaviours differ: some follow symlinks to directories when traversing the directory tree (which means you may end up looking several times in the same file, or even running into infinite loops), some will not. Some will look inside device files (which will take quite some time with /dev/zero, for instance) or pipes or binary files; some will not.
- It's efficient, as grep starts looking inside files as soon as it discovers them. But while it looks inside a file, it's no longer looking for more files to search (which is probably just as well in most cases).
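As an illustration of those differences, GNU grep spells the symlink behaviour out as two separate options (check your own implementation's manual for its semantics):

grep -r -D skip -i 'the brown dog' /   # GNU grep: follow symlinks only if named on the command line
grep -R -D skip -i 'the brown dog' /   # GNU grep: follow all symlinks; -D skip avoids reading device files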
Your:
find / -type f -exec grep -i 'the brown dog' {} \;
(with the -r removed, as it didn't make sense here) is terribly inefficient because you're running one grep per file. ; should only be used for commands that accept only one argument. Moreover, because grep here looks in only one file, it will not print the file name, so you won't know where the matches are. You're not looking inside device files, pipes, symlinks... and you're not following symlinks, but you're still potentially looking inside things like /proc/mem.
find / -type f -exec grep -i 'the brown dog' {} +
would be a lot better, because as few grep invocations as possible would be run. You'd get the file name, unless the last run has only one file. For that case, it's better to use:
find / -type f -exec grep -i 'the brown dog' /dev/null {} +
(passing /dev/null as an extra argument guarantees grep always sees at least two files, so it always prints file names), or with GNU grep:
find / -type f -exec grep -Hi 'the brown dog' {} +
Note that grep will not be started until find has found enough files for it to chew on, so there will be some initial delay. And find will not carry on searching for more files until the previous grep has returned. Allocating and passing the big file list has some (probably negligible) impact, so all in all it's probably going to be less efficient than a grep -r that doesn't follow symlinks or look inside devices.
With GNU tools:
find / -type f -print0 | xargs -r0 grep -Hi 'the brown dog'
As above, as few grep instances as possible will be run, but find will carry on looking for more files while the first grep invocation is looking inside the first batch. That may or may not be an advantage, though. For instance, with data stored on rotational hard drives, find and grep accessing data stored at different locations on the disk will slow down the disk throughput by causing the disk head to move constantly. In a RAID setup (where find and grep may access different disks) or on SSDs, that might make a positive difference.
In a RAID setup, running several concurrent grep invocations might also improve things. Still with GNU tools, on RAID1 storage with 3 disks,
find / -type f -print0 | xargs -r0 -P2 grep -Hi 'the brown dog'
might increase the performance significantly. Note, however, that the second grep will only be started once enough files have been found to fill up the first grep command line. You can add a -n option to xargs for that to happen sooner (and pass fewer files per grep invocation), as in the example below.
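For instance (the batch size of 1000 is an arbitrary illustration, not a tuned value):

find / -type f -print0 | xargs -r0 -n 1000 -P2 grep -Hi 'the brown dog'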
Also note that if you're redirecting xargs's output to anything but a terminal device, the greps will start buffering their output, which means that the output of those greps will probably be incorrectly interleaved. You'd have to use stdbuf -oL (where available, like on GNU or FreeBSD systems) on them to work around that (you may still have problems with very long lines, typically > 4 KiB), or have each write its output to a separate file and concatenate them all at the end.
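Applied to the pipeline above, that could look like this (the results.txt file name is just for illustration):

find / -type f -print0 | xargs -r0 -P2 stdbuf -oL grep -Hi 'the brown dog' > results.txt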
Here, the string you're looking for is fixed (not a regexp), so using the -F option might make a difference (unlikely, as grep implementations know how to optimise that already).
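For example:

find / -type f -print0 | xargs -r0 -P2 grep -FHi 'the brown dog'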
Another thing that could make a big difference is fixing the locale to C if you're in a multi-byte locale:
find / -type f -print0 | LC_ALL=C xargs -r0 -P2 grep -Hi 'the brown dog'
To avoid looking inside /proc, /sys..., use -xdev and specify the file systems you want to search in:
LC_ALL=C find / /home -xdev -type f -exec grep -i 'the brown dog' /dev/null {} +
Or prune the paths you want to exclude explicitly:
LC_ALL=C find / \( -path /dev -o -path /proc -o -path /sys \) -prune -o \
-type f -exec grep -i 'the brown dog' /dev/null {} +
Assuming the top-most keys of all documents are always the same across all documents, extract the keys into a separate variable, then reduce (accumulate) the data over these keys.
jq -s '
(.[0] | keys[]) as $k |
reduce .[] as $item (null; .[$k] += $item[$k])' file*.json
Note the use of -s to read all the input into a single array. This, more or less, iterates over the keys Lists1 and Lists2 for each document, accumulating the data in a new structure (starting from null).
Assuming that the input JSON documents are well-formed:
{
"Lists1": [{"point":"a","coordinates":[2289.48096,2093.48096]}],
"Lists2": [{"point":"b","coordinates":[2289.48096,2093.48096]}]
}
{
"Lists1": [{"point":"c","coordinates":[2289.48096,2093.48096]}],
"Lists2": [{"point":"d","coordinates":[2289.48096,2093.48096]}]
}
You will get the following resulting document containing two objects:
{
"Lists1": [{"point":"a","coordinates":[2289.48096,2093.48096]},{"point":"c","coordinates":[2289.48096,2093.48096]}]
}
{
"Lists2": [{"point":"b","coordinates":[2289.48096,2093.48096]},{"point":"d","coordinates":[2289.48096,2093.48096]}]
}
Should you want the two keys in the same object:
jq -s '
[ (.[0] | keys[]) as $k |
reduce .[] as $item (null; .[$k] += $item[$k]) ] | add' file*.json
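With the example documents above, this should instead produce a single object combining both keys:

{
  "Lists1": [{"point":"a","coordinates":[2289.48096,2093.48096]},{"point":"c","coordinates":[2289.48096,2093.48096]}],
  "Lists2": [{"point":"b","coordinates":[2289.48096,2093.48096]},{"point":"d","coordinates":[2289.48096,2093.48096]}]
}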
Using grep on JSON files might work, but it relies on the file's formatting being as expected. It's additionally non-trivial to also test for a non-empty value of the identifier key at the same time as identifying version 2.0, taking into account that the ordering of keys in a generic JSON document is not fixed. So, yes, it is better to use jq for this.

The task is only to touch JSON files that need to change. These are files where version has the value 2.0 and where identifier is non-empty.

With -e, jq can be made to exit with an exit status given by the last evaluated expression, and we can use this to test whether the current file is to be modified or not. With any(), we may check whether any of the selected input objects has a non-empty identifier value:
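The original command isn't reproduced here; a minimal sketch of such a test, assuming each file holds a single top-level object with version and identifier keys:

jq -e 'any(select(.version == "2.0"); .identifier != "")' file.json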
This will exit with an exit status of zero ("success") if the current JSON document needs modifications.
As part of your find command:
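The original command isn't preserved here; one possible shape, where the *.json name test and the new empty identifier value are assumptions made for illustration:

find . -type f -name '*.json' \
    -exec jq -e 'any(select(.version == "2.0"); .identifier != "")' {} \; \
    -exec sh -c 'jq ".identifier = \"\"" "$1" >"$1.tmp" && mv -- "$1.tmp" "$1"' sh {} \;

The first -exec acts as a test, so only files that need changing reach the second one, which rewrites the file via a temporary file since jq cannot edit in place. Note that the test also prints its boolean result; wrap it in sh -c with a redirection to /dev/null if that output is unwanted.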
Note that any document that is changed by that second call to jq would be rewritten, meaning it could change the indentation and other whitespace in the file, beyond just the identifier key's value. This does not affect the JSON document from a parser's perspective, but it could trigger tools that aren't JSON-aware to report further changes to the file.

If you want to write JSON with four spaces of indentation, then add --indent 4 to the second invocation of jq.