I was doing a very simple search:
grep -R Milledgeville ~/Documents
And after some time this error appeared:
grep: memory exhausted
How can I avoid this?
I have 10GB of RAM on my system and only a few applications running, so I am really surprised a simple grep runs out of memory. ~/Documents is about 100GB and contains all kinds of files.
grep -RI might not have this problem, but I want to search in binary files too.
Best Answer
Two potential problems:
grep -R (except for the modified GNU grep found on OS/X 10.8 and above) follows symlinks, so even if there's only 100GB of files in ~/Documents, there might still be a symlink to / for instance, and you'll end up scanning the whole file system, including files like /dev/zero. Use grep -r with newer GNU grep, or use the standard syntax with find (note, however, that the exit status then won't reflect whether the pattern was matched or not).
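Presumably the "standard syntax" meant here is the portable find + grep -exec form; the following is a sketch using the question's pattern (the exact original command is an assumption):

```shell
# Recurse with find rather than grep -R.  find does not follow
# symbolic links unless asked to (-L), so a stray link to / is
# never descended into, and -type f also skips devices such as
# /dev/zero.  /dev/null is passed along so grep always prefixes
# matches with the file name, even when it gets a single file.
find ~/Documents -type f -exec grep Milledgeville /dev/null {} +
```

This is where the exit-status caveat comes from: with -exec … {} +, find reports its own status, not whether grep ever matched.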
grep finds the lines that match the pattern. For that, it has to load one line at a time into memory. GNU grep, as opposed to many other grep implementations, doesn't have a limit on the size of the lines it reads, and it supports searching in binary files. So, if you've got a file with a very big line (that is, with two newline characters very far apart), bigger than the available memory, it will fail. That would typically happen with a sparse file, whose holes read back as NUL bytes, making the whole file effectively one enormous line. You can reproduce the failure that way.
That one is difficult to work around. With GNU grep, you could convert sequences of NUL characters into one newline character before feeding the input to grep; that would cover the cases where the problem is due to sparse files. You could optimise it by doing the conversion only for large files.
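A sketch of that workaround and of its large-file optimisation (the exact commands are my reconstruction, assuming GNU find and GNU grep):

```shell
# For every regular file, squeeze each run of NUL bytes into a
# single newline with tr, then search the result.  --label makes
# grep report the real file name instead of "(standard input)",
# and -H forces the name to be printed.
find ~/Documents -type f -exec sh -c 'for f do
    tr -s "\0" "\n" < "$f" | grep --label="$f" -He Milledgeville
  done' sh {} +

# Optimised variant: only pipe files of 100MB or more through tr;
# smaller files are grepped directly.  The 100MB threshold is
# arbitrary, and -size -100M is a GNU find extension.
find ~/Documents -type f \( -size -100M -exec \
    grep -He Milledgeville {} + -o -exec sh -c 'for f do
    tr -s "\0" "\n" < "$f" | grep --label="$f" -He Milledgeville
  done' sh {} + \)
```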
If the files are not sparse and you have a version of GNU grep prior to 2.6, you can use the --mmap option. The lines will be mmapped into memory rather than copied there, which means the system can always reclaim the memory by paging the pages back out to the file. That option was removed in GNU grep 2.6.