I have a huge (70GB), one-line text file and I want to replace a string (token) in it. I want to replace the token <unk> with another dummy token (glove issue).
I tried sed:
sed 's/<unk>/<raw_unk>/g' < corpus.txt > corpus.txt.new
but the output file corpus.txt.new has zero bytes!
I also tried using perl:
perl -pe 's/<unk>/<raw_unk>/g' < corpus.txt > corpus.txt.new
but I got an out of memory error.
For smaller files, both of the above commands work.
How can I replace a string in such a file?
This is a related question, but none of the answers worked for me.
Edit:
What about splitting the file into chunks of 10GB (or whatever) each, applying sed to each one of them, and then merging them with cat? Does that make sense? Is there a more elegant solution?
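For reference, that chunking idea could be sketched roughly as follows (a hypothetical sketch on a tiny stand-in file; note that a naive byte split can cut <unk> in half at a chunk boundary, so the chunk size or boundaries would need care on real data):

```shell
# Tiny stand-in corpus; 12-byte chunks happen not to split any <unk> here,
# but on real data a byte boundary can land inside the token.
printf 'aaa <unk> bbb <unk> ccc' > corpus.txt

# Split into fixed-size byte chunks named chunk_aa, chunk_ab, ...
split -b 12 corpus.txt chunk_

# Run sed on each (small) chunk independently
for f in chunk_*; do
  sed 's/<unk>/<raw_unk>/g' "$f" > "$f.new"
done

# Concatenate the processed chunks back together (glob order matches split order)
cat chunk_*.new > corpus.txt.new
cat corpus.txt.new   # → aaa <raw_unk> bbb <raw_unk> ccc
```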
Best Answer
The usual text processing tools are not designed to handle lines that don't fit in RAM. They tend to work by reading one record (one line), manipulating it, and outputting the result, then proceeding to the next record (line).
If there's an ASCII character that appears frequently in the file and doesn't appear in <unk> or <raw_unk>, then you can use it as the record separator. Since most tools don't allow custom record separators, swap between that character and newlines. tr processes bytes, not lines, so it doesn't care about any record size. Supposing that ; works:
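A sketch of that swap-and-replace pipeline, reconstructed under the assumption that ; never appears inside <unk> or <raw_unk> (shown here with a tiny stand-in file in place of the real 70GB corpus.txt):

```shell
# Tiny stand-in for the real one-line corpus (no trailing newline)
printf 'the <unk> sat; on the <unk> mat' > corpus.txt

# Swap ';' and newline so sed sees short records, substitute, then swap back
<corpus.txt tr '\n;' ';\n' |
  sed 's/<unk>/<raw_unk>/g' |
  tr '\n;' ';\n' > corpus.txt.new

cat corpus.txt.new   # → the <raw_unk> sat; on the <raw_unk> mat
```

Since swapping \n and ; is its own inverse, the same tr invocation restores the original separators at the end.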
You could also anchor on the first character of the text you're searching for, assuming that it isn't repeated in the search text and it appears frequently enough. If the file may start with unk>, change the sed command to sed '2,$ s/… to avoid a spurious match. Alternatively, use the last character.
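Reconstructed sketches of both anchoring variants, again on a tiny stand-in file (the output file names are just for the demo; the assumption is that < and > each occur often enough and only once per token):

```shell
printf 'x <unk> y <unk> z' > corpus.txt

# Variant 1: anchor on the first character '<'; records then begin with "unk>"
<corpus.txt tr '<\n' '\n<' |
  sed 's/^unk>/raw_unk>/' |
  tr '<\n' '\n<' > corpus.first.new
cat corpus.first.new   # → x <raw_unk> y <raw_unk> z

# Variant 2: anchor on the last character '>'; records then end with "<unk"
<corpus.txt tr '>\n' '\n>' |
  sed 's/<unk$/<raw_unk/' |
  tr '>\n' '\n>' > corpus.last.new
cat corpus.last.new    # → x <raw_unk> y <raw_unk> z
```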
Note that this technique assumes that sed operates seamlessly on a file that doesn't end with a newline, i.e. that it processes the last partial line without truncating it and without appending a final newline. It works with GNU sed. If you can pick the last character of the file as the record separator, you'll avoid any portability trouble.