What you could do to avoid writing a copy of the file is to rewrite the file over itself, like:
{
  sed "$l1,$l2 d" < file
  perl -le 'truncate STDOUT, tell STDOUT'
} 1<> file
Dangerous, as you've no backup copy there.
Or, avoiding sed, stealing part of manatwork's idea:
{
  head -n "$(($l1 - 1))"
  head -n "$(($l2 - $l1 + 1))" > /dev/null
  cat
  perl -le 'truncate STDOUT, tell STDOUT'
} < file 1<> file
That could still be improved, because you're overwriting the first l1 - 1 lines over themselves when you don't need to. Avoiding that would mean a bit more involved programming, for instance doing everything in perl, which may end up less efficient:
perl -ne 'BEGIN{($l1,$l2) = ($ENV{"l1"}, $ENV{"l2"})}    # l1 and l2 are taken from the environment, so export them
  if ($. == $l1) {$s = tell(STDIN) - length; next}       # remember the offset where line l1 starts
  if ($. == $l2) {seek STDOUT, $s, 0; $/ = \32768; next} # move the fd 1 cursor back there; switch to 32KiB block reads
  if ($. > $l2) {print}                                  # copy everything past line l2 over the gap
  END {truncate STDOUT, tell STDOUT}' < file 1<> file
Some timings for removing lines 1000000 to 1000050 from the output of seq 1e7:

- sed -i "$l1,$l2 d" file: 16.2s
- 1st solution: 1.25s
- 2nd solution: 0.057s
- 3rd solution: 0.48s
They all work on the same principle: we open two file descriptors to the file, one in read-only mode (fd 0) using < file (short for 0< file) and one in read-write mode (fd 1) using 1<> file (<> file would be 0<> file). Those file descriptors point to two open file descriptions, each of which has a current cursor position within the file associated with it.
In the second solution, for instance, the first head -n "$(($l1 - 1))" will read $l1 - 1 lines' worth of data from fd 0 and write that data to fd 1. So, at the end of that command, the cursor on both open file descriptions associated with fds 0 and 1 will be at the start of the $l1th line.
Then, in head -n "$(($l2 - $l1 + 1))" > /dev/null, head will read $l2 - $l1 + 1 lines from the same open file description through its fd 0, which is still associated with it, so the cursor on fd 0 will move to the beginning of the line after the $l2th one. But its fd 1 has been redirected to /dev/null, so upon writing to fd 1, it will not move the cursor in the open file description pointed to by {...}'s fd 1.
So, upon starting cat, the cursor on the open file description pointed to by fd 0 will be at the start of the line after the $l2th one, while the cursor on fd 1 will still be at the beginning of the $l1th line. Said otherwise, that second head will have skipped the lines to remove on input, but not on output. Now cat will overwrite the $l1th line with the line after the $l2th one, and so on.
cat will return when it reaches the end of file on fd 0. But fd 1 will then point somewhere in the file that has not been overwritten yet. That part has to go away; it corresponds to the space occupied by the deleted lines, now shifted to the end of the file. We need to truncate the file at the exact location fd 1 now points to.

That's done with the ftruncate system call. Unfortunately, there is no standard Unix utility to do that, so we resort to perl. tell STDOUT gives us the current cursor position associated with fd 1, and we truncate the file at that offset using perl's interface to the ftruncate system call: truncate.
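As a toy illustration (a sketch with a made-up 5-line file f; like the second solution, it relies on head leaving the input offset right after the last line it reads when the input is seekable):

printf '%s\n' 1 2 3 4 5 > f
{
  head -n 2 > /dev/null                     # advance the cursor on fd 0 only
  cat                                       # shift the remaining lines to the start
  perl -le 'truncate STDOUT, tell STDOUT'   # chop off the stale tail
} < f 1<> f
cat f                                       # 3 4 5

Without the final truncate, f would still end with a stale copy of its last two lines.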
In the third solution, we replace the writing to fd 1 of the first head command with one lseek system call.
When working with sed, I typically find it easiest to consistently narrow my possible outcome. This is why I sometimes lean on the ! negation operator: it is very often simpler to prune uninteresting input away than it is to pointedly select the interesting kind; at least, this is my opinion on the matter.

I find this method more in line with sed's default behavior, which is to auto-print the pattern space at script's end. For simple things such as this, it can also more easily result in a robust script: one that does not depend on certain implementations' syntax extensions in order to operate (as is commonly seen with sed {functions}).
This is why I recommended you do:
sed '10,15!d;/pattern/!d;=' <input
...which first prunes any line not within the range of lines 10 and 15, and from among those that remain prunes any line which does not match pattern. If you find you'd rather have the line number sed prints on the same line as its matched line, I would probably look to paste in that case. Maybe...
sed '10,15!d;/pattern/!d;=' <input |
paste -sd:\\n -
...which will just alternate replacing input \newlines with either a : character or another \newline.
For example:
seq 150 |
sed '10,50!d;/5/!d;=' |
paste -sd:\\n -
...prints...
15:15
25:25
35:35
45:45
50:50
Best Answer
Here are an alternative method and a bit of benchmarking, adding to that in Weijun Zhou's answer.
join
Assuming you have a data file you want to extract rows from and a line_numbers file that lists the numbers of the rows you want to extract, if the sorting order of the output is not important you can use join, as sketched below.
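A minimal sketch of that pipeline (the nl options and the padded_line_numbers file are explained right below; the sort is there because join wants alphabetically sorted input):

# number the data lines with 12-digit, zero-padded line numbers and
# join them with the (equally padded) list of wanted line numbers
join <(nl -w 12 -n rz data) <(sort padded_line_numbers) | cut -d ' ' -f 2-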
This will number the lines of your data file, join it with the padded_line_numbers file on the first field (the default) and print out the common lines (excluding the join field itself, which is cut away).

join needs the input files to be sorted alphabetically. The aforementioned padded_line_numbers file has to be prepared by left-padding each line of your line_numbers file, for instance as in the sketch below.
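One way to do the padding (a sketch using the shell's printf; the width matches the -w 12 given to nl):

# left-pad every wanted line number to 12 digits with zeros
while read -r n; do
  printf '%012d\n' "$n"
done <line_numbers >padded_line_numbers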
The -w 12 -n rz options and arguments instruct nl to output 12-digit-long numbers with leading zeros.

If the sorting order of the output has to match that of your line_numbers file, you can use a longer pipeline, sketched below.
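A guess at its shape, reconstructed from the step-by-step description that follows:

# join on the padded line number (field 2 of the first file, field 1 of
# the second), then restore the original order of line_numbers using
# the sequence numbers prepended by the first nl
join -1 2 -2 1 <(nl padded_line_numbers | sort -k 2,2) <(nl -w 12 -n rz data) |
  sort -k 2,2n | cut -d ' ' -f 3-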
Where we are numbering the padded_line_numbers file, sorting the result alphabetically by its second field, joining it with the numbered data file and numerically sorting the result by the original sorting order of padded_line_numbers.

Process substitution is used here for convenience. If you can not or do not want to rely on it and, as is likely, you are not willing to waste the storage needed for creating regular files to hold intermediate results, you can leverage named pipes instead, as in the sketch below.
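For instance, the unordered variant might become (a sketch; the fifo names are made up):

# named pipes stand in for the process substitutions
mkfifo numbered_data sorted_line_numbers
nl -w 12 -n rz data >numbered_data &
sort padded_line_numbers >sorted_line_numbers &
join numbered_data sorted_line_numbers | cut -d ' ' -f 2-
rm numbered_data sorted_line_numbers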
Benchmarking
Since the peculiarity of your question is the number of rows in your data file, I thought it could be useful to test alternative approaches with a comparable amount of data.

For my tests, I used a 3.2-billion-line data file. Each line is just 2 bytes of garbage coming from openssl enc, hex-encoded using od -An -tx1 -w2 and with spaces removed with tr -d ' '.
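Roughly like the following (the cipher and passphrase are placeholders; only the od and tr stages are spelled out above):

# endless pseudorandom stream -> 2 hex-encoded bytes per line,
# blanks stripped, cut off at roughly 3.2 billion lines
openssl enc -aes-128-ctr -nosalt -pass pass:seed </dev/zero 2>/dev/null |
  od -An -tx1 -w2 -v | tr -d ' ' | head -n 3221254963 >data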
The line_numbers file has been created by randomly choosing 10,000 numbers between 1 and 3,221,254,963, without repetitions, using shuf from GNU Coreutils.
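Presumably something along these lines (a sketch; shuf -i picks without repetition by default):

# 10,000 distinct random line numbers within the file's range
shuf -i 1-3221254963 -n 10000 >line_numbers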
The testing environment was a laptop with an Intel i7-2670QM quad-core processor, 16 GiB of memory, SSD storage, GNU/Linux, bash 5.0 and GNU tools.

The only dimension I measured has been the execution time, by means of the time shell builtin.

Here I'm considering:

- the sed solution from Weijun Zhou's answer,
- the awk solution from Micha's answer,
- the perl solution from wurtel's answer,
- the join solution above.

perl seems to be the fastest, awk's performance looks comparable, and join, too, appears to be comparable. Note that the sorted version mentioned above has roughly no performance penalty over the unsorted one.
Finally, sed appears to be significantly slower: I killed it after approximately nine hours.