I have a file containing two columns and 10 million rows. The first column contains many repeated values, but column 2 holds a distinct value in each row. I want to remove the repeated rows, keeping only one row per value of column 1, using awk. Note: the file is sorted on the values in column 1. For example:
1.123 -4.0
2.234 -3.5
2.234 -3.1
2.234 -2.0
4.432 0.0
5.123 +0.2
8.654 +0.5
8.654 +0.8
8.654 +0.9
...
Expected output
1.123 -4.0
2.234 -3.5
4.432 0.0
5.123 +0.2
8.654 +0.5
...
Best Answer
A few ways:
awk
This is a very condensed way of writing this:
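Both the condensed one-liner and the longer script it abbreviates can be sketched as follows (assuming the data is in a file named file):

```shell
# Condensed form: print a line only the first time its first field is seen.
awk '!a[$1]++' file

# Expanded equivalent of the same logic:
awk '{ if (!a[$1]) print; a[$1] = 1 }' file
```

`a[$1]++` evaluates to the old count (0 the first time), so `!a[$1]++` is true exactly once per distinct first field.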
So, if the current first field ($1) is not in the a array, print the line and add the 1st field to a. Next time we see that field, it will be in the array and so will not be printed.

Perl
This is basically the same as the awk one. The -n causes perl to read the input file line by line and apply the script provided by -e to each line. The -a will automatically split each line on whitespace and save the resulting fields in the @F array. Finally, the first field is added to the %k hash and if it is not already there, the line is printed. The same thing could also be written with the implicit loop and the split spelled out explicitly.

Coreutils
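The pipeline itself is presumably along these lines (a sketch; note that rev actually ships with util-linux rather than coreutils on most Linux systems, and the input is assumed to be in file):

```shell
# Reverse each line character-by-character, drop adjacent lines that repeat
# everything past the first (reversed) field, then reverse back.
rev file | uniq -f 1 | rev
```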
This method works by first reversing the lines in file, so that if a line is 12 345 it will now be 543 21. We then use uniq -f 1 to ignore the first field, that is to say, the column that 543 is in. Since there are only two fields per line in file, uniq is then comparing on the reversed first column only. Using uniq here has the effect of filtering out any duplicate lines, keeping only one of each (the file is sorted, so duplicates are adjacent). Lastly we put the lines back into their original order with another rev.

GNU sort (as suggested by @StéphaneChazelas)
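The command being described is presumably (a sketch, assuming GNU sort and input in file):

```shell
# Sort on the first field only; -u keeps one line per distinct key.
sort -b -u -k1,1 file
```

Since the file is already sorted on column 1, this mainly acts as a de-duplication pass on that key.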
The -b flag ignores leading whitespace and the -u means print only unique lines (only the first of each run of lines with an equal key). The clever bit is the -k1,1. The -k flag sets the field to sort on. It takes the general format of -k POS1[,POS2], which means only look at fields POS1 through POS2 when sorting. So, -k1,1 means only look at the 1st field. Depending on your data, you might want to also add one of these options: