Joining text files with 600M+ lines

awk; command-line; join; sed; sorting

I have two files, huge.txt and small.txt. huge.txt has around 600 million rows and is 14 GB; each line has four space-separated words (tokens) followed by a final space-separated column with a number. small.txt has 150K rows (~3 MB); each line has a space-separated word and a number.

Both files are sorted using the sort command, with no extra options. The words in both files may include apostrophes (') and dashes (-).

The desired output would contain every column of huge.txt plus the second column (the number) of small.txt, for lines where the first word of huge.txt matches the first word of small.txt.

My attempts below failed miserably with the following error:

cat huge.txt | join -o 1.1,1.2,1.3,1.4,2.2 - small.txt > output.txt

join: memory exhausted  

I suspect the sort order somehow isn't what join expects, even though both files were pre-sorted using:

sort -k1 huge.unsorted.txt > huge.txt
sort -k1 small.unsorted.txt > small.txt

The problems seem to appear around words that contain apostrophes (') or dashes (-). I also tried dictionary sorting with the -d option and ran into the same error at the end.
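For what it's worth, a likely explanation is locale collation: in many locales, sort ignores or folds characters such as apostrophes and dashes when comparing, so the resulting order is not the plain byte order that join effectively compares keys in. A minimal sketch that forces the C locale (byte-wise comparison) for both tools, assuming GNU coreutils:

# Sort on the first field only, comparing bytes rather than using
# locale collation (which can skip apostrophes and dashes):
LC_ALL=C sort -k1,1 huge.unsorted.txt > huge.txt
LC_ALL=C sort -k1,1 small.unsorted.txt > small.txt

# join must compare keys under the same locale the files were sorted in:
LC_ALL=C join -o 1.1,1.2,1.3,1.4,2.2 huge.txt small.txt > output.txt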

I tried loading the files into MySQL, creating indexes, and joining them, but that looks like it would take weeks on my laptop. (I don't have a computer with more memory or a fast disk/SSD for this task.)

I see two ways out of this but don't know how to implement either of them.

  1. How do I sort the files in a way that the join command considers them to be sorted properly?

  2. I was thinking of calculating MD5 or some other hash of each string to get rid of the apostrophes and dashes, while leaving the numbers at the end of the lines intact. I could then sort and join on the hashes instead of the strings themselves, and finally "translate" the hashes back to strings. Since there would be only 150K hashes, that's not too bad. What would be a good way to calculate an individual hash for each string? Some AWK magic? (See the sketch after this list.)
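For the hashing idea, here is a minimal sketch that computes an MD5 for each key in small.txt; key2hash.txt is a hypothetical filename, and md5sum is spawned once per line, which is acceptable for 150K rows but far too slow to apply to huge.txt directly:

# Build a "hash word" lookup table from small.txt's first column.
while read -r word num; do
    printf '%s %s\n' "$(printf '%s' "$word" | md5sum | cut -d' ' -f1)" "$word"
done < small.txt > key2hash.txt

Rewriting both files through such a table, sorting on the hash column, joining, and translating back would follow this plan, but forcing the C locale (see the sketch above) fixes the ordering with far less machinery.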

See file samples at the end.

Sample of huge.txt

had stirred me to 46 
had stirred my corruption 57 
had stirred old emotions 55 
had stirred something in 69 
had stirred something within 40 

Sample of small.txt

caley 114881 
calf 2757974 
calfed 137861 
calfee 71143 
calflora 154624 
calfskin 148347 
calgary 9416465 
calgon's 94846 
had 987654

Desired output:

had stirred me to 46 987654
had stirred my corruption 57 987654
had stirred old emotions 55 987654
had stirred something in 69 987654
had stirred something within 40 987654

Best Answer

IMO the best way to do this would be to use the programming/scripting language you know best and:

  1. Load small.txt into an in-memory hash/map/associative array keyed on the words.
  2. Process huge.txt line by line, appending the column looked up from the hash and writing the result to an output file.
  3. Buffer input and output so that reads and writes happen in chunks of at least 4K.
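As a minimal sketch of steps 1 and 2 in awk (which the question already mentions), assuming the first-column keys in small.txt are unique and output.txt is the desired output file:

# First pass (NR==FNR is true only while reading small.txt): load the
# word -> number pairs into the associative array h.
# Second pass: for each line of huge.txt whose first word is in h,
# print the whole line followed by the looked-up number.
awk 'NR==FNR { h[$1] = $2; next }
     $1 in h { print $0, h[$1] }' small.txt huge.txt > output.txt

A 150K-entry array fits comfortably in memory, and awk reads and writes through stdio, which typically buffers in blocks of 4K or more, so step 3 comes for free.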