AWK, SED, CSV – Converting Key+N-Values Text File to CSV


Newbie on this site, so apologies if I ask this question badly:

I have a text file in the following format:

file: abc
value: 123
value: 234
value: 567
file: def
value: 999
file: ghi
value: 123
value: 999

My goal is to write a bash script that converts this text to CSV format, where the file value is repeated for each row. The data above needs to look like this:

abc,123
abc,234
abc,567
def,999
ghi,123
ghi,999

I have been trying with sed but don't see how to remember the file name across multiple lines. I'm guessing this might be easier with awk but I haven't "grawked" awk yet.

Would appreciate your help!

Best Answer

The following awk command sets the awk variable file to the value of the file key whenever such a key is found (the key is the first field on the line, the value is the second). If the current line has no file key, the current value of the file variable is printed together with the value on the current line.

$ awk -F ': ' 'BEGIN { OFS="," } $1 == "file" { file = $2; next } { print file, $2 }' file
abc,123
abc,234
abc,567
def,999
ghi,123
ghi,999

Note that this does not attempt to quote the values correctly for CSV, and that it assumes that no value contains the field delimiter ": " (colon + space).
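If quoting is needed, the awk program could be extended with a small helper that applies the usual CSV escaping, i.e. wrapping each field in double quotes and doubling any embedded quotes. This is only a sketch (the csv function name is mine, and it still assumes the colon + space delimiter does not occur in values):

```shell
# Sketch of minimal CSV quoting, not a full RFC 4180 implementation:
# wrap each field in double quotes and double any embedded quote.
awk -F ': ' '
    function csv(s) { gsub(/"/, "\"\"", s); return "\"" s "\"" }
    $1 == "file" { file = $2; next }
    { print csv(file) "," csv($2) }
' file
```

With the original data this prints "abc","123" and so on; values containing commas or double quotes no longer break a CSV parser.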


With sed:

sed -n \
    -e '/^file: /  { s///; h; }' \
    -e '/^value: / { s///; G; s/\(.*\)\n\(.*\)/\2,\1/p; }' file

When a file: line is found, the file: prefix string is stripped off and the remainder is stored in the hold space.

When a value: line is found, the value: prefix string is stripped off, and the text in the hold space is appended to the end of the buffer with a literal newline character as the delimiter. The newline-delimited parts of the buffer are then swapped (with the newline replaced by a comma) and printed.

The result is the same as with the awk command above.

Unlike the awk solution, this does not restrict the values after the initial key: string from containing a colon + space. Again, though, the final output has no special CSV encoding of the text, so fields containing embedded commas or double quotes would confuse a CSV parser.
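Since the question asks about a bash script: the same "remember the file name" state can be kept in plain bash with a while read loop. This is a sketch (it will be slower than awk or sed on large files, and like the solutions above it does no CSV quoting):

```shell
# Read line by line; ": " in IFS splits each line into key and value.
# Remember the last "file" value and prefix it to every "value" line.
while IFS=': ' read -r key val; do
    case $key in
        file)  name=$val ;;
        value) printf '%s,%s\n' "$name" "$val" ;;
    esac
done < file
```

Because read assigns everything after the first delimiter to the last variable, values may themselves contain a colon + space.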


The following modifies the input by adding an empty line between each line in the original file. This makes the file a valid "XTAB" file with : as the key-value delimiter. It is then read by Miller (mlr), which can read the XTAB format and is aware of the special quoting rules of CSV.

Miller reads the records from the awk output and performs a "fill-down" operation on the file field, assigning the previous file value to each record that does not have one.

The subsequent "filter" operation removes all records that do not have a value field.

The data is then outputted without a CSV header.

awk '{ print; print "" }' file | 
mlr --ixtab --ips ': ' \
    --ocsv --headerless-csv-output \
    fill-down -f file then filter -x 'is_absent($value)'

I've modified the test data to show that this is able to properly produce fully compliant CSV output even if the input contains commas and quotes:

$ cat file
file: test: here's a test
value: this is, the value
value: another so called "value"
file: abc
value: 123
value: 234
value: 567
file: def
value: 999
file: ghi
value: 123
value: 999
$ awk '{ print; print "" }' file | mlr --ixtab --ips ': ' --ocsv --headerless-csv-output fill-down -f file then filter -x 'is_absent($value)'
"this is, the value",test: here's a test
"another so called ""value""",test: here's a test
123,abc
234,abc
567,abc
999,def
123,ghi
999,ghi