Bash – Use bash’s read builtin without a while loop

bash, pipe, read, shell-script, subshell

I'm used to bash's builtin read function in while loops, e.g.:

echo "0 1
      1 1
      1 2
      2 3" |\
while read A B; do
    echo $A + $B | bc;
done

I've been working on a make project, and it became prudent to split files and store intermediate results. As a consequence I often end up shredding single lines into variables. While the following example works pretty well,

head -n1 somefile | while read A B C D E FOO; do [... use vars here ...]; done

it's sort of stupid, because the while loop will never run more than once. But without the while,

head -n1 somefile | read A B C D E FOO; [... use vars here ...]

the variables set by read are always empty when I use them. I never noticed this behaviour of read, because usually I'd use while loops to process many similar lines. How can I use bash's read builtin without a while loop? Or is there another (or even better) way to read a single line into multiple (!) variables?

Conclusion

The answers teach us that it's a problem of scoping. The statement

 cmd0; cmd1; cmd2 | cmd3; cmd4

is interpreted such that the commands cmd0, cmd1, and cmd4 are executed in the same scope, while the commands cmd2 and cmd3 are each given their own subshell, and consequently different scopes. The original shell is the parent of both subshells.
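To see that scoping in action (a minimal sketch, not from the original thread), try assigning a variable inside a pipeline and reading it afterwards:

X=outer                 # set in the parent shell, like cmd0
echo inner | read X     # this read runs in its own subshell, like cmd3
echo "$X"               # still prints "outer"; the subshell's assignment is lost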

Best Answer

It's because the part where you use the vars is a new set of commands: the read runs in a subshell created by the pipe, while the commands after the semicolon run in the parent shell. Use this instead:

head somefile | { read A B C D E FOO; echo $A $B $C $D $E $FOO; }

Note that, in this syntax, there must be a space after the { and a ; (semicolon) before the }.  Also -n1 is not necessary; read only reads the first line.
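For example (a sketch assuming a hypothetical somefile), the variables are usable inside the group but empty once the pipeline has finished, because the whole group ran in a subshell:

head somefile | { read A B C D E FOO; echo "inside:  $A $B $C $D $E $FOO"; }
echo "outside: $A $B $C $D $E $FOO"     # all empty in the parent shell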

For better understanding, this may help you; it does the same as above:

read A B C D E FOO < <(head somefile); echo $A $B $C $D $E $FOO

Edit:

It's often said that the next two statements do the same:

head somefile | read A B C D E FOO
read A B C D E FOO < <(head somefile)

Well, not exactly. The first one is a pipe from head to bash's read builtin. One process's stdout to another process's stdin.

The second statement is redirection and process substitution. It is handled by bash itself. It creates a FIFO (named pipe, <(...)) that head's output is connected to, and redirects (<) that pipe to the read builtin, which runs in the current shell.

So far these seem equivalent. But when working with variables it can matter. In the first one the variables are not set once the pipeline finishes, because read ran in a subshell. In the second one they are available in the current shell afterwards.
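A side-by-side sketch of that difference (again assuming a hypothetical somefile):

head somefile | read A B      # read runs in a subshell created by the pipe
echo "$A"                     # empty in bash; the assignment died with the subshell

read A B < <(head somefile)   # read runs in the current shell
echo "$A"                     # prints the first field of somefile's first line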

Different shells behave differently in this situation. In bash you can work around it with command grouping ({ ...; }), process substitution (< <(...)), or here strings (<<<).
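A here-string version of the same one-liner might look like this (a sketch, again assuming somefile); the command substitution strips the trailing newline, which read does not mind:

read A B C D E FOO <<< "$(head -n1 somefile)"
echo "$A $B $C $D $E $FOO"    # the variables are set in the current shell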
