Bash – Races when piping two commands to a named pipe


I want to have one process reading from a named pipe that receives data from multiple sources:

$ mkfifo /tmp/p

But I can't figure out how to get it to work consistently.

First Scenario – this works

tty1:

Set up two processes to write to my fifo; both of these will block:

$ echo 'first' > /tmp/p; echo 'second' > /tmp/p

tty2:

Read from the pipe:

$ cat /tmp/p
first
second

This still works if I execute the above in reverse order.

My problem comes when I have two separate commands that I want to come out of the pipe:

Second Scenario – does not work

first.sh

#!/bin/sh
echo 'first' > /tmp/p

second.sh

#!/bin/sh
echo 'second' > /tmp/p

tty1

$ sh first.sh; sh second.sh

tty2

$ cat /tmp/p
first

The execution of sh second.sh from my first tty will block indefinitely, until something else reads from the named pipe.

What I think is happening

From http://linux.die.net/man/7/pipe:

If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file (read(2) will return 0).

So when echo exits in first.sh, the shell executing it closes the file descriptor for /tmp/p, which means that cat in my second TTY sees EOF.

How do I get around this with the shell? Is there a way to keep a reference to the write end of the named pipe around in my main controlling script, so that the pipe doesn't see end-of-file when sub-shells exit? In practice, I will be passing the path to the named pipe to the sub-shells. Do I need to just make my sub-shells output to their own stdout and perform a redirection on them?

I feel like there's something I'm missing here. Using named pipes has been simple and straightforward for everything I've tried to do aside from this case.

Best Answer

Why not just do:

{ echo foo; echo bar;} > /tmp/p

If you want your controlling script to leave the pipe open, you can do:

exec 3<> /tmp/p

Opening the named pipe in read-write mode means the open doesn't block: since the process holds both a read end and a write end itself, it doesn't have to wait for a peer on the other side. That works on Linux at least, but is not guaranteed by POSIX.
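As a quick sanity check of that Linux behaviour, here's a self-contained sketch (the pipe path from `mktemp -u` is illustrative, and racy, but fine for a demo): the read-write open returns immediately with no other process attached, and the script can even read back its own write through the same fd:

```shell
#!/bin/sh
# Sketch: read-write open of a FIFO (Linux behaviour, not guaranteed by POSIX).
p=$(mktemp -u)          # illustrative pipe path
mkfifo "$p"

exec 3<> "$p"           # returns immediately: we hold both ends ourselves

echo 'hello' >&3        # write through fd 3...
read -r line <&3        # ...and read it back from the same fd

exec 3>&-               # close fd 3
rm -f "$p"
echo "$line"
```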

Alternatively (and portably):

: < /tmp/p & exec 3> /tmp/p
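Why this works: opening a FIFO for writing blocks until some process has it open for reading, so the background `:` supplies a throwaway reader just long enough for the `exec 3>` open to succeed. A self-contained sketch (with a background `cat` standing in for the `cat /tmp/p` in the other terminal; the pipe path and output file are illustrative):

```shell
#!/bin/sh
p=$(mktemp -u)              # illustrative pipe path
mkfifo "$p"

cat "$p" > "$p.out" &       # the real reader (stand-in for tty2's 'cat /tmp/p')

# The background ':' opens the read end, so the write-only open
# below cannot deadlock even if the real reader is slow to start.
: < "$p" & exec 3> "$p"

echo 'first'  >&3
echo 'second' >&3
exec 3>&-                   # close the write end: the reader sees EOF
wait

got=$(cat "$p.out")
rm -f "$p" "$p.out"
echo "$got"
```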

Then instead of having each process open the named pipe, you can also do:

cmd >&3

And in the end, you'd do:

exec 3>&-

To close the writing end to let readers know it's finished.
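Putting the pieces together, a minimal controlling script might look like this (a sketch: the pipe path, the output file, and the `sh -c` children standing in for first.sh and second.sh are all illustrative). The point is that the children inherit fd 3, so they never open, and therefore never close, the pipe themselves:

```shell
#!/bin/sh
p=$(mktemp -u)              # illustrative pipe path
mkfifo "$p"

cat "$p" > "$p.out" &       # reader (stand-in for 'cat /tmp/p' in tty2)
reader=$!

exec 3> "$p"                # held open for the whole script; blocks only
                            # until the reader above has opened the pipe

sh -c 'echo first  >&3'     # child inherits fd 3 (plays the role of first.sh)
sh -c 'echo second >&3'     # (plays the role of second.sh)

exec 3>&-                   # only now does the reader see EOF
wait "$reader"

got=$(cat "$p.out")
rm -f "$p" "$p.out"
echo "$got"
```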

Change all the <s to >s and the >s to <s if you need the logic to be the other way round.
