Bash – How to Propagate Errors in Process Substitution

bash, process-substitution

I want my shell scripts to fail whenever a command executed within them fails.

Typically I do that with:

set -e
set -o pipefail

(I typically also add set -u.)
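With a regular pipeline these options behave as expected. As a minimal sketch (the failing command is just false for illustration), pipefail propagates the failure of the first pipeline element and set -e aborts the script:

#!/bin/bash
set -e
set -o pipefail
false | cat      # pipeline exit status is non-zero -> set -e aborts here
echo "never reached"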

The thing is that none of the above works with process substitution. This code prints "ok" and exits with return code 0, while I would like it to fail:

#!/bin/bash -e
set -o pipefail
cat <(false) <(echo ok)
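Saving that as script.sh (a name chosen just for the example) and running it shows the behaviour described above:

$ ./script.sh
ok
$ echo $?
0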

Is there anything equivalent to pipefail but for process substitution? Or any other way of passing the output of commands to another command as if they were files, while raising an error whenever any of those programs fails?

A poor man's solution would be detecting whether those commands write to stderr (but some commands write to stderr in successful scenarios).

Another, more POSIX-compliant solution would be to use named pipes, but I need to launch those commands-that-use-process-substitution as one-liners built on the fly from compiled code, and creating named pipes would complicate things (extra commands, trapping errors to delete them, etc.).

Best Answer

You can only work around that issue, for example like this:

cat <(false || kill $$) <(echo ok)
other_command

The shell running the script is sent SIGTERM before the second command (other_command) can be executed. The echo ok command is executed only "sometimes": the problem is that process substitutions are asynchronous, so there is no guarantee that the kill $$ command runs before or after the echo ok command. It is a matter of the operating system's scheduling.

Consider a bash script like this:

#!/bin/bash
set -e
set -o pipefail
cat <(echo pre) <(false || kill $$) <(echo post)
echo "you will never see this"

The output of that script can be:

$ ./script
Terminated
$ echo $?
143           # it's 128 + 15 (signal number of SIGTERM)

Or:

$ ./script
Terminated
$ pre
post

$ echo $?
143

You can try it, and after a few runs you will see the two different orderings in the output. In the first one the script was terminated before the other two echo commands could write to their file descriptors. In the second one, the false or kill command was probably scheduled after the echo commands.

Or, to be more precise: the kill() system call issued by the kill utility, which sends the SIGTERM signal to the shell's process, was scheduled (or delivered) earlier or later than the echo commands' write() syscalls.

In any case, the script stops and the exit code is not 0, so it should solve your issue.
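If you want a clean error message and a defined exit code instead of dying from the raw SIGTERM, you could combine the kill $$ trick with a trap. This is only a sketch building on the workaround above, not part of the original answer:

#!/bin/bash
set -e
# Turn the SIGTERM sent by a failing process substitution
# into an explicit error message and exit code.
trap 'echo "a process substitution failed" >&2; exit 1' TERM
cat <(false || kill $$) <(echo ok)
echo "only reached if no process substitution failed"

Bash runs the TERM trap after the foreground cat finishes, so the script still prints whatever cat produced, then exits with status 1.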

Another solution is, of course, to use named pipes. But it depends on your script how complex it would be to implement named pipes compared with the workaround above.
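For completeness, a named-pipe version could look roughly like this. It is only a sketch of the approach, and it shows exactly the extra bookkeeping (creating the FIFOs, cleaning them up with a trap, collecting the producers' exit statuses) that the question wanted to avoid:

#!/bin/bash
set -e
# Private directory for the FIFOs, removed again on exit.
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT
mkfifo "$tmpdir/a" "$tmpdir/b"

# Run the producers in the background, writing into the FIFOs.
false   > "$tmpdir/a" & pid_a=$!
echo ok > "$tmpdir/b" & pid_b=$!

cat "$tmpdir/a" "$tmpdir/b"

# Propagate the producers' exit statuses; with set -e a failing
# wait aborts the script.
wait "$pid_a"
wait "$pid_b"
echo "all producers succeeded"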
