Bash – Segmentation fault output is suppressed when piping stdin into a function. Why?


Let's define a function to execute a binary:

function execute() { ./binary; }

Then define a second function to pipe a text file into the first function:

function test() { cat in.txt | execute; }

If binary crashes with a segfault, then calling test from the CLI will result in a 139 return code, but the error – "Segmentation fault" – will not be printed to the terminal.
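This is easy to reproduce without compiling anything: a small script that sends itself SIGSEGV is a reasonable stand-in for the crashing binary (the names binary and in.txt below follow the question; the kill trick is only a simulation of the crash):

```shell
# Stand-in for the crashing program: a script that kills itself with SIGSEGV.
cat > binary <<'EOF'
#!/bin/bash
kill -SEGV $$
EOF
chmod +x binary
echo hello > in.txt

function execute() { ./binary; }
function test() { cat in.txt | execute; }   # shadows the test builtin, as in the question

test
echo "exit status: $?"                      # 139 = 128 + 11 (SIGSEGV)
```

Run interactively, this prints the exit status but no "Segmentation fault" line, matching the behaviour described above.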

"Segmentation fault" does get printed if we define test to call binary directly:

function test() { cat in.txt | ./binary; }

It also gets printed if we define test to call execute without piping stdin into it:

function test() { execute; }

Finally, it also gets printed if we redirect in.txt into execute directly instead of through a pipe:

function test() { execute <in.txt; }

This was tested on Bash 4.4. Why is that?

Best Answer

This diagnostic message is generated by the interactive shell's job control system, for the benefit of the user - it's not from the underlying program that crashed. When you pipe into a shell function a subshell is spawned to run the function, and this subshell is not treated as user-facing. If you call the function normally, it runs within the original shell, and the message is printed.

You can test this out by disabling job control in your current shell

set +m

and then running ./binary again: now it won't print anything there either. Re-enable job control with set -m.

Even a bare subshell has the same effect:

( : ; ./binary )

will print no diagnostic (two commands are required in there to avoid a subshell-eliding optimisation). Piping out of the function does it too.
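Even when the diagnostic is suppressed, the exit status still propagates out of the subshell. A quick check, again simulating the segfaulting ./binary with a self-killing script:

```shell
# Simulated segfaulting program, standing in for ./binary from the question.
cat > binary <<'EOF'
#!/bin/bash
kill -SEGV $$
EOF
chmod +x binary

( : ; ./binary )        # bare subshell: no diagnostic when run interactively
echo "exit status: $?"  # still 139, so the failure is detectable
```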

Job control is disabled in the subshell, and even if you re-enable it there manually, the message is still suppressed. This is an unfortunate gap in the system: in a non-interactive shell the message would always be reported through a different mechanism, and anywhere else in an interactive shell it would be reported as well.


If printing the diagnostic is important to you, making a script instead of a function will let you ensure it's always reported. Since the function already runs in a subshell when used in a pipeline, it can't modify the parent shell's state anyway, so switching to a script costs you little.
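A minimal sketch of that approach, using a hypothetical wrapper script name (run-binary.sh) and the same simulated segfaulting binary as a stand-in:

```shell
# Hypothetical wrapper script: the non-interactive shell running it reports
# the child's death-by-signal itself, so the diagnostic survives the pipeline.
cat > run-binary.sh <<'EOF'
#!/bin/bash
./binary
EOF
chmod +x run-binary.sh

# Stand-in for the crashing binary and input, as in the question:
cat > binary <<'EOF'
#!/bin/bash
kill -SEGV $$
EOF
chmod +x binary
echo hello > in.txt

cat in.txt | ./run-binary.sh    # stderr now carries a Segmentation fault line
echo "exit status: $?"          # 139, as before
```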


I wouldn't go quite as far as to say this is a bug. One possible reason to behave in this way is to make command substitution $(...), which also runs a subshell, behave appropriately:

foo=$(echo|test)

shouldn't result in the diagnostic message being stored in foo: pipeline failures should produce empty expansions. Another possible reason is to provide a deliberate way of temporarily suppressing the diagnostic messages.
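That behaviour is easy to confirm (again simulating the segfault with a self-killing script, since no real binary is given): the captured output is empty while the exit status still reflects the crash.

```shell
# Simulated segfaulting program, standing in for ./binary from the question.
cat > binary <<'EOF'
#!/bin/bash
kill -SEGV $$
EOF
chmod +x binary

function execute() { ./binary; }

foo=$(echo | execute)   # any diagnostic goes to stderr, never into foo
status=$?               # a plain assignment passes the substitution's status through
echo "foo='${foo}' status=${status}"   # foo='' status=139
```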
