In a pipeline, all commands run concurrently (with their stdout/stdin connected by pipes), and therefore in different processes.
In
cmd1 | cmd2 | cmd3
All three commands run in different processes, so at least two of them have to run in a child process. Some shells run one of them in the current shell process (if it's a builtin like read, or if the pipeline is the last command of the script), but bash runs them all in their own separate processes (except with the lastpipe option in recent bash versions and under some specific conditions).
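A quick way to see the difference is with read at the end of a pipeline: by default the variable is lost because read ran in a subshell, while with lastpipe enabled (bash ≥ 4.2, job control off as in scripts) it survives. A minimal sketch:

```shell
#!/usr/bin/env bash
# Default bash behaviour: every pipeline element runs in its own process,
# so the variable set by "read" vanishes with its subshell:
line=
echo hello | read line
echo "default:  line='$line'"    # empty

# With lastpipe, the last element runs in the current shell:
shopt -s lastpipe
echo hello | read line
echo "lastpipe: line='$line'"    # hello
```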
{ ...; } groups commands. If that group is part of a pipeline, it has to run in a separate process, just like a simple command would.
In:
{ a; b "$?"; } | c
We need a shell to evaluate that a; b "$?" in a separate process, so we need a subshell. The shell could optimise by not forking for b, since it's the last command to be run in that group. Some shells do that, but apparently not bash.
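Assignments made inside such a group are lost once the pipeline finishes, which confirms the group ran in a subshell. A minimal check:

```shell
x=outer
{ x=inner; } | cat   # the group runs in a subshell because it's part of a pipeline
echo "$x"            # still "outer": the assignment died with the subshell
```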
bash shouldn't print the job status when non-interactive.
If that's indeed for an interactive bash, you can do:
{ pid=$(sleep 20 >&3 3>&- & echo "$!"); } 3>&1
We want sleep's stdout to go to where it was before, not the pipe that feeds the $pid variable. So we save the outer stdout in file descriptor 3 (3>&1) and restore it for sleep inside the command substitution. pid=$(...) returns as soon as echo terminates, because there's nothing left with an open file descriptor to the pipe that feeds $pid.
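You can check that the command substitution really does return immediately rather than waiting out the sleep. A small sketch using bash's SECONDS counter (the variable names here are ad hoc):

```shell
start=$SECONDS
{ pid=$(sleep 5 >&3 3>&- & echo "$!"); } 3>&1
elapsed=$(( SECONDS - start ))
echo "returned after ${elapsed}s, background pid: $pid"
kill "$pid" 2>/dev/null   # clean up the background sleep
```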
However, note that because it's started from a subshell (here, in a command substitution), that sleep will not run in a separate process group. So it's not the same as running sleep 20 & with regard to I/O to the terminal, for instance.
It may be better to use a shell that supports spawning disowned background jobs, like zsh, where you can do:
sleep 20 &! pid=$!
With bash, you can approximate it with:
{ sleep 20 2>&3 3>&- & } 3>&2 2> /dev/null; pid=$!; disown "$pid"
bash outputs the [1] 21578 to stderr. So again, we save stderr before redirecting it to /dev/null, and restore it for the sleep command. That way, the [1] 21578 goes to /dev/null but sleep's stderr goes as usual.
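The same save-and-restore dance can be verified with echo standing in for both sides (a sketch, not the real job-control case: the outer $( ... ) 2>&1 only captures stderr so we can inspect it; the first echo plays sleep's part, the second plays the [1] 21578 message):

```shell
# fd3 saves the "real" stderr before fd2 is pointed at /dev/null;
# the first echo routes its output back through the saved fd.
result=$( { { echo restored 2>&3 3>&- >&2; echo silenced >&2; } 3>&2 2>/dev/null; } 2>&1 )
echo "survived the redirection: $result"   # only "restored" got through
```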
If you're going to redirect everything to /dev/null anyway, you can simply do:
{ apt-get update & } > /dev/null 2>&1; pid=$!; disown "$pid"
To redirect only stdout:
{ apt-get update 2>&3 3>&- & } 3>&2 > /dev/null 2>&1; pid=$!; disown "$pid"
Best Answer
$(…) is a subshell by definition: it's a copy of the shell runtime state¹, and changes to the state made in the subshell have no impact on the parent. A subshell is typically implemented by forking a new process (but some shells may optimize this in some cases).

It isn't a subshell that you can retrieve variable values from. If changes to variables had an impact on the parent, it wouldn't be a subshell. It's a subshell whose output the parent can retrieve. The subshell created by $(…) has its standard output set to a pipe, and the parent reads from that pipe and collects the output.

There are several other constructs that create a subshell. I think this is the full list for bash:
- ( … ) does nothing but create a subshell and wait for it to terminate. Contrast with { … }, which groups commands purely for syntactic purposes and does not create a subshell.
- … & creates a subshell and does not wait for it to terminate.
- … | … creates two subshells, one for the left-hand side and one for the right-hand side, and waits for both to terminate. The shell creates a pipe and connects the left-hand side's standard output to the write end of the pipe and the right-hand side's standard input to the read end. In some shells (ksh88, ksh93, zsh, bash with the lastpipe option set and effective), the right-hand side runs in the original shell, so the pipeline construct only creates one subshell.
- $(…) (also spelled `…`) creates a subshell with its standard output set to a pipe, collects the output in the parent and expands to that output, minus its trailing newlines. (And the output may be further subject to splitting and globbing, but that's another story.)
- <(…) creates a subshell with its standard output set to a pipe and expands to the name of the pipe. The parent (or some other process) may open the pipe to communicate with the subshell.
- >(…) does the same but with the pipe on standard input.
- coproc … creates a subshell and does not wait for it to terminate. The subshell's standard input and output are each set to a pipe, with the parent being connected to the other end of each pipe.

¹ As opposed to running a separate shell.
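A short sketch contrasting three of these constructs, using variable visibility and command-substitution output (ad hoc variable names):

```shell
v=1
( v=2 )                      # ( … ): subshell, the assignment is lost
after_paren=$v               # still 1
{ v=3; }                     # { … }: same shell, the assignment persists
after_brace=$v               # now 3

out=$(printf 'a\nb\n\n\n')   # $(…): output collected, trailing newlines stripped
echo "paren=$after_paren brace=$after_brace out='$out'"
```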