When you execute a pipeline, each pipe-separated element is executed in its own process. Variable assignments only take effect in their own process. Under ksh and zsh, the last element of the pipeline is executed in the original shell; under other shells such as bash, each pipeline element is executed in its own subshell and the original shell just waits for them all to end.
$ bash -c 'GROUPSTATUS=foo; echo GROUPSTATUS is $GROUPSTATUS'
GROUPSTATUS is foo
$ bash -c 'GROUPSTATUS=foo | :; echo GROUPSTATUS is $GROUPSTATUS'
GROUPSTATUS is
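If you want the ksh/zsh behaviour in bash, versions 4.2 and later offer the lastpipe option, which runs the last element of a foreground pipeline in the current shell when job control is inactive (as it is in scripts). A hedged sketch, assuming bash >= 4.2:

```shell
# With lastpipe set and job control off, the last pipeline element runs in
# the current shell, so the variable set by `read` survives the pipeline.
bash -c 'shopt -s lastpipe; echo foo | read var; echo "var is $var"'
# prints: var is foo
```

Without lastpipe, the same command would print "var is " because read would run in a subshell.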
In your case, since you only care about all the commands succeeding, you can make the status code flow up.
{ tar -cf - my_folder 2>&1 1>&3 | grep -v "Removing leading" 1>&2;
! ((PIPESTATUS[0])); } 3>&1 |
gzip --rsyncable > my_file.tar.gz;
if ((PIPESTATUS[0] || PIPESTATUS[1])); then rm my_file.tar.gz; fi
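For reference, PIPESTATUS is a bash-specific array that records the exit status of each element of the most recently executed foreground pipeline; a minimal sketch:

```shell
# PIPESTATUS holds one status per pipeline element: here the first
# element fails (1) and the second succeeds (0).
bash -c 'false | true; echo "${PIPESTATUS[@]}"'
# prints: 1 0
```

Note that PIPESTATUS is overwritten by every command, which is why the snippet above tests it immediately after the pipeline.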
If you want to get more than 8 bits of information out of the left side of a pipe, you can write to yet another file descriptor. Here's a proof-of-principle example:
! { { tar …; echo $? >&4; } | …; } | { gzip …; echo $? >&4; } \
  4>&1 | grep -vxc '0'
(In bash, ! may only appear at the start of a pipeline, not after a |; since a pipeline's exit status is that of its last command, negating the whole pipeline negates grep's result.)
Once you get data on standard output, you can feed it into a shell variable using command substitution, i.e. $(…). Command substitution reads from the command's standard output, so if you also mean to print things to the script's standard output, they need to go temporarily through another file descriptor. The following snippet uses fd 3 for things that eventually go to the script's stdout and fd 4 for things that are captured into $statuses.
statuses=$({ { tar -v … >&3; echo tar $? >&4; } | …; } |
{ gzip …; echo gzip $? >&4; } 4>&1) 3>&1
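To make the plumbing concrete, here is a self-contained sketch with placeholder commands (echo and cat) standing in for tar and gzip; each pipeline element reports its exit status on fd 4, which the command substitution collects because the trailing 4>&1 points fd 4 at its captured output:

```shell
# Placeholder commands stand in for tar and gzip. Their real output flows
# through the pipe (and is discarded here); each one's exit status is
# written to fd 4, which $(...) captures thanks to 4>&1.
statuses=$(
  { { echo data; echo "produce $?" >&4; } |
    { cat >/dev/null; echo "consume $?" >&4; }; } 4>&1
)
echo "$statuses"
```

The two status lines may appear in either order, since the pipeline elements run concurrently.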
If you need to capture the output from different commands into different variables, I think there is no direct way even in “advanced” shells such as bash, ksh or zsh. Here are some workarounds:
- Use temporary files.
- Use a single output stream, with e.g. a prefix on each line to indicate its origin, and filter at the top level.
- Use a more advanced language such as Perl or Python.
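As an illustration of the single-stream workaround, each producer can tag its lines with an origin prefix, and the top level can split the merged stream back out by tag; the tags A and B below are arbitrary placeholders:

```shell
# Each producer prefixes its lines with a tag; the top level filters the
# merged stream by tag to recover each command's output separately.
merged=$( { printf 'A:%s\n' one two; printf 'B:%s\n' three; } )
a_out=$(printf '%s\n' "$merged" | sed -n 's/^A://p')  # lines "one" and "two"
b_out=$(printf '%s\n' "$merged" | sed -n 's/^B://p')  # line "three"
echo "$a_out"
echo "$b_out"
```

This only works cleanly if no legitimate output line can be mistaken for another command's tag, so pick prefixes that cannot occur in the data.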
When instructed to echo commands as they are executed ("execution trace"), both bash and ksh add single quotes around any word containing meta-characters (*, ?, ;, etc.).
The meta-characters could have gotten into the word in a variety of ways: the word (or part of it) could have been quoted with single or double quotes, the characters could have been escaped with a \, or they could remain as the result of a failed filename matching attempt. In all cases, the execution trace will contain single-quoted words, for example:
$ set -x
$ echo foo\;bar
+ echo 'foo;bar'
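You can see that the quotes live only in the trace by separating the two streams: the trace goes to stderr, while the command's real argument appears unquoted on stdout. A small sketch:

```shell
# Keep only the xtrace output (stderr): it shows the added quotes.
bash -xc 'echo foo\;bar' 2>&1 >/dev/null
# prints: + echo 'foo;bar'

# Keep only stdout: the argument arrives without any quotes.
bash -xc 'echo foo\;bar' 2>/dev/null
# prints: foo;bar
```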
This is just an artifact of the way the shells implement the execution trace; it doesn't alter the way the arguments are ultimately passed to the command. The quotes are added, printed, and discarded. Here is the relevant part of the bash source code, in print_cmd.c:
/* A function to print the words of a simple command when set -x is on. */
void
xtrace_print_word_list (list, xtflags)
...
{
...
for (w = list; w; w = w->next)
{
t = w->word->word;
...
else if (sh_contains_shell_metas (t))
{
x = sh_single_quote (t);
fprintf (xtrace_fp, "%s%s", x, w->next ? " " : "");
free (x);
}
As to why the authors chose to do this, the code there doesn't say. But here's some similar code in variables.c, and it comes with a comment:
/* Print the value cell of VAR, a shell variable. Do not print
the name, nor leading/trailing newline. If QUOTE is non-zero,
and the value contains shell metacharacters, quote the value
in such a way that it can be read back in. */
void
print_var_value (var, quote)
...
{
...
else if (quote && sh_contains_shell_metas (value_cell (var)))
{
t = sh_single_quote (value_cell (var));
printf ("%s", t);
free (t);
}
So possibly it's done so that it's easier to copy the command lines from the output of the execution trace and run them again.
Best Answer
Here's an example. We have a file called file that contains the string foo. When I ran grep with -q on file and the nonexistent wrongfile, grep exited with 0 status despite the "No such file" error, since file contained a match.