As for a solution to redirect lots of commands at once:
#!/bin/bash
{
somecommand
somecommand2
somecommand3
} 2>&1 | tee -a "$DEBUGLOG"
Why your original solution does not work: exec 2>&1 redirects the shell's standard error to its standard output, which, if you run your script from a terminal, is that terminal. A pipe on a command only redirects the command's standard output.
From the point of view of somecommand, its standard output goes into a pipe connected to tee, and its standard error goes into the same file/pseudo-file as the shell's standard error, which you redirected to the shell's standard output: the terminal, if you run your program from a terminal.
The one true way to explain it is to see what really happens:
Your shell's original environment might look like this if you run it from the terminal:
stdin -> /dev/pts/42
stdout -> /dev/pts/42
stderr -> /dev/pts/42
After you redirect standard error into standard output (exec 2>&1), you basically change nothing. But if you redirect the script's standard output to a file, you end up with an environment like this:
stdin -> /dev/pts/42
stdout -> /your/file
stderr -> /dev/pts/42
Then redirecting the shell's standard error into its standard output would end up like this:
stdin -> /dev/pts/42
stdout -> /your/file
stderr -> /your/file
A command you run will inherit this environment. If you run a command and pipe it to tee, the command's environment would be:
stdin -> /dev/pts/42
stdout -> pipe:[4242]
stderr -> /your/file
So your command's standard error still goes into what the shell uses as its standard error.
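A minimal sketch of this effect (the /tmp paths and the fail function are invented for the demo): redirect the script's stdout to a file, duplicate stderr onto it, then pipe a command to tee and observe that the command's stderr bypasses the pipe.

```shell
#!/bin/sh
# Hypothetical demo command that writes to both streams.
fail() {
    echo "on stdout"
    echo "on stderr" >&2
}

exec > /tmp/shell.out   # stdout -> /tmp/shell.out
exec 2>&1               # stderr -> same file

# tee only sees what comes through the pipe, i.e. fail's stdout;
# fail's stderr inherits the shell's stderr (/tmp/shell.out).
fail | tee /tmp/tee.out
```

After running this, /tmp/tee.out contains only the stdout line; the stderr line went straight to /tmp/shell.out, never through the pipe.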
You can actually see a command's environment by looking in /proc/[pid]/fd: use ls -l to also list the symbolic links' targets. The 0 file here is standard input, 1 is standard output and 2 is standard error. If the command opens more files (and most programs do), you will also see them. A program can also choose to redirect or close its standard input/output and reuse 0, 1 and 2.
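For instance, a shell can list its own descriptors (Linux-specific, since it relies on /proc):

```shell
# $$ expands to the current shell's PID; the fd entries 0, 1 and 2
# are symlinks to whatever stdin, stdout and stderr point at.
ls -l /proc/$$/fd
```

Run from a terminal, all three typically point at the same /dev/pts/* device.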
This actually has nothing to do with the shell; it's a 'feature' of the mysql command-line utility.
Basically, when mysql detects that its output isn't going to a terminal, it enables output buffering, which improves performance. However, the program sends success output to STDOUT and error output to STDERR (which makes sense), and keeps a separate buffer for each, so the two streams can come out in a different order than the statements that produced them.
The solution is simply to add -n to the mysql command's arguments. The -n (or --unbuffered) option disables output buffering. For example:
mysql test -nvvf < import.txt >standard.txt 2>&1
If you redirect stdout for the rest of the script, then that would include the cat command, so you wouldn't see its output, and $file could fill up indefinitely with recursive copies of itself (and it would, were it not so small that cat sees the end of it before starting to write to it). To redirect stdout from now on, you simply do:
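A sketch of such a redirection, with /tmp/log.txt standing in for the question's $file:

```shell
file=/tmp/log.txt            # stand-in for the question's $file
exec > "$file"               # from now on, stdout goes to the file
echo "this lands in $file"   # not on the terminal
```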
But here maybe you want:
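Presumably an append-mode redirection, so the file's existing contents survive (again with a stand-in path):

```shell
file=/tmp/log.txt
exec >> "$file"        # >> opens the file in append mode
echo "appended line"   # added after whatever the file already held
```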
You can also do things like:
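For example, saving the original stdout on a spare descriptor so it can be restored later (a sketch with an invented path):

```shell
file=/tmp/log.txt
exec 3>&1 > "$file"       # fd 3 keeps the original stdout; stdout -> file
echo "goes to the file"
echo "goes to the original stdout" >&3
exec >&3 3>&-             # restore stdout from fd 3, then close fd 3
```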
(By the way, the above is what modern shells do internally when you redirect a compound command like { some; code; } > ..., except that they use an fd above 10 and set the O_CLOEXEC flag on it so that executed commands don't see it, while the Bourne shell would fork a subshell in that case.)
Also note that in the code above, you don't have to restore stdout for the rest of the script; you can do it for the cat command only. (The 3>&- (closing fd 3) is not necessary, as cat doesn't use its fd 3 anyway, but it is good practice in the general case; strictly speaking, we'd have to add it to every command to emulate the shell's O_CLOEXEC behaviour.)
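That cat-only variant might look like this (paths invented for the sketch):

```shell
file=/tmp/log.txt
note=/tmp/note.txt
echo "hello from note" > "$note"

exec 3>&1 > "$file"    # save the original stdout on fd 3
cat "$note" >&3 3>&-   # only this cat writes to the original stdout
echo "still goes to $file"
```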