As for a solution to redirect lots of commands at once:
#!/bin/bash
{
somecommand
somecommand2
somecommand3
} 2>&1 | tee -a "$DEBUGLOG"
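If you would rather not wrap everything in braces, an alternative sketch (assuming bash, with a hypothetical log path) is to redirect the whole script once near the top using process substitution:

```shell
#!/bin/bash
DEBUGLOG=/tmp/debug.log            # hypothetical log path
# Send the script's stdout through tee, then fold stderr into stdout,
# so every later command is logged without further plumbing.
exec > >(tee -a "$DEBUGLOG") 2>&1
echo "this line reaches both the console and the log"
echo "this error does too" >&2
```

Note that tee runs asynchronously here, so the last lines may hit the log slightly after the script exits.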
Why your original solution does not work: exec 2>&1 will redirect the shell's standard error to its standard output, which, if you run your script from the console, will be your console. The pipe redirection on a command only redirects that command's standard output.
From the point of view of somecommand, its standard output goes into a pipe connected to tee, and its standard error goes into the same file/pseudo-file as the shell's standard error, which you redirected to the shell's standard output - the console, if you ran your program from the console.
The clearest way to explain it is to look at what really happens.
Your shell's original environment might look like this if you run it from the terminal:
stdin -> /dev/pts/42
stdout -> /dev/pts/42
stderr -> /dev/pts/42
After you redirect standard error into standard output (exec 2>&1), you basically change nothing. But if you redirect the script's standard output to a file, you end up with an environment like this:
stdin -> /dev/pts/42
stdout -> /your/file
stderr -> /dev/pts/42
Then redirecting the shell's standard error into its standard output ends up like this:
stdin -> /dev/pts/42
stdout -> /your/file
stderr -> /your/file
Running a command will inherit this environment. If you run a command and pipe it to tee, the command's environment will be:
stdin -> /dev/pts/42
stdout -> pipe:[4242]
stderr -> /your/file
So your command's standard error still goes into what the shell uses as its standard error.
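A minimal sketch that demonstrates this: only stdout enters the pipe to tee, while stderr keeps whatever the shell was using as stderr (redirected to a file here so we can check it):

```shell
#!/bin/bash
# stdout goes through the pipe to tee; stderr bypasses the pipe entirely.
out=$( { echo "to stdout"; echo "to stderr" >&2; } 2>/tmp/err.txt | tee /tmp/copy.txt )
echo "came through the pipe: $out"
echo "bypassed the pipe:     $(cat /tmp/err.txt)"
```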
You can actually see a command's environment by looking in /proc/[pid]/fd: use ls -l to also list the symbolic links' targets. The 0 file here is standard input, 1 is standard output and 2 is standard error. If the command opens more files (and most programs do), you will see them too. A program can also choose to redirect or close its standard input/output and reuse 0, 1 and 2.
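For instance, you can inspect the current shell's descriptors like this (a sketch; /proc is Linux-specific):

```shell
# List the current shell's file descriptors; the symlink targets show
# where stdin (0), stdout (1) and stderr (2) actually point.
ls -l /proc/$$/fd
# Inside a pipeline, fd 1 shows up as a pipe (printed on stderr so it
# is not swallowed by the pipe itself):
{ ls -l /proc/self/fd >&2; } | cat
```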
You might use coprocesses. Here is a simple wrapper that feeds both outputs of a given command to two sed instances (one for stderr, the other for stdout), which do the tagging:
#!/bin/bash
exec 3>&1                                # save the script's stdout on fd 3
coproc SEDo ( sed "s/^/STDOUT: /" >&3 )  # tags stdout lines, writes to the saved stdout
exec 4>&2-                               # move stderr to fd 4 (closes fd 2)
coproc SEDe ( sed "s/^/STDERR: /" >&4 )  # tags stderr lines, writes to the original stderr
eval "$@" 2>&${SEDe[1]} 1>&${SEDo[1]}    # run the command with each stream fed to its coprocess
eval exec "${SEDo[1]}>&-"                # close the write ends so the seds see EOF and finish
eval exec "${SEDe[1]}>&-"
Note several things:
It is a magic incantation for many people (including me) - for a reason (see the linked answer below).
There is no guarantee it won't occasionally swap a couple of lines - it all depends on the scheduling of the coprocesses. Actually, at some point in time it almost certainly will. That said, if you want to keep the order strictly the same, you have to process the data from both stderr and stdout in the same process, otherwise the kernel scheduler can (and will) make a mess of it.
If I understand the problem correctly, it means that you would need to instruct the shell to redirect both streams to one process (which can be done, AFAIK). The trouble starts when that process has to decide what to act upon first - it would have to poll both data sources, and at some point it would be processing one stream while data arrives on both before it finishes. And that is exactly where it breaks down. It also means that wrapping the output syscalls, as stderred does, is probably the only way to achieve your desired outcome (and even then you might have a problem once something becomes multithreaded on a multiprocessor system).
As far as coprocesses go, be sure to read Stéphane's excellent answer in How do you use the command coproc in Bash? for in-depth insight.
Best Answer
I think this will do what you want:
test.pl 2>&1 >all.txt | add_timestamps | tee -a all.txt > errors.txt
2>&1 >all.txt sends standard error into the pipe and standard output to all.txt
add_timestamps prepends a timestamp to each error line
tee -a all.txt appends the errors (with timestamps) to all.txt
> errors.txt collects only the errors in errors.txt
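A runnable sketch of that pipeline, with stand-ins for test.pl and add_timestamps (both names come from the question; the implementations here are hypothetical):

```shell
#!/bin/bash
# Stand-in for add_timestamps: prefix each incoming line with a time stamp.
add_timestamps() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date +%T)" "$line"
    done
}
# Stand-in for test.pl: one line on each stream.
{ echo "normal output"; echo "an error" >&2; } \
    2>&1 >all.txt | add_timestamps | tee -a all.txt > errors.txt
```

Afterwards all.txt holds the normal output plus the timestamped errors, and errors.txt holds only the timestamped errors.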