Bash – A way to distinguish between interleaved output from two background processes

Tags: bash, job-control, shell

If I have two backgrounded processes that both write to stdout or stderr (e.g., two installation scripts), is there an easy way to make the two output streams distinguishable? I guess I can pipe each process's output through a sed program that prefixes every line with a different tag, but I'm looking for something easier.
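For reference, the sed-based tagging mentioned in the question can be sketched like this (the two subshells are hypothetical stand-ins for the installation scripts; the tags are arbitrary):

```shell
# Hypothetical stand-ins for the two installers; each process's
# combined output is piped through sed, which prefixes a tag.
( echo "installing A"; echo "warning" >&2 ) 2>&1 | sed 's/^/[A] /' &
( echo "installing B" ) 2>&1 | sed 's/^/[B] /' &
wait   # block until both tagged pipelines finish
```

Every line then carries a `[A]` or `[B]` prefix, so the interleaved streams stay distinguishable.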
Related Solutions
The command does not hang. You think that the command is hanging because you don't see the prompt. The prompt is there. You don't see the prompt because it was pushed up by the output of the background process. Pressing enter after the long output of a background process causes the shell to "execute" the empty line and print a new prompt.
Try the following to convince yourself:
- Execute:
find . &
- Wait until the output is done.
- You see a blinking cursor (or similar), but no prompt.
- Type:
echo foo
- Press Enter.
- You see
foo
printed, followed by a new prompt.
More experiments:
seq 10 &
This will print the numbers 1 to 10 and then a prompt.
seq 10000 &
This will print the numbers 1 to 10000, after which you see a blinking cursor and no prompt. But the prompt is there: type echo foo
and press Enter, and you will see foo
printed, followed by a new prompt.
(sleep 2; seq 10) &
This command emulates the waiting time of a command with long output, without actually producing much output. On my system it has the following effect: first, sleep 2
runs in the background. Moments later the shell prints the prompt. Then, after 2 seconds, seq 10
runs in the background, prints ten lines, and pushes the prompt up. Then the background job is done.
So you see that the background job always finishes and you always get a prompt; you just don't always see the prompt. When the background job finishes quickly, the shell prints the prompt at the end and you see it. When the background job takes a while to print its output, the shell has already printed a prompt, but that prompt gets pushed up so you no longer see it.
Even more experiments:
Try seq 10000 &
or any other large number where you don't see a prompt at the end of the output. Now try half that number, in this example seq 5000 &
. Do you see a prompt? If you do, try a larger number, for example seq 7500 &
. If you don't see a prompt, try a smaller number, for example seq 2500 &
. Keep bisecting like this until you find a number where the prompt is pushed up by just a few lines. The number will vary from run to run, because what we have here is effectively a race condition between the background process and the shell process.
You might use coprocesses: a simple wrapper that feeds both output streams of a given command into two sed
instances (one for stderr
, the other for stdout
), which do the tagging.
#!/bin/bash
# Duplicate the script's stdout to fd 3 so the coprocess can reach it.
exec 3>&1
# Coprocess that tags stdout lines; its output goes to the real stdout.
coproc SEDo ( sed "s/^/STDOUT: /" >&3 )
# Move the script's stderr to fd 4 for the second coprocess.
exec 4>&2-
# Coprocess that tags stderr lines; its output goes to the real stderr.
coproc SEDe ( sed "s/^/STDERR: /" >&4 )
# Run the given command with its streams fed into the coprocesses.
eval "$@" 2>&${SEDe[1]} 1>&${SEDo[1]}
# Close the write ends so the sed coprocesses see EOF and exit.
eval exec "${SEDo[1]}>&-"
eval exec "${SEDe[1]}>&-"
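A self-contained demo of the wrapper, assuming you save it as tagger.sh (the filename and the echo commands are hypothetical; here the script is written out via a heredoc so the example runs on its own):

```shell
# Write the wrapper above to a file (filename tagger.sh is arbitrary).
cat > tagger.sh <<'EOF'
#!/bin/bash
exec 3>&1
coproc SEDo ( sed "s/^/STDOUT: /" >&3 )
exec 4>&2-
coproc SEDe ( sed "s/^/STDERR: /" >&4 )
eval "$@" 2>&${SEDe[1]} 1>&${SEDo[1]}
eval exec "${SEDo[1]}>&-"
eval exec "${SEDe[1]}>&-"
EOF
# Run an arbitrary command string through it; lines come out tagged
# (possibly a moment after the script returns, since the sed
# coprocesses flush asynchronously).
bash tagger.sh 'echo hello; echo oops >&2'
```

Because the tagging happens in two separate coprocesses, the relative order of stdout and stderr lines is not guaranteed — see the notes below.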
Note several things:
It is a magic incantation for many people (including me) - for a reason (see the linked answer below).
There is no guarantee that it won't occasionally swap a couple of lines - that depends entirely on the scheduling of the coprocesses. In fact, it is almost guaranteed that at some point it will. If you want to keep the order strictly the same, you have to process the data from both stderr
and stdout
in the same process; otherwise the kernel scheduler can (and will) make a mess of it. If I understand the problem correctly, that means you would need to instruct the shell to redirect both streams to one process (which can be done, AFAIK). The trouble starts when that process has to decide which data to act upon first: it would have to poll both data sources, and at some point it would be processing one stream while data arrive on both before it finishes. That is exactly where it breaks down. It also means that wrapping the output syscalls, as stderred
does, is probably the only way to achieve your desired outcome (and even then you might have a problem once something becomes multithreaded on a multiprocessor system).
As far as coprocesses go, be sure to read Stéphane's excellent answer in How do you use the command coproc in Bash? for in-depth insight.
Best Answer
The easiest solution would be to start each of the two background jobs and redirect their output to files:
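A minimal sketch of that (the sh -c commands are hypothetical stand-ins for your installation scripts, and the log filenames are arbitrary):

```shell
# Stand-ins for the two installers; each job's stdout and stderr
# go to its own pair of files.
sh -c 'echo installing A; echo warn >&2' > job1.log 2> job1.err &
sh -c 'echo installing B' > job2.log 2> job2.err &
wait             # block until both background jobs finish
cat job1.log     # inspect each stream afterwards
```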
This has the added benefit of not clogging up your terminal with output.
You may obviously redirect both the error and output streams to the same file too:
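For example (again with a hypothetical stand-in command), using 2>&1 to merge stderr into the same file:

```shell
# Both streams of one job land in a single per-job file.
sh -c 'echo out line; echo err line >&2' > job1.log 2>&1 &
wait   # block until the background job finishes
```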
You could also use tmux
. tmux will exit as soon as all commands have exited. To avoid that, change "utility"
to "utility;read"
. This will make the pane stay open until you press Enter.