As far as I know, there is no elegant way of doing this (for an inelegant way that works but ain't pretty, scroll down to the end of my answer). I doubt you can do any better than:
$ command >stdout.txt 2>stderr.txt && cat stdout.txt stderr.txt > both.txt
There are various cool tricks you can use but none of them seem to produce the 3 files any better than the above. The main problem is that the file both.txt will not show the STDOUT and STDERR messages in the correct order. This is because (as explained here):
When you redirect both standard output and standard error to the same
file, you may get some unexpected results. This is due to the fact
that STDOUT is a buffered stream while STDERR is always unbuffered.
This means that every character of STDERR is written as soon as it is
available while STDOUT writes stuff in batches. When both STDOUT and
STDERR are going to the same file you may see error messages appear
sooner than you would have expected them in relation to the actual
output of your program or script. It isn’t anything to be alarmed
about but is simply a side-effect of buffered vs. unbuffered streams,
you just need to keep it in mind.
The best alternative I could find uses bash process substitution. It is kind of complex and still does not display the output in the correct order. I made a simple Perl script, test.pl, that prints "OUT" to STDOUT and "ERR" to STDERR, repeating the process 3 times:
#!/usr/bin/perl
for ($i = 0; $i <= 2; $i++) {
    print STDOUT "OUT\n";
    print STDERR "ERR\n";
}
Its normal, un-redirected output is:
$ ./test.pl
OUT
ERR
OUT
ERR
OUT
ERR
To redirect output(s) I ran:
(./test.pl 2> >(tee error.txt) > >(tee out.txt)) > both.txt
This uses tee, a program that writes its input both to the screen and to a file. So, I am redirecting STDERR and passing it as input to tee, telling it to write it to the file error.txt. Similarly with STDOUT and the file out.txt. I am placing the whole thing in a subshell ((...)) so I can then capture all of its output and redirect it to both.txt.
Now, this works inasmuch as it creates 3 files: one with STDERR, one with STDOUT, and one with both. However, as explained above, the messages appear in the wrong order in both.txt:
$ cat both.txt
ERR
ERR
ERR
OUT
OUT
OUT
The only way around this I could find was to prepend the time each line was printed and then sort, but it gets seriously convoluted and, in your place, I would ask myself if it is really worth it:
$ (./test.pl \
2> >(while read n; do echo "$(date +%s%N) $n"; echo "$n" >> error.txt; done) \
> >(while read n; do echo "$(date +%s%N) $n"; echo "$n" >> out.txt; done)) \
| sort -n | gawk '{print $2}' > both.txt
A process isn't "killed with SIGHUP" -- at least, not in the strict sense of the word. Rather, when the connection is dropped, the terminal's controlling process (in this case, Bash) is sent a hang-up signal*, commonly abbreviated as the "HUP signal", or just SIGHUP.
Now, when a process receives a signal, it can handle it any way it wants**. The default for most signals (including HUP) is to exit immediately. However, the program is free to ignore the signal instead, or even to run some kind of signal handler function.
Bash chooses the last option. Its HUP signal handler checks whether the "huponexit" option is set, and if so, sends SIGHUP to each of its child processes. Only once it's finished with that does Bash exit.
Likewise, each child process is free to do whatever it wants when it receives the signal: leave it set to the default (i.e. die immediately), ignore it, or run a signal handler.
Nohup simply changes the default action for the child process to "ignore". Once the child process is running, however, it is free to change its own response to the signal.
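You can watch the "ignore" disposition do its job (a sketch; the sleep durations are arbitrary):

```shell
# nohup starts the child with SIGHUP set to "ignore"; the HUP we send
# is discarded and the process keeps running:
nohup sleep 30 2>/dev/null &
pid=$!
sleep 1            # give nohup time to set the disposition and exec
kill -HUP "$pid"
sleep 1
kill -0 "$pid" && echo "still running"   # kill -0 = "does it exist?"
kill "$pid"        # clean up
```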
This, I think, is why some programs die even though you ran them with nohup:
- Nohup sets the default action to "ignore".
- The program needs to do some kind of cleanup when it exits, so it installs a SIGHUP handler, incidentally overwriting the "ignore" flag.
- When the SIGHUP arrives, the handler runs, cleaning up the program's data files (or whatever needed to be done) and exits the program.
- The user doesn't know or care about the handler or cleanup, and just sees that the program exited despite nohup.
This is where "disown" comes in. A process that's been disowned by Bash is never sent the HUP signal, regardless of the huponexit option. So even if the program sets up its own signal handler, the signal is never actually sent, so the handler never runs. Note, however, that if the program tries to display some text to a user that's logged out, it will cause an I/O error, which could cause the program to exit anyway.
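For completeness, a sketch of disown in action (Bash-specific; the 2-second sleep is arbitrary):

```shell
# After disown, the job is gone from Bash's job table; `jobs` prints
# nothing, and Bash will not send the process SIGHUP on exit:
bash -c 'sleep 2 & disown; jobs'    # prints nothing
```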
* And, yes, before you ask, the "hang-up" terminology is left over from UNIX's dialup mainframe days.
** Most signals, anyway. SIGKILL, for instance, always causes the program to terminate immediately, period.
How?
It opens /dev/tty. The relevant line from strace ssh … shows /dev/tty being opened, returning file descriptor 4; that descriptor is then used with write(2) and read(2). (Testbed: OpenSSH_7.9p1 Debian-10+deb10u2.)
Why?
I'm not sure about "*nix idioms", whatever they are; but POSIX explicitly allows this:
/dev/tty -- In each process, a synonym for the controlling terminal associated with the process group of that process, if any. It is useful for programs or shell procedures that wish to be sure of writing messages to or reading data from the terminal no matter how output has been redirected.
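A quick local illustration (a sketch, run from an interactive shell; captured.txt is an arbitrary name): writes to /dev/tty reach the terminal no matter how stdout is redirected, which is exactly the guarantee tools like ssh rely on:

```shell
# Everything on fd 1 goes to captured.txt, yet the second message
# still appears on the terminal: /dev/tty bypasses the redirection.
{ echo "to stdout"; echo "to the terminal" > /dev/tty; } > captured.txt
cat captured.txt    # contains only "to stdout"
```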
Tools that need to interact with the user tend to use /dev/tty because it makes sense. Usually when users do this:
$ ssh user@server tool <local_file_0 >local_file_1 2>local_file_2
they want it to be as similar as possible to this:
$ tool <local_file_0 >local_file_1 2>local_file_2
The only difference should be where the tool actually runs. Usually users want ssh to be transparent. They don't want it to litter local_file_1 or local_file_2, and they don't want to wonder if they need to put yes or no in local_file_0 in case ssh asks. Often one cannot predict if ssh will ask in any particular case.
Note that when you run ssh user@server tool there's a shell involved on the remote side (compare this answer of mine). The shell can source some startup scripts that can litter the output. This is a different issue (and a reason the relevant startup scripts should be silent).
Solutions
As stated above, your wish is rather unusual. This doesn't mean it's weird or totally uncommon; there are use cases where one really wants this. The solution is to provide a pseudo-terminal you can control. The right tool is expect(1). Not only will it provide a tty, it will also allow you to implement some logic. You will be able to detect (and log) "Are you sure you want to continue connecting" and answer yes or no; or nothing if ssh doesn't ask.
If you want to capture the whole output while interacting normally then consider script(1).
Broader picture
Up to this point we were interested in allocating a tty on the client side, i.e. where ssh runs. In general you may want to run a tool that needs /dev/tty on the server side. The SSH server is able to allocate a pseudo-terminal or not; the relevant options are -t and -T (see man 1 ssh). E.g. if you run something like ssh user@server sudo whoami, then you will most likely see sudo: no tty present …. Provide a tty on the remote side and it will work:
$ ssh -t user@server sudo whoami
But there's a quirk. Without -t the default stdin, stdout and stderr of the remote command are connected to the stdin, stdout and stderr of the local ssh process. This means you can tell apart the remote stdout from the remote stderr locally. With -t the default stdin, stdout and stderr (and /dev/tty) of the remote command point to the same pseudo-terminal; now stdout, stderr and whatever the remote command writes to its /dev/tty get combined into a single stream the local ssh prints to its (local) stdout. You cannot tell them apart locally. A command like ssh -t user@server sudo whoami > local_file will therefore write the prompt(s) from sudo to the file! By using /dev/tty, sudo itself tries to be transparent when it comes to redirections, but ssh -t sabotages this.
In this case it would be useful if ssh provided an option to allocate a pseudo-terminal (/dev/tty) on the remote side and to connect it to /dev/tty of the local ssh, while still connecting the default remote stdin, stdout and stderr to their local counterparts. Four separate channels (one of them bidirectional: /dev/tty).
AFAIK there is no such option. Currently you can have either three unidirectional channels (without /dev/tty for the remote process), or what appears as one bidirectional channel (/dev/tty) for the remote process and two unidirectional channels (stdin and stdout of the local ssh) for the local user.
Your original command does not specify a remote command, so it runs an interactive shell on the remote side and does provide a pseudo-terminal for it, as if you had used -t (unless there is no local terminal). This is the "one bidirectional channel for the remote process" case. It means that /tmp/err can only get stderr from ssh itself (e.g. if you used ssh -v).
An interactive shell whose output is not printed to the (local) terminal cannot easily be used interactively. I hope this was only a minimal example (if not, then maybe you need to rethink this).
Anyway, as you can see, situations involving /dev/tty, ssh and other tools that use /dev/tty can get complicated.