It's just that when stdout is not a terminal, output is buffered. And when you press Ctrl-C, that buffer is lost if it has not been written out yet.
You get the same behaviour with anything using stdio. Try for instance:
grep . > file
Enter a few non-empty lines and press Ctrl-C, and you'll see the file is empty.
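The same experiment can be run non-interactively; this is only a sketch, with kill -INT playing the role of Ctrl-C and the line fed by echo instead of the keyboard:

```shell
# Feed grep one matching line, keep its stdin open, then "press Ctrl-C".
{ echo hello; sleep 5; } | grep . > file &
pid=$!             # PID of grep, the last process in the pipeline
sleep 1
kill -INT "$pid"   # same default effect as Ctrl-C: terminate without flushing
wc -c < file       # 0 bytes: "hello" was matched but never left the buffer
```

The sleep keeps the pipe open so grep does not see end-of-file (which would make it flush and exit normally).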
On the other hand, type:
xinput test 10 > file
And type enough on the keyboard for the buffer to fill up (at least 4k worth of output), and you'll see the size of file grow in chunks of 4k at a time.
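You can see the same chunking without a keyboard. In this sketch, grep stands in for xinput, and yes/head generate about 20000 bytes of output before the writer stalls; the 4096-byte buffer size is typical for glibc but not guaranteed:

```shell
# Produce 10000 two-byte lines (20000 bytes), then stall with stdin open.
{ yes x | head -n 10000; sleep 5; } | grep . > file &
pid=$!             # PID of grep
sleep 1
kill -KILL "$pid"
wc -c < file       # 16384 on a typical glibc system: four full 4096-byte
                   # blocks were written; the remaining bytes were still
                   # in the buffer and were lost
```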
With grep, you can type Ctrl-D for grep to exit gracefully after having flushed its buffer. For xinput, I don't think there's such an option.
Note that by default stderr is not buffered, which explains why you get a different behaviour with fprintf(stderr, ...).
If, in xinput.c, you add a signal(SIGINT, exit), that is, tell xinput to exit gracefully when it receives SIGINT, you'll see the file is no longer empty (assuming it doesn't crash, as calling library functions from signal handlers isn't guaranteed to be safe: consider what could happen if the signal arrives while printf is writing to the buffer).
If it's available, you could use the stdbuf command to alter the stdio buffering behaviour:
stdbuf -oL xinput test 10 > file
There are many questions on this site that cover disabling stdio type buffering where you'll find even more alternative solutions.
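As a sketch of what -oL changes, here is the same idea using grep from the earlier experiment as a stand-in for xinput (so it works even without an X session):

```shell
# With line buffering, the line reaches the file as soon as it is
# printed, even though grep is still running:
{ echo first; sleep 5; } | stdbuf -oL grep . > file &
sleep 1
cat file    # "first" is already there; without stdbuf -oL it would be empty
```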
With a recent bash, you can use process substitution.
foo 2> >(tee stderr.txt)
This just sends stderr to a program running tee.
More portably:
exec 3>&1
foo 2>&1 >&3 | tee stderr.txt
This makes file descriptor 3 a copy of the current stdout (i.e. the screen), then sets up the pipe and runs foo 2>&1 >&3. This sends the stderr of foo to the same place as the current stdout, which is the pipe, and then sends the stdout to fd 3, the original output. The pipe feeds the original stderr of foo to tee, which saves it in a file and sends it to the screen.
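A concrete run, substituting ls for foo (the file names here are only an illustration: one exists, one does not, so ls produces both kinds of output):

```shell
exec 3>&1                    # fd 3: copy of the real stdout
ls /etc/hosts /no/such/file 2>&1 >&3 | tee stderr.txt
exec 3>&-                    # close the extra descriptor when done
# stderr.txt now holds only the error about /no/such/file;
# the listing of /etc/hosts went straight to the original stdout.
```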
Best Answer
There are two output streams connected to each process on a Unix system: standard output (stdout, file-descriptor 1) and standard error (stderr, file-descriptor 2). These may be redirected independently of each other. Standard input uses file-descriptor 0.
To redirect standard output to a file, use >file or the more explicit 1>file. Replace file by /dev/null to discard the data. To redirect standard error to a file, use 2>file. To redirect standard error to wherever standard output is currently going, use 2>&1. To redirect standard output to wherever standard error is currently going, use 1>&2.
There is no concept of "the final result" of a stream or process. I suppose whatever is sent to standard output may be taken as the "result" of a process, unless it also outputs data to some file it opens by itself or has other side-effects (like unlinking a file from a directory, in the case of rm, or handling a number of network connections, in the case of sshd). A process also returns an exit status (zero for "success" and non-zero for "failure") which could be seen as "the result" of that process, but this is not necessarily related to the output streams of the process.
Streams may also be redirected in append mode, which means that if the redirection is to a file, that file will not initially be truncated, and any data on the stream will be appended to the end of the file. One does this by using >>file instead of >file.
In the note in the question, a command is given that redirects (discards) only standard error. The standard output stream is not redirected at all and will therefore be visible, in its entirety, in the console or terminal. If it were an intermediate part of a pipeline, the standard output stream would be fed into the standard input of the next command in the pipeline.
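A sketch of the redirection forms described above, using ls purely as an illustration (one existing file, one missing file, so both streams carry data):

```shell
ls /etc/hosts                > out.txt        # stdout to a file
ls /etc/hosts               1> out.txt        # same thing, fd 1 made explicit
ls /no/such/file            2> err.txt        # stderr to a file
ls /etc/hosts /no/such/file  > both.txt 2>&1  # both streams into one file
ls /etc/hosts /no/such/file >> both.txt 2>&1  # append instead of truncating
ls /no/such/file            2> /dev/null      # discard the error message
```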
So to conclude, I'd say that there are two (not four) output streams. These may be redirected independently in various ways, which includes discarding their contents.