It doesn't matter, because both `4>&1` and `4<&1` do the same thing: `dup2(1, 4)`, the system call that duplicates one fd onto another. The duplicated fd automatically inherits the I/O direction of the original fd. (The same goes for `4>&-` vs `4<&-`, which both resolve to `close(4)`, and for `4>&1-`, which is `dup2(1, 4)` followed by `close(1)`.)

However, the `4<&1` syntax is confusing unless for some reason fd 1 was explicitly open for reading (which would be even more confusing), so to my mind it should be avoided.
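A quick sketch of this equivalence (assuming bash; the fd number 4 and the messages are arbitrary) — both spellings leave fd 4 writable, because fd 1 was open for writing:

```shell
# Both forms perform dup2(1, 4): fd 4 inherits fd 1's write direction.
{ echo via-out-dup >&4; } 4>&1
{ echo via-in-dup  >&4; } 4<&1   # same effect despite the "<" spelling
```

Both lines print their message on standard output.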
The duplicated fd shares the same open file description, which means the two fds share the same offset within the file (for those file types where that makes sense) and the same associated flags (I/O redirection/opening mode, O_APPEND, and so on).
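For instance (a sketch assuming bash and a scratch file), two fds obtained by duplication advance a single shared offset, so writes through either fd continue where the other left off:

```shell
tmp=$(mktemp)
exec 3> "$tmp" 4>&3      # fd 4 duplicates fd 3: one open file description
printf ab >&3            # offset is now 2
printf cd >&4            # continues at offset 2; does not overwrite "ab"
cat "$tmp"               # → abcd
exec 3>&- 4>&-
rm -f "$tmp"
```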
On Linux, there's another way to duplicate a fd (which is not really a duplication), creating a new open file description for the same resource but with possibly different flags:

```shell
exec 3> /dev/fd/4
```

While on Solaris and probably most other Unices that is more or less equivalent to `dup2(4, 3)`, on Linux it opens the same resource as the one pointed to by fd 4 from scratch. That is an important difference, because, for instance, for a regular file the offset of fd 3 will be 0 (the beginning of the file) and the file will be truncated (which is why, on Linux, you need to write `tee -a /dev/stderr` instead of `tee /dev/stderr`).
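A sketch of that Linux behaviour (assuming bash on Linux; on Solaris and other systems where `/dev/fd/3` is a plain `dup2`, the file would not be truncated):

```shell
tmp=$(mktemp)
exec 3> "$tmp"
echo 1234567890 >&3
# On Linux, opening /dev/fd/3 reopens the underlying file from scratch,
# so the redirection below truncates it and writes at offset 0:
echo new > /dev/fd/3
cat "$tmp"               # → new   (the ten digits are gone)
exec 3>&-
rm -f "$tmp"
```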
And the I/O mode can be different. Interestingly, if fd 4 pointed to the reading end of a pipe, then fd 3 now points to the writing end (`/dev/fd/3` behaves like a named pipe):

```shell
$ echo a+a | { echo a-a > /dev/fd/0; tr a b; }
b+b
b-b
$ echo a+a | { echo a-a >&0; tr a b; }
bash: echo: write error: Bad file descriptor
b+b
```
It's just like running two processes that write to the same file at the same time: a bad idea. You wind up with two different open file handles, and your data can get garbled (as it does in #3 above). Using syntax #2 is correct; it makes one file handle and points both stderr and stdout to the same place.
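A sketch of that garbling (assuming bash, and assuming the two forms under discussion are `> file 2> file`, which creates two open file descriptions, versus `> file 2>&1`, which shares one):

```shell
tmp=$(mktemp)
# Two open file descriptions, two independent offsets starting at 0:
# the stderr write overwrites the stdout write at the start of the file.
{ echo output; echo error-text >&2; } > "$tmp" 2> "$tmp"
cat "$tmp"               # → error-text   ("output" was clobbered)
# One description shared by fd 1 and fd 2: the writes follow each other.
{ echo output; echo error-text >&2; } > "$tmp" 2>&1
cat "$tmp"               # → output, then error-text
rm -f "$tmp"
```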
As for stderr always being printed first, there is no rule on this whatsoever. I suspect with `ls` it is because `ls` needs to check every entry in the directory before it can actually state that a particular file doesn't exist. So rather than make N passes over the directory table, it makes a single pass, checking for all the command-line arguments given, reports the errors, and prints the files it found. Other commands may print to stderr after stdout, or even alternate between them.
To append text to a file you use `>>`. To overwrite the data currently in that file, you use `>`. In general, in bash and other shells, you escape special characters using `\`. So, when you use `echo foo >\>`, what you are saying is "redirect to a file called `>`", because you are escaping the second `>`. It is equivalent to using `echo foo > \>`, which is the same as `echo foo > '>'`.

So, yes, as Sirex said, that is likely a typo in your book.
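A minimal demonstration (assuming bash, run in a scratch directory so the oddly named file doesn't clutter anything):

```shell
cd "$(mktemp -d)"
echo foo >\>        # the backslash escapes the second ">": it names a file
cat '>'             # → foo
ls                  # shows a file literally named ">"
```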