3>&4- is a ksh93 extension, also supported by bash, that is short for 3>&4 4>&-: fd 3 now points to where fd 4 used to, and fd 4 is closed, so what was pointed to by fd 4 has now moved to fd 3.
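A small demonstration (bash or ksh93; the temporary file is only there to make the effect visible):

```shell
#!/usr/bin/env bash
tmp=$(mktemp)
# 4>"$tmp" first opens the file on fd 4; 3>&4- then moves it to fd 3:
# fd 3 now refers to the file, and fd 4 is closed.
{
  echo via-fd-3 >&3
  { echo via-fd-4 >&4; } 2>/dev/null || echo "fd 4 is closed"
} 4>"$tmp" 3>&4-
file_content=$(cat "$tmp"); rm -f "$tmp"
echo "$file_content"   # via-fd-3
```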
Typical usage is in cases where you've duplicated stdin or stdout to save a copy of it and want to restore it later, like in:
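For instance (a minimal sketch; the temporary log file is just for illustration):

```shell
#!/usr/bin/env bash
log=$(mktemp)
exec 3>&1 >"$log"   # save a copy of stdout on fd 3, then point stdout at the log
echo "this goes to the log file"
exec >&3-           # short for >&3 3>&-: restore stdout and close the saved copy
echo "back on the original stdout"
log_content=$(cat "$log"); rm -f "$log"
```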
Suppose you want to capture the stderr of a command (and stderr only) into a variable, while leaving its stdout alone.
Command substitution var=$(cmd) creates a pipe. The writing end of the pipe becomes cmd's stdout (file descriptor 1) and the other end is read by the shell to fill up the variable. Now, if you want stderr to go to the variable, you could do var=$(cmd 2>&1). Then both fd 1 (stdout) and fd 2 (stderr) go to the pipe (and eventually to the variable), which is only half of what we want.
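With a small stand-in for cmd that writes one line to each stream (the stand-in command is my own), this looks like:

```shell
var=$( { echo out; echo err >&2; } 2>&1 )
printf '%s\n' "$var"   # both lines end up in the variable: out, then err
```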
If we do var=$(cmd 2>&1-) (short for var=$(cmd 2>&1 >&-)), now only cmd's stderr goes to the pipe, but fd 1 is closed. If cmd tries to write any output, the write will fail with an EBADF error; and if it opens a file, the open will return the first free fd, so the open file will end up assigned to stdout, unless the command guards against that! Not what we want either.
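The closed-fd situation can be observed directly (a sketch; note that the shell's own write-error diagnostic also goes to fd 2 and thus to the pipe):

```shell
var=$( { echo out; echo err >&2; } 2>&1 >&- )
# fd 1 is closed, so "echo out" fails with EBADF; only the fd 2 output
# (plus, in some shells, the write-error diagnostic) ends up in the variable.
printf '%s\n' "$var"
```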
If we want the stdout of cmd to be left alone, that is, to point to the same resource that it pointed to outside the command substitution, then we somehow need to bring that resource inside the command substitution. For that, we can make a copy of stdout outside the command substitution and take it inside:
{
var=$(cmd)
} 3>&1
Which is a cleaner way to write:
exec 3>&1
var=$(cmd)
exec 3>&-
(which also has the benefit of restoring fd 3 instead of closing it in the end).
Then, from the { (or the exec 3>&1) up to the }, both fd 1 and fd 3 point to the resource fd 1 pointed to initially. fd 3 will also point to that resource inside the command substitution (command substitution only redirects fd 1, stdout). So above, for cmd, fds 1, 2 and 3 are:
- fd 1: the pipe to var
- fd 2: untouched
- fd 3: same as what fd 1 points to outside the command substitution
If we change it to:
{
var=$(cmd 2>&1 >&3)
} 3>&1-
Then it becomes:
- fd 1: same as what fd 1 points to outside the command substitution
- fd 2: the pipe to var
- fd 3: same as what fd 1 points to outside the command substitution
Now we've got what we wanted: stderr goes to the pipe and stdout is left untouched. However, we're leaking that fd 3 to cmd.
While commands (by convention) assume fds 0 to 2 to be open and to be standard input, output and error, they don't assume anything about other fds. Most likely they will leave that fd 3 untouched. If they need another file descriptor, they'll just do an open()/dup()/socket()... which returns the first available file descriptor. If (like a shell script that does exec 3>&1) they need to use that fd specifically, they will first assign it to something (and in that process, the resource held by our fd 3 will be released by that process).
It's good practice to close that fd 3 since cmd doesn't make use of it, but it's usually no big deal if we leave it assigned when we call cmd. The possible problems are: cmd (and potentially other processes that it spawns) has one fewer fd available to it; and, more seriously, the resource that fd points to may end up held by a process spawned by cmd in the background. That is a concern if the resource is a pipe or other inter-process communication channel (like when your script is being run as script_output=$(your-script)), as it means the process reading from the other end will never see end-of-file until that background process terminates.
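The effect is easy to reproduce with a background sleep standing in for such a spawned process (the 2-second duration is arbitrary):

```shell
#!/usr/bin/env bash
start=$(date +%s)
# The backgrounded sleep inherits the $(...) pipe as its stdout, so the
# shell keeps reading until sleep exits: this takes about 2 seconds.
v=$( { sleep 2 & } ; echo done )
mid=$(date +%s)
# Closing stdout for the background job releases the pipe (sleep never
# writes to stdout, so this is safe): the substitution returns at once.
w=$( { sleep 2 >&- & } ; echo done )
end=$(date +%s)
echo "with leaked fd: $((mid - start))s; with fd closed: $((end - mid))s"
```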
So here, it's better to write:
{
var=$(cmd 2>&1 >&3 3>&-)
} 3>&1
Which, with bash, can be shortened to:
{
var=$(cmd 2>&1 >&3-)
} 3>&1
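A complete runnable instance, with a stand-in command (writing one line to each stream), and the outer stdout pointed at a temporary file purely so we can inspect it afterwards:

```shell
tmp=$(mktemp)
{
  var=$( { echo out; echo err >&2; } 2>&1 >&3 3>&- )
} >"$tmp" 3>&1   # outer stdout goes to a file here only so we can check it
stdout_content=$(cat "$tmp"); rm -f "$tmp"
echo "captured stderr: $var"              # captured stderr: err
echo "stdout went to:  $stdout_content"   # stdout went to:  out
```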
To sum up the reasons why it's rarely used:
- It's non-standard and just syntactic sugar. You've got to balance saving a few keystrokes against making your script less portable and less obvious to people not used to that uncommon feature.
- The need to close the original fd after duplicating it is often overlooked, because most of the time we don't suffer from the consequences, so we just do >&3 instead of >&3- or >&3 3>&-.
Proof that it's rarely used is that, as you found out, it is buggy in bash. In bash, compound-command 3>&4- or any-builtin 3>&4- leaves fd 4 closed even after compound-command or any-builtin has returned. A patch to fix the issue is now (2013-02-19) available.
Inside $(...), stdout (fd 1) is a pipe. At the other end of the pipe, the shell reads the output and stores it into the $result variable.
With $({ blah; } 3>&1), we make both fd 3 and fd 1 point to that pipe in blah.

Here blah is cmd1 | cmd2, where cmd1 and cmd2 are started concurrently, with cmd1's fd 1 pointing to another pipe (the one to cmd2). However, we don't want the ssh output to go to that pipe; we want it to go to the first pipe so that it can be stored in $result.
So we have { ssh >&3; echo "$?"; } | cmd2, so that only the echo output goes to the pipe to cmd2, while the ssh output (fd 1) goes to $result. Since ssh has no need for fd 3, we close it for it after we've used it to set fd 1 (3>&-).
cmd2's input (fd 0) is the second pipe. Nothing writes to that pipe (since ssh is writing to the first pipe) until ssh terminates and echo outputs its exit status there.

So in cmd2 (the until loop), the read -t1 is actually waiting, with a one-second timeout, until ssh exits, after which read returns successfully with the content of $? fed by echo.
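A runnable sketch of the whole pattern, with sh -c standing in for ssh (the stand-in command and the messages are my own):

```shell
#!/usr/bin/env bash
result=$(
  {
    {
      sh -c 'echo remote-output; exit 3' >&3 3>&-  # stand-in for ssh
      echo "$?"                                    # exit status -> pipe to cmd2
    } | {
      # cmd2: poll once per second until the "remote" command's exit
      # status arrives on our stdin (the second pipe).
      until read -t 1 rc; do
        :  # periodic work could go here
      done
      echo "remote command exited with $rc" >&2
    }
  } 3>&1
)
echo "$result"   # remote-output
```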
Best Answer
The check is easy to do in C with either read(fd, 0, 0) or (fcntl(fd, F_GETFL) & O_WRONLY) == 0. I wasn't able to trick any standard utility into doing just that, so here are some possible workarounds.

On Linux, you can use /proc/PID/fdinfo/FD.

On OpenBSD and NetBSD, you can use /dev/fd/FD and dd with a zero count.

On FreeBSD, only the first 3 fds are provided by default in /dev/fd; you should either mount fdescfs(5) on /dev/fd or use another workaround.

Notes:
- On some systems, bash does its own emulation of /dev/fd/FD, and socat </dev/fd/7 may work completely differently from cat /dev/fd/7. The same caveat applies to gawk.
- A read(2) with length 0 (or an open(2) without O_TRUNC in its flags) will not update the access time or any other timestamps.
- On Linux, a read(2) will always fail on a directory, even if it was opened without the O_DIRECTORY flag. On other Unix systems, a directory may be read just like any other file.
- The standard leaves unspecified whether dd count=0 will copy no blocks or all blocks from the file: the former is the behaviour of GNU dd (gdd) and of the dd from *BSD.