Is >&- More Efficient Than >/dev/null? – Shell Scripting Insights

file-descriptors, shell

Yesterday I read this SO comment which says that in the shell (at least bash) >&- "has the same result as" >/dev/null.

That comment actually refers to the ABS guide as the source of its information. But that source says that the >&- syntax "closes file descriptors".

It is not clear to me whether the two actions of closing a file descriptor and redirecting it to the null device are totally equivalent. So my question is: are they?

On the surface of it, it seems that closing a descriptor is like closing a door, while redirecting it to the null device is like opening a door to limbo! The two don't seem exactly the same to me, because if I see a closed door I won't try to throw anything out of it, but if I see an open door I will assume I can.

In other words, I have always wondered whether >/dev/null means that cat mybigfile >/dev/null would actually process every byte of the file and write it to /dev/null, which simply discards it. On the other hand, if the command finds its output descriptor closed, I tend to think (but am not sure) that it would simply not write anything, though the question remains whether cat would still read every byte.
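(For what it's worth, I suppose one could watch the system calls to check this. A rough sketch, assuming a Linux box with strace and GNU cat, and with mybigfile as a placeholder:

$ strace -e trace=read,write cat mybigfile >/dev/null
$ strace -e trace=read,write cat mybigfile >&-

I would expect the first command to show a write() for every buffer read, the data only being discarded inside the kernel, and the second to fail on its very first write(), but I'm not sure, hence the question.)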

This comment says >&- and >/dev/null "should" be the same, but that is not a very resounding answer to me. I'd like a more authoritative answer, with some reference to the standard or to the source code or whatnot…

Best Answer

No, you certainly don't want to close file descriptors 0, 1 and 2.

If you do so, then the first time the application opens a file, that file will end up on the descriptor that used to be stdin/stdout/stderr...

For instance, if you do:

echo text | tee file >&-

When tee (at least some implementations, like busybox's) opens the file for writing, that file ends up on file descriptor 1 (stdout). So tee writes text twice into file:

$ echo text | strace tee file >&-
[...]
open("file", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 1
read(0, "text\n", 8193)                 = 5
write(1, "text\n", 5)                   = 5
write(1, "text\n", 5)                   = 5
read(0, "", 8193)                       = 0
exit_group(0)                           = ?

That has been known to cause security vulnerabilities. For instance:

chsh 2>&-

And chsh (a setuid application) may end up writing its error messages into /etc/passwd.
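A rough way to see the descriptor reuse behind this (Linux-specific, as it relies on /proc, and the exact listing will vary):

$ sh -c 'exec 2>&-; ls -l /proc/self/fd'

With fd 2 closed, the directory that ls opens in order to produce that listing should itself typically show up as descriptor 2, the lowest free number, just as the password file can be the next thing chsh opens, so that error messages written to "fd 2" land in it.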

Some tools and even some libraries try to guard against that. For instance, GNU tee will move the file descriptor to one above 2 if a file it opens for writing is assigned 0, 1 or 2, while busybox tee won't.
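So, assuming GNU coreutils tee, the earlier pipeline should behave more sanely (a sketch, not verified on every version):

$ echo text | tee file >&-
$ cat file

Here file should end up with a single copy of the line, since GNU tee moves the descriptor it got for file out of the 0-2 range; tee will still most likely complain on stderr that it cannot write to standard output.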

Most tools, if they can't write to stdout (because, for instance, it's not open), will report an error message on stderr, in the language of the user, which means extra processing to open and parse localisation files... So it will be significantly less efficient, and it may cause the program to fail.
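That error path is easy to trigger with the same kind of command as in the question (GNU cat assumed; the exact wording is locale-dependent):

$ cat mybigfile >&-

This typically fails with something like cat: write error: Bad file descriptor on stderr: the write() is still attempted, fails with EBADF, and cat then does the extra work of formatting and localising that message.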

In any case, it won't be more efficient: the program will still do a write() system call. It could only be more efficient if the program gave up writing to stdout/stderr after the first failed write(), but programs generally don't do that. They generally either exit with an error or keep on trying.
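For example, a loop like this (with bash; other shells word the error differently) is the "keep on trying" case, and you would typically get something like:

$ for i in 1 2 3; do echo "$i"; done >&-
bash: echo: write error: Bad file descriptor
bash: echo: write error: Bad file descriptor
bash: echo: write error: Bad file descriptor

Every iteration still attempts a write(), and each of those error messages is a write() to stderr that >/dev/null would never have needed, which is the opposite of a saving.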