First, note that the syntax for closing is 5>&- or 6<&-, depending on whether the file descriptor is open for writing or for reading. There seems to be a typo or formatting glitch in that blog post.
Here's the commented script.
exec 5>/tmp/foo # open /tmp/foo for writing, on fd 5
exec 6</tmp/bar # open /tmp/bar for reading, on fd 6
cat <&6 | # call cat, with its standard input connected to
# what is currently fd 6, i.e., /tmp/bar
while read a; do #
echo $a >&5 # write to fd 5, i.e., /tmp/foo
done #
There's no closing here. Because all the inputs and outputs are going to the same place in this simple example, the use of extra file descriptors is not necessary. You could write
cat </tmp/bar |
while read a; do
echo $a
done >/tmp/foo
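If you do want to close the descriptors explicitly, the closing syntax from the top of the answer fits in like this — a sketch using the same /tmp/foo and /tmp/bar files, and assuming /tmp/bar already exists:

```shell
exec 5>/tmp/foo        # open /tmp/foo for writing, on fd 5
exec 6</tmp/bar        # open /tmp/bar for reading, on fd 6
while read a <&6; do   # read each line from fd 6
  echo "$a" >&5        # write it to fd 5
done
exec 5>&-              # close fd 5 (opened for writing)
exec 6<&-              # close fd 6 (opened for reading)
```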
Using explicit file descriptors becomes useful when you want to write to multiple files in turn. For example, consider a script that writes data to an output file, logging information to a log file, and possibly error messages as well. That means three output channels: one for data, one for logs, and one for errors. Since there are only two standard descriptors for output, a third is needed. You can call exec to open the output files:
exec >data-file
exec 3>log-file
echo "first line of data"
echo "this is a log line" >&3
…
if something_bad_happens; then echo error message >&2; fi
exec >&- # close the data output file
echo "output file closed" >&3
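Run as a whole, that fragment looks like this (the mktemp scratch directory is just for the sketch; the file names are the same as above):

```shell
dir=$(mktemp -d)            # scratch directory for the demonstration
(
  exec >"$dir/data-file"    # stdout now goes to the data file
  exec 3>"$dir/log-file"    # fd 3 goes to the log file
  echo "first line of data"
  echo "this is a log line" >&3
  exec >&-                  # close the data output file
  echo "output file closed" >&3
)
cat "$dir/log-file"         # shows the two log lines
```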
The remark about efficiency comes in when you have a redirection in a loop, like this (assume the file is empty to begin with):
while …; do echo $a >>/tmp/bar; done
At each iteration, the program opens /tmp/bar, seeks to the end of the file, appends some data and closes the file. It is more efficient to open the file once and for all:
while …; do echo $a; done >/tmp/bar
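The two forms produce the same file; only the number of open/close calls differs. A quick sketch to convince yourself (the loop body and file names are illustrative):

```shell
rm -f /tmp/per-line /tmp/once
for a in 1 2 3; do echo "$a" >>/tmp/per-line; done   # opens and closes the file 3 times
for a in 1 2 3; do echo "$a"; done >/tmp/once        # opens the file once
cmp /tmp/per-line /tmp/once && echo "same contents"
```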
When there are multiple redirections happening at different times, calling exec to perform redirections rather than wrapping a block in a redirection becomes useful.
exec >/tmp/bar
while …; do echo $a; done
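For instance, a script can switch where its standard output goes partway through — something a single wrapping redirection can't express. A sketch (file names are illustrative):

```shell
dir=$(mktemp -d)
(
  exec >"$dir/first"    # stdout goes to the first file
  echo "early output"
  exec >"$dir/second"   # later, stdout is redirected again
  echo "late output"
)
```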
You'll find several other examples of redirection by browsing the io-redirection tag on this site.
According to the kernel documentation, /proc/sys/fs/file-max is the maximum, total, global number of file descriptors the kernel will allocate before choking. This is the kernel's limit, not your current user's. So you can open 590432, provided you're alone on an idle system (single-user mode, no daemons running).

Note that the documentation is out of date: the file has been /proc/sys/fs/file-max for a long time. Thanks to Martin Jambon for pointing this out.
The difference between soft and hard limits is answered here, on SE. You can raise or lower a soft limit as an ordinary user, provided you don't overstep the hard limit. You can also lower a hard limit (but you can't raise it again for that process). As the superuser, you can raise and lower both hard and soft limits. The dual limit scheme is used to enforce system policies, but also allow ordinary users to set temporary limits for themselves and later change them.
Note that if you try to lower a hard limit below the soft limit (and you're not the superuser), you'll get EINVAL back (Invalid argument).
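You can see this in a throwaway subshell, so the parent shell's limits are untouched (the numbers are arbitrary; this assumes the current hard limit on open files is at least 64):

```shell
(
  ulimit -Sn 64                # lowering the soft limit on open files: allowed
  if ulimit -Hn 32 2>/dev/null; then
    echo "hard limit lowered"
  else
    echo "refused: hard limit below soft limit"
  fi
)
```

On a typical system the second call fails with EINVAL, so the "refused" branch is taken.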
So, in your particular case, ulimit (which is the same as ulimit -Sf) says you don't have a soft limit on the size of files written by the shell and its subprocesses. (That's probably a good idea in most cases.)
Your other invocation, ulimit -Hn, reports on the -n limit (maximum number of open file descriptors), not the -f limit, which is why the soft limit seems higher than the hard limit. If you enter ulimit -Hf you'll also get unlimited.
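To compare the four values side by side (the exact numbers vary by system):

```shell
ulimit -Sf    # soft limit on file size, typically unlimited
ulimit -Hf    # hard limit on file size, typically unlimited
ulimit -Sn    # soft limit on open file descriptors, e.g. 1024
ulimit -Hn    # hard limit on open file descriptors
```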
This code snippet opens /dev/console. The resulting file descriptor is the lowest-numbered file descriptor that isn't already open. If that number is at most 2, the loop is executed again. If that number is 3 or above, the descriptor is closed and the loop stops.

When the loop finishes, file descriptors 0 to 2 (stdin, stdout and stderr) are guaranteed to be open. Either they were open before, and may be connected to any file, or they've just been opened, and they're connected to /dev/console.

The choice of /dev/console is strange. I would have expected /dev/tty, which is always the controlling terminal associated with the process group of the calling process. This is one of the few files that the POSIX standard requires to exist. /dev/console is the system console, which is where syslog messages sent to the console go; it isn't useful for a shell to care about this.