There is no standard way to retrieve the list of configuration variables that are supported on a system. If you program for a given POSIX version, the list in that version of the POSIX specification is your reference list. On Linux, getconf -a lists all available variables.
fpathconf isn't specific to PATH. It's about variables that are related to files, which are the ones that may vary from file to file.
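For instance, a quick sketch of querying both kinds of variables from the shell (NAME_MAX and ARG_MAX are standard variable names; the pathname /tmp is just an example):

```shell
# File-related variables (the fpathconf() kind) take a pathname,
# since the answer may differ per filesystem:
getconf NAME_MAX /tmp

# System-wide variables (the sysconf() kind) take no pathname:
getconf ARG_MAX
```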
Regarding ARG_MAX on Linux, the rationale for depending on the stack size is that the arguments end up on the stack, so there had better be enough room for them plus everything else that must fit. Most other implementations (including older versions of Linux) have a fixed size.
Most limits go together with resource availability, with different resources depending on the limit. For example, a process may be unable to open a file even if it has fewer than OPEN_MAX files open, if the system is out of memory that can be used for the file-related data.
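You can compare the compiled-in constant with the live per-process limit yourself; a small sketch (on Linux, ulimit -n shows the soft limit on open files, which is usually what bites first):

```shell
# What the C library reports for this process via sysconf():
getconf OPEN_MAX

# The shell's soft resource limit on open file descriptors;
# this can be lower than what the headers promise:
ulimit -n
```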
Linux is POSIX-compliant on this point by default, so I don't know what you're getting at.
If you use ulimit -s to restrict the stack size to less than ARG_MAX, you're making the system no longer compliant. A POSIX system can typically be made non-compliant in any number of ways, including PATH=/nowhere (making all standard utilities unavailable) or rm -rf /.
The value of ARG_MAX in limits.h provides a minimum that applications can rely on. A POSIX-compliant system is allowed to let execve succeed even if the arguments exceed that size. The guarantee related to ARG_MAX is that if the arguments fit in that size then execve will not fail due to E2BIG.
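On Linux you can observe the relationship between the two values directly; a sketch (the factor of four is current Linux behavior as described above, not anything POSIX guarantees):

```shell
# Stack soft limit, in KiB:
ulimit -s

# ARG_MAX in bytes; on current Linux this is derived from the
# stack limit (roughly a quarter of it), not a fixed constant:
getconf ARG_MAX
```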
Why does the script need to redirect, to a file descriptor inherited by the subshell, a copy of its own contents rather than, say, the contents of some other file?
You could use any file, as long as all copies of the script use the same one.
Using $0
just ties the lock to the script itself: If you copy the script and modify it for some other use, you don't need to come up with a new name for the lock file. This is convenient.
If the script is called through a symlink, the lock is on the actual file, and not the link.
(Of course, if some process runs the script and gives it a made up value as the zeroth argument instead of the actual path, then this breaks. But that's rarely done.)
(I tried using a different file and re-running as above, and the execution order changed)
Are you sure that was because of the file used, and not just random variation? As with a pipeline, there's really no way to be sure in what order the commands get to run in cmd1 & cmd2. It's mostly up to the OS scheduler. I get random variation on my system.
Why does the script need to redirect, to a file descriptor inherited by the subshell, a copy of a file's contents, anyway?
It looks like that's so that the shell itself holds a copy of the file description holding the lock, instead of just the flock utility holding it. A lock made with flock(2) is released when the file descriptors having it are closed.
flock has two modes: either it takes a lock based on a file name and runs an external command (in which case flock itself holds the required open file descriptor), or it takes a file descriptor from the outside, so an outside process is responsible for holding it.
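A sketch of both modes (the lock file /tmp/demo.lck is a made-up name for illustration):

```shell
# Mode 1: flock opens the file itself and holds the descriptor
# for as long as the command it runs is alive:
flock /tmp/demo.lck -c 'echo "in the critical section"'

# Mode 2: the shell opens and holds the descriptor; flock only
# places the lock on it:
exec 9> /tmp/demo.lck
flock -n -x 9 && echo "got the lock on fd 9"
exec 9>&-    # closing the descriptor releases the lock
```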
Note that the contents of the file are not relevant here, and there are no copies made. The redirection to the subshell doesn't copy any data around in itself, it just opens a handle to the file.
Why does holding an exclusive lock on file descriptor 0 in one shell prevent a copy of the same script, running in a different shell, from getting an exclusive lock on file descriptor 0? Don't shells have their own, separate copies of the standard file descriptors (0, 1, and 2, i.e. STDIN, STDOUT, and STDERR)?
Yes, but the lock is on the file, not the file descriptor. Only one opened instance of the file can hold the lock at a time.
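You can see this even within a single process: two independent opens of the same file are treated as separate lockers by flock(2), so a second exclusive lock fails while the first is held. A sketch (/tmp/demo.lck is a made-up name):

```shell
exec 8> /tmp/demo.lck    # first open
exec 9> /tmp/demo.lck    # second, independent open of the same file
flock -n -x 8 && echo "fd 8 got the lock"
flock -n -x 9 || echo "fd 9 is refused: the file is already locked"
```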
I think you should be able to do the same without the subshell, by using exec to open a handle to the lock file:
$ cat lock.sh
#!/bin/sh
exec 9< "$0"
if ! flock -n -x 9; then
    echo "$$/$1 cannot get flock"
    exit 0
fi
echo "$$/$1 got the lock"
sleep 2
echo "$$/$1 exit"
$ ./lock.sh bg & ./lock.sh fg ; wait; echo
[1] 11362
11363/fg got the lock
11362/bg cannot get flock
11363/fg exit
[1]+ Done ./lock.sh bg
Best Answer
When you use process substitution with <(...) or >(...), bash will open a pipe to the other program on an arbitrary high file descriptor (I think it used to count up from 10, but now it counts down from 63) and pass the name as /dev/fd/N on the command line of the first program. This isn't POSIX, but other shells also support it (it's a ksh88 feature). That's not exactly a feature of the program you're running, though; it just sees /dev/fd/N and tries to open it like a regular file.
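A sketch showing what the program actually receives (run under bash explicitly, since process substitution is not POSIX sh):

```shell
# The substituted command shows up as a /dev/fd/N path argument;
# this prints something like /dev/fd/63:
bash -c 'echo <(:)'

# The receiving program just opens that path like any other file:
bash -c 'cat <(echo hello)'
```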
The Autoconf manual mentions some historical notes on this.
Also, while doing a Google search for this, I found a program called runit that uses file descriptors 4 and 5 for some purpose related to log rotation.