This would be hacky, but if it's a dynamically linked executable, you could set up a global preload in /etc/ld.so.preload that triggers a logging hook only when it detects it is running inside the right executable.
Something like:
#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <syslog.h>

#define TARGET "/some_executable"

__attribute__((constructor))
static void
logger(int argc, char **argv)
{
    /* catch our own argv right here and the parent's later from /proc */
    /* one spare byte so a longer path can't be truncated into a match */
    static char buf[sizeof(TARGET) + 1];
    ssize_t n = readlink("/proc/self/exe", buf, sizeof(buf) - 1);

    if (n > 0 && 0 == strcmp(TARGET, buf)) {
        /* ... */
        syslog(LOG_INFO, "%s started (pid %d)", TARGET, (int) getpid());
    }
}
The obvious disadvantage of this approach is it would slightly delay the execution of each dynamically linked executable on your system, but my measurements indicate the delay is quite small (<1ms where fork+exec costs about 2ms).
As for the dropped permission problem, you could have a small setuid-root binary that will unconditionally read and echo its grandparent's /proc files (the status file, most likely), possibly if and only if its parent is the executable whose parents you want to log. You could then spawn that setuid executable inside your logging hook to obtain the info on the executable's parent (grandparent of the setuid helper).
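As an illustrative sketch (plain shell, without the setuid part), this is the walk the helper would do: follow the PPid field two levels up in /proc and dump the grandparent's status file. The fallback line is an assumption for the case where we run out of ancestors.

```shell
# Our parent's pid, from our own status file:
ppid=$(awk '/^PPid:/ {print $2}' "/proc/$$/status")
# The grandparent's pid, from the parent's status file:
gppid=$(awk '/^PPid:/ {print $2}' "/proc/$ppid/status")
[ "$gppid" -gt 0 ] || gppid=$ppid   # ran out of ancestors; fall back

cat "/proc/$gppid/status" > /tmp/grandparent.status
head -n 1 /tmp/grandparent.status   # first line is "Name:<tab><command>"
```

In the real helper this would run setuid root, so it can read status files of processes owned by other users.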
Why does the script need to redirect, to a file descriptor inherited by the subshell, a copy of its own contents rather than, say, the contents of some other file?
You could use any file, as long as all copies of the script use the same one.
Using $0 just ties the lock to the script itself: if you copy the script and modify it for some other use, you don't need to come up with a new name for the lock file. This is convenient.
If the script is called through a symlink, the lock is on the actual file, and not the link.
(Of course, if some process runs the script and gives it a made up value as the zeroth argument instead of the actual path, then this breaks. But that's rarely done.)
(I tried using a different file and re-running as above, and the execution order changed)
Are you sure that was because of the file used, and not just random variation? As with a pipeline, there's really no way to be sure in what order the commands in cmd1 & cmd2 get to run. It's mostly up to the OS scheduler. I get random variation on my system.
Why does the script need to redirect, to a file descriptor inherited by the subshell, a copy of a file's contents, anyway?
It looks like that's so that the shell itself holds a copy of the file description holding the lock, instead of just the flock utility holding it. A lock made with flock(2) is released when the file descriptors referring to it are closed.
flock has two modes: either take a lock based on a file name and run an external command (in which case flock itself holds the required open file descriptor), or take a file descriptor from the outside, so that an outside process is responsible for holding it.
Note that the contents of the file are not relevant here, and there are no copies made. The redirection to the subshell doesn't copy any data around in itself, it just opens a handle to the file.
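The two modes can be sketched side by side (the lock file name is an arbitrary example):

```shell
# 1. Name mode: flock opens the lock file itself and holds the open
#    descriptor for as long as the command it runs is alive.
flock /tmp/demo.lock -c 'echo "name mode ran under the lock"'

# 2. Descriptor mode: the caller opens the file and hands flock the
#    fd; the lock lives exactly as long as the caller keeps fd 9 open.
exec 9> /tmp/demo.lock
flock -n -x 9 && echo "fd mode took the lock"
exec 9>&-   # closing the descriptor releases the lock
```

In the script being discussed, the subshell's inherited descriptor plays the role of fd 9 here: the lock survives as long as that descriptor stays open.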
Why does holding an exclusive lock on file descriptor 0 in one shell prevent a copy of the same script, running in a different shell, from getting an exclusive lock on file descriptor 0? Don't shells have their own, separate copies of the standard file descriptors (0, 1, and 2, i.e. STDIN, STDOUT, and STDERR)?
Yes, but the lock is on the file, not on any particular file descriptor. Only one open file description on that file can hold an exclusive lock at a time.
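You can see this with two independent descriptors onto the same file, even within a single shell (path is illustrative): the second exclusive lock is refused because the first open file description already holds one.

```shell
exec 8> /tmp/demo2.lock
exec 9> /tmp/demo2.lock     # a separate open file description, same file

flock -n -x 8 && echo "fd 8 got the exclusive lock"
flock -n -x 9 || echo "fd 9 is refused: the file is already locked"

exec 8>&- 9>&-              # closing both releases the lock
```

This is the documented flock(2) behavior: descriptors obtained by separate open() calls are treated independently, so a process can even be denied a lock it already holds through another descriptor.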
I think you should be able to do the same without the subshell, by using exec to open a handle to the lock file:
$ cat lock.sh
#!/bin/sh
exec 9< "$0"
if ! flock -n -x 9; then
    echo "$$/$1 cannot get flock"
    exit 0
fi
echo "$$/$1 got the lock"
sleep 2
echo "$$/$1 exit"
$ ./lock.sh bg & ./lock.sh fg ; wait; echo
[1] 11362
11363/fg got the lock
11362/bg cannot get flock
11363/fg exit
[1]+ Done ./lock.sh bg
Best Answer
There are the standard utilities true and false. The first does nothing but return an exit status of 0, indicating success; the second does nothing but return a non-zero value, indicating failure (*). You probably want the first one.

Though some systems that really want you to enter some text (commit messages, etc.) will check if the "edited" file was actually modified, and just running true wouldn't fly in that case. Instead, touch might work; it updates the timestamps of any files it gets as arguments.

However, if the editor gets any arguments other than the filename, touch would create those as files. Many editors support an argument like +NNN to tell them the initial line to put the cursor on, and so the editor may be called as $EDITOR +123 filename.txt. (E.g. less does this; git doesn't seem to.)

Note that you'll want to use true, not e.g. /bin/true. First, if there's a shell involved, specifying the command without a path allows the shell to use a builtin implementation, and if a shell is not used, the binary will be found in PATH anyway. Second, not all systems have /bin/true; e.g. on macOS, it's /usr/bin/true. (Thanks @jpaugh.)

(* Or as the GNU man page puts it, false "[does] nothing, unsuccessfully". Thanks @8bittree.)