Here's another way to do locking in a shell script that prevents the race condition you describe above, where two jobs may both pass line 3. The noclobber
option works in ksh and bash. Don't use set noclobber
because you shouldn't be scripting in csh/tcsh. ;)
lockfile=/var/tmp/mylock

if ( set -o noclobber; echo "$$" > "$lockfile" ) 2> /dev/null; then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT

    # do stuff here

    # clean up after yourself, and release your trap
    rm -f "$lockfile"
    trap - INT TERM EXIT
else
    echo "Lock Exists: $lockfile owned by $(cat "$lockfile")"
fi
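To see why this is race-free: with noclobber set, the `>` redirection fails if the target file already exists, and that create-or-fail step is atomic in the filesystem. A quick sketch of the behavior (the demo path is illustrative):

```shell
#!/bin/sh
# Demo: with noclobber, "create file with >" fails if the file exists.
demo=/var/tmp/noclobber_demo
rm -f "$demo"

# First writer wins...
if ( set -o noclobber; echo "$$" > "$demo" ) 2> /dev/null; then
    echo "acquired"
fi

# ...second writer fails, because the file is already there.
if ( set -o noclobber; echo "$$" > "$demo" ) 2> /dev/null; then
    echo "acquired again (should not happen)"
else
    echo "already locked"
fi

rm -f "$demo"
```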
YMMV with locking on NFS (you know, when NFS servers are not reachable), but in general it's much more robust than it was ten years ago.
If you have cron jobs that do the same thing at the same time from multiple servers, but you only need one instance to actually run, then something like this might work for you.
I have no experience with lockrun, but having a pre-set lock environment prior to the script actually running might help. Or it might not. You're just moving the lockfile test outside your script into a wrapper, and theoretically, couldn't you hit the same race condition if two jobs were called by lockrun at exactly the same time, just as with the 'inside-the-script' solution?
File locking is pretty much honor-system behavior anyway, and any scripts that don't check for the lockfile's existence prior to running will do whatever they're going to do. Just by putting in the lockfile test, and proper behavior, you'll be solving 99% of potential problems, if not 100%.
If you run into lockfile race conditions a lot, it may be an indicator of a larger problem, such as your jobs not being timed right; or, if the interval matters less than the job completing, perhaps your job is better suited to being daemonized.
EDIT BELOW - 2016-05-06 (if you're using KSH88)
Based on @Clint Pachl's comment below, if you use ksh88, use mkdir
instead of noclobber
. This mostly mitigates the potential race condition but doesn't entirely eliminate it (though the risk is minuscule). For more information, read the link that Clint posted below.
lockdir=/var/tmp/mylock
pidfile=/var/tmp/mylock/pid

if mkdir "$lockdir" 2> /dev/null; then
    echo "$$" > "$pidfile"
    trap 'rm -rf "$lockdir"; exit $?' INT TERM EXIT

    # do stuff here

    # clean up after yourself, and release your trap
    rm -rf "$lockdir"
    trap - INT TERM EXIT
else
    echo "Lock Exists: $lockdir owned by $(cat "$pidfile")"
fi
And, as an added advantage, if you need to create tmpfiles in your script, you can use the lockdir directory for them, knowing they will be cleaned up when the script exits.
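For instance, a sketch of that pattern (the mktemp template name is an assumption):

```shell
#!/bin/sh
# Sketch: temp files placed under the lock directory are removed
# automatically by the same cleanup that releases the lock.
lockdir=/var/tmp/mylock

if mkdir "$lockdir" 2> /dev/null; then
    trap 'rm -rf "$lockdir"; exit $?' INT TERM EXIT

    # Any scratch file created here lives and dies with the lock.
    tmpfile=$(mktemp "$lockdir/scratch.XXXXXX")
    echo "intermediate data" > "$tmpfile"
    # ... do stuff with "$tmpfile" ...

    rm -rf "$lockdir"
    trap - INT TERM EXIT
fi
```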
For more modern bash, the noclobber method at the top should be suitable.
Start your bash script with bash -x ./script.sh
or add set -x
inside your script to see debug output.
Additionally, with bash
4.1 or later:
If you want to write the debug output to a separate file, add this to your script:
exec 5> debug_output.txt
BASH_XTRACEFD="5"
See: https://stackoverflow.com/a/25593226/3776858
If you want to see line numbers, add this:
PS4='$LINENO: '
If you have access to the
logger
command, you can use it to write debug output via your syslog with timestamp, script name, and line number:
#!/bin/bash
exec 5> >(logger -t "$0")
BASH_XTRACEFD="5"
PS4='$LINENO: '
set -x
# Place your code here
You can use the -p
option of the logger
command to set an individual facility and level, so the output is written via local syslog to its own logfile.
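A sketch of that (the local0.debug facility/level and the syslog rule are assumptions; adjust them to your syslog configuration):

```shell
#!/bin/bash
# Route xtrace output to syslog facility local0 at debug level.
# With a syslog rule such as:  local0.debug  /var/log/script_debug.log
# (an assumed example), the trace ends up in its own logfile.
exec 5> >(logger -p local0.debug -t "$(basename "$0")")
BASH_XTRACEFD="5"
PS4='$LINENO: '
set -x

echo "doing work"   # traced with its line number via syslog
```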
Best Answer
This page lists two useful
csh
switches: -v
to show each command before variables are substituted, and -x
to show them afterwards.
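The same two switches exist in bash and POSIX sh as well; a minimal sketch:

```shell
#!/bin/bash
# -v echoes each input line as read (before substitution);
# -x echoes each command after expansion (prefixed with PS4, default "+ ").
set -vx
name="world"
echo "hello $name"
set +vx
```

Running this, stderr shows the literal line `echo "hello $name"` (from -v) followed by the expanded command (from -x).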