I'm using launchctl to load/start my Python script and it works to a certain degree. It launches every 120s, but sometimes my script takes 500s to run, and my theory is that if a process is already running, launchd restarts it rather than letting the first one finish.
What I think happens:
– launch tester.py (tester.py estimated time to complete 400s)
– after 120s
– launch tester.py again and abandon the first one
What I want:
To let the first tester.py finish, not restart it.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>BuildNotification.py</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/python</string>
        <string>/Users/xcuer/tester.py</string>
    </array>
    <key>StartInterval</key>
    <integer>120</integer>
    <key>TimeOut</key>
    <integer>7200</integer>
    <key>ExitTimeOut</key>
    <integer>7200</integer>
</dict>
</plist>
Best Answer
launchd
focuses on launching jobs and keeping them running; it does not have a mechanism to handle overlapping jobs.
Lock File
Traditionally in a UNIX environment, a lock file is used to stop processes from being run multiple times.
The core steps are:
– check whether the lock file already exists; if it does, another instance is still running, so exit
– create the lock file
– run the job
– remove the lock file when the job finishes
On macOS, create your lock file in
/var/tmp
for computer-wide processes.
Sample Implementation
See Quick-and-dirty way to ensure only one instance of a shell script is running at a time and What is the best way to ensure only one instance of a Bash script is running? for sample scripts.
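The linked scripts are shell-based; since the job here is a Python script, the same idea can live inside tester.py itself. A minimal sketch using an advisory fcntl.flock lock on the file (rather than testing for its existence, which also sidesteps stale-lock cleanup) — the lock path and do_work are hypothetical:

```python
import fcntl
import sys
import time

# Hypothetical lock-file path; /var/tmp suits machine-wide jobs on macOS.
LOCK_PATH = "/var/tmp/tester.lock"

def acquire_lock():
    """Try to take an exclusive, non-blocking lock on LOCK_PATH.
    Returns the open file object on success (keep it open for the
    lifetime of the job), or None if another instance holds the lock."""
    f = open(LOCK_PATH, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except BlockingIOError:
        f.close()
        return None

def do_work():
    time.sleep(1)  # stand-in for the real ~400 s job

def main():
    lock = acquire_lock()
    if lock is None:
        sys.exit(0)  # a previous run is still going; let it finish
    try:
        do_work()
    finally:
        lock.close()  # closing the descriptor releases the flock

if __name__ == "__main__":
    main()
```

Because the lock is released automatically when the process exits, a crashed run can never leave a stale lock behind.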
Potential Problems
There are edge cases.
launchd
wants jobs to run for at least n seconds before finishing. When the script finds an existing lock file, consider sleeping for n seconds and then exiting.
What happens if your script is killed or exits because of an error? Can you be certain the lock file is removed?
In C, a trick to ensure a file's removal is to create, open, and immediately delete the file – on UNIX, a deleted file that is still held open remains available until the opening process exits, and it is cleaned up even if the process crashes.
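The same create-open-delete trick is available from Python: unlink the file while its descriptor stays open, and the inode survives until the process exits — crash included.

```python
import os
import tempfile

# Create and open a file, then delete its directory entry straight away.
fd, path = tempfile.mkstemp(prefix="tester-lock-")
os.unlink(path)  # the name is gone, but the inode lives while fd is open

# The descriptor still works: reads and writes go to the anonymous inode.
os.write(fd, b"still usable")
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 12)

assert data == b"still usable"
assert not os.path.exists(path)  # no other process can reopen it by name

os.close(fd)  # storage is reclaimed here, or when the process dies
```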
In a shell script, catch the terminate signal and ensure the file is removed.
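Since the job here is Python rather than a shell script, the equivalent is a SIGTERM handler plus an atexit hook (the lock path is hypothetical):

```python
import atexit
import os
import signal
import sys

LOCK_PATH = "/var/tmp/tester.lock"  # hypothetical lock-file path

def remove_lock():
    """Delete the lock file; safe to call even if it is already gone."""
    try:
        os.unlink(LOCK_PATH)
    except FileNotFoundError:
        pass

def handle_term(signum, frame):
    # sys.exit() runs the atexit hooks, so the lock is still cleaned up.
    sys.exit(1)

atexit.register(remove_lock)                # normal exit and sys.exit()
signal.signal(signal.SIGTERM, handle_term)  # launchd sends SIGTERM to stop a job
```

This covers a clean exit, an unhandled exception, and a SIGTERM — but not SIGKILL, which cannot be caught.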
Another potential problem is the existence check itself: the check could happen a fraction of a second before the previous script removes its lock file and exits. This will be rare, but it is possible. The approach quoted above claims to overcome this.
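One way to close that window is to make the check and the creation a single atomic step: os.open with O_CREAT | O_EXCL either creates the file or fails, and the kernel guarantees at most one caller can win (the path is hypothetical):

```python
import os

LOCK_PATH = "/var/tmp/tester-excl.lock"  # hypothetical lock-file path

def try_create_lock(path=LOCK_PATH):
    """Atomically create the lock file. Returns True if we got the lock,
    False if it already existed (another instance is running)."""
    try:
        # O_EXCL makes creation fail if the file already exists; the
        # test-and-create happens as one operation inside the kernel.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
    except FileExistsError:
        return False
    os.write(fd, str(os.getpid()).encode())  # record who holds the lock
    os.close(fd)
    return True
```

Unlike a separate exists-then-create check, two scripts racing on this call can never both succeed.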