Here is a list of all executables that come with TortoiseSVN 1.7.4:
ConnectVPN.exe
SubWCRev.exe
SubWCRevCOM.exe
svn.exe
svnadmin.exe
svndumpfilter.exe
svnlook.exe
svnrdump.exe
svnserve.exe
svnsync.exe
svnversion.exe
TortoiseBlame.exe
TortoiseIDiff.exe
TortoiseMerge.exe
TortoisePlink.exe
TortoiseProc.exe
TortoiseUDiff.exe
TSVNCache.exe
I find it unlikely that you would have to focus on most of those, though. Usually you will only see TortoiseProc (for commit/update operations) and TSVNCache, which keeps track of file states (to display the icon overlays in Explorer).
Why are you even going after TSVN processes? Usually they shouldn't cause any trouble.
Ideally, I would like to not modify the configuration file above.
Tough! It's the right thing to do.
You need to change your exec stanza into a script stanza, and stop running that Python program in a forked subprocess as part of a pipeline. This ServerFault answer explains how to do this in an embedded shell script. I'd make just one change to the script given there, in the last line:
exec python -u /opt/XYZ/my_prog.py 2>&1
There's no really good reason not to log standard error too, after all.
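For reference, the script stanza described there follows this general shape (a sketch reconstructed from the description above, not a verbatim copy; the FIFO path is hypothetical, and the last line already incorporates the 2>&1 change):

```
script
    # Named pipe carries the program's combined output to logger.
    mkfifo /tmp/my_prog-log-fifo
    # Reader runs in a forked subshell, detached from the job.
    ( logger -t my_prog.py < /tmp/my_prog-log-fifo & )
    # The daemon itself replaces the shell; stdout and stderr both
    # go into the pipe, so nothing is lost.
    exec python -u /opt/XYZ/my_prog.py > /tmp/my_prog-log-fifo 2>&1
end script
```

It is exactly this forked subshell and leftover FIFO that the toolset-based scripts below do away with.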
Ever more complex gyrations to cope with forking, from expect daemon to switching to systemd, miss the point that the right thing to do is to stop the daemon from forking. If there's one good thing to come out of the current kerfuffle, it's the continued confirmation that what IBM wrote and recommended in 1995 has been right all these years.
Get used to the idea of chain loading daemons. There are plenty of toolsets that make such things simple. Get used to the idea of not using shell scripts, too. There are plenty of toolsets that are designed specifically for this work, that eliminate the overheads of shells (which is a known good idea in the Ubuntu world).
For example: the shell commands in the ServerFault answer can be replaced with a script that uses Laurent Bercot's execline tools, which are designed to do this very thing without subshells and unlinked FIFOs:
#!/command/execlineb -PW
pipeline -w {
logger -t my_prog.py
}
fdmove -c 2 1
python -u /opt/XYZ/my_prog.py
which you would then simply exec /foo/this_execlineb_script
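To see what the script does, here is the equivalent redirection written in plain shell (a sketch for illustration only; cat stands in for logger -t my_prog.py so the effect is visible, and echo stands in for the Python program):

```shell
#!/bin/sh
# fdmove -c 2 1 copies file descriptor 1 onto 2, i.e. stderr is
# duplicated onto stdout; pipeline -w then feeds that combined
# stream to the logging process. In shell terms:
{ echo "to stdout"; echo "to stderr" 1>&2; } 2>&1 | cat
```

Both lines come out of cat, demonstrating that stderr rides along with stdout into the logging side of the pipe.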
With my nosh toolset, it would similarly be a script containing:
#!/usr/local/bin/nosh
pipe
fdmove -c 2 1
python -u /opt/XYZ/my_prog.py | logger -t my_prog.py
Or alternatively, one could have this stanza directly in the Upstart job definition (using a trick to avoid shell metacharacters so that Upstart doesn't spawn a shell):
exec /usr/local/bin/exec pipe --separator SPLIT fdmove -c 2 1 python -u /opt/XYZ/my_prog.py SPLIT logger -t my_prog.py