The GNU coreutils timeout command is extremely handy in certain scripting situations: it lets you use the output of a command if it finishes quickly, and skip it if it would take too long.
How can I approximate the basic behavior of timeout using only POSIX-specified utilities?
(I'm thinking it may involve a combination of wait, sleep, kill, and who knows what else, but perhaps I'm missing an easier approach.)
Best Answer
My approach would be this one:
1. Execute the command as background process 1.
2. Execute a "watchdog timer" as background process 2.
3. Set up a handler in the parent shell to trap a termination signal.
4. Wait for both processes to complete; whichever terminates first sends the termination signal to the parent.
5. The parent's trap handler kills both background processes via job control (one of them has already terminated by definition, but that kill is a harmless no-op because we are not using PIDs; see below).
I tried to circumvent the possible race condition addressed in the comments by using the shell's job control IDs (which would be unambiguous within this shell instance) to identify the background processes to kill, instead of system PIDs.
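A minimal sketch of those steps in plain sh follows. The long-running command is stood in by sleep 3 and TIMEOUT=1 so the watchdog fires first; both are illustrative placeholders, not part of the original answer:

```shell
#!/bin/sh
# Sketch of a POSIX-only timeout, assuming the "command" is `sleep 3`.
TIMEOUT=1
start=$(date +%s)

# Step 3: on TERM, kill both background jobs via job IDs %1/%2, not PIDs.
# The job that already exited makes its kill a harmless no-op.
trap 'kill %1 %2 2>/dev/null' TERM

# Step 1: the command as background job 1; it TERMs the parent when done.
{ sleep 3; kill -s TERM $$; } &

# Step 2: the watchdog timer as background job 2; same signal on expiry.
{ sleep "$TIMEOUT"; kill -s TERM $$; } &

# Step 4: wait for both jobs; whichever finishes first fires the trap.
# (wait returns >128 when interrupted by the signal, so tolerate that.)
wait || :

elapsed=$(( $(date +%s) - start ))
echo "finished after ${elapsed}s"
```

With TIMEOUT=1 the watchdog signals the parent after about a second, the trap kills the still-running command job, and the script returns well before the three seconds the command would have taken.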
Result for TIMEOUT=10 (command terminates before watchdog):

Result for TIMEOUT=1 (watchdog terminates before command):

Result for TIMEOUT=5 (watchdog and command terminate "almost" simultaneously):