I have some conditions that must pass before a background job runs:
condition-command && condition-command && background-job &
The problem is: I want the conditions to run in the foreground, with only the job in the background, as if I had run:
condition-command; condition-command; background-job &
But then there is no condition: if a previous command fails, I do not want the job to run.
I realised the whole thing becomes asynchronous, but it should not be; in my mind the two following scripts should behave the same, yet they do not:
sleep 2; echo foo & sleep 1; echo bar; wait # prints foo, then bar: correct
sleep 2 && echo foo & sleep 1; echo bar; wait # prints bar, then foo: bug
I know that if I tested the $? variable it would work, or if I put the last command inside a subshell (but then I would lose job control, and I want to avoid daemons). But I want to know why bash works this way. Where is it documented? Is there any way to prevent this behaviour?
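For reference, the $? workaround mentioned above can be sketched like this (a minimal sketch; `true` and the echo are stand-ins for the question's condition-command and background-job):

```shell
#!/bin/sh
# Stand-ins: `true` plays condition-command, the echo plays background-job.
true                      # condition-command runs in the foreground
if [ "$?" -eq 0 ]; then   # start the job only if the condition passed
    echo "job ran" &      # background-job stays a child of this shell,
fi
wait                      # so a plain `wait` still works on it
```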
Edit: Chained ifs are disgusting, which is why I will not accept them as an alternative.
Edit 2: I know a subshell is possible, but it will not work for me. Imagine I want to run a bunch of commands and then wait at the end. That would be possible if I checked for the existence of the /proc/$PID directory, but it would be a pain in the neck with several jobs.
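For illustration, the "bunch of jobs, then wait" pattern needs no /proc/$PID polling as long as every job stays a child of the current shell (a sketch with sleep/echo stand-ins for real work; the conditional start is omitted here):

```shell
#!/bin/sh
# Each `job N &` is a child of this shell, so one `wait` collects them all.
job() { sleep 1; echo "job $1 done"; }
job 1 &
job 2 &
job 3 &
wait            # blocks until all three children exit
echo "all done"
```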
Edit 3: The main question is WHY bash does this, and where it is documented. Whether or not there is a solution is a bonus!
Best Answer
If you don't want the background & to apply to the whole line, then use eval: now only the second command will be a background job, and it will be a proper background job that you can wait on.

As for why: this is documented in the Bash Reference Manual under "Lists of Commands". The operators && and || have higher precedence than ; and &, so a trailing & backgrounds the entire preceding AND-list, not just the last command. That is also why the ; version behaves differently: ; terminates the list before the job, leaving & to apply to the job alone.
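A minimal sketch of the eval approach, using `true`/`false` and echoes as stand-ins for the question's condition-command and background-job:

```shell
#!/bin/sh
# `&` would otherwise background the whole &&-list; quoting the job and
# its `&` inside an eval string backgrounds only the job itself.
true  && eval 'echo "job started" &'   # condition passes: job backgrounds
false && eval 'echo "never runs" &'    # condition fails: job never starts
wait                                   # the job is a real child: wait works
```

Because eval runs in the current shell, the job remains under this shell's job control, unlike a subshell wrapper.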