The commands within each group run in parallel, and the groups run sequentially: each group of parallel commands waits for the previous group to finish before starting.
Here is a working example. Assume three groups of commands, as in the code below. In each group, the three commands are started in the background with &, so they start almost simultaneously and run in parallel while the script waits for them to finish. After all three commands in the third group exit, command 10 executes.
$ cat command_groups.sh
#!/bin/sh
command() {
echo $1 start
sleep $(( $1 & 03 )) # keep the seconds value within 0-3
echo $1 complete
}
echo First Group:
command 1 &
command 2 &
command 3 &
wait
echo Second Group:
command 4 &
command 5 &
command 6 &
wait
echo Third Group:
command 7 &
command 8 &
command 9 &
wait
echo Not really a group, no need for background/wait:
command 10
$ sh command_groups.sh
First Group:
1 start
2 start
3 start
1 complete
2 complete
3 complete
Second Group:
4 start
5 start
6 start
4 complete
5 complete
6 complete
Third Group:
7 start
8 start
9 start
8 complete
9 complete
7 complete
Not really a group, no need for background/wait:
10 start
10 complete
$
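A related variation (a sketch, not part of the original answer): if you also need each background command's exit status, record the PIDs with $! and wait on them individually, since POSIX wait PID returns that job's exit status. The task function below is a hypothetical stand-in for a real command.

```shell
#!/bin/sh
# Sketch: record background PIDs and wait on each one to collect exit statuses.
task() { sleep "$1"; exit "$2"; }   # hypothetical helper: sleep, then exit with a given status

task 1 0 & pid1=$!
task 1 3 & pid2=$!

wait "$pid1"; echo "first task exited with $?"    # prints 0
wait "$pid2"; echo "second task exited with $?"   # prints 3
```

Plain wait with no arguments, as in the script above, waits for all jobs but discards their individual statuses; waiting per PID is the portable way to detect which group member failed.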
Let me start by saying that you could just inline all the stuff you have in scannew, since you're waiting anyway, unless you intend to scan again at some other point in your script. It's really the call to wc that you're concerned might take too long, and if it does, you can just terminate it. This is a simple way to set that up using trap, which lets you capture signals sent to a process and install your own handler for them:
#! /usr/bin/env bash
# print a line just before we run our subshell, so we know when that happens
printf "Let's do something foolish...\n"
# trap SIGINT since it will be sent to the entire process group and we only
# want the subshell killed
trap "" SIGINT
# run something that takes ages to complete
BAD_IDEA=$( trap "exit 1" SIGINT; ls -laR / )
# remove the trap because we might want to actually terminate the script
# after this point
trap - SIGINT
# if the script gets here, we know only `ls` got killed
printf "Got here! Only 'ls' got killed.\n"
exit 0
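As an alternative to the trap/subshell approach above (a sketch, assuming GNU coreutils is available): the timeout utility bounds a command's run time directly, sending SIGTERM when the limit expires and returning exit status 124 when it had to kill the command. Here sleep 10 stands in for the slow command.

```shell
#!/usr/bin/env bash
# Sketch: bound a slow command with coreutils `timeout` instead of trap.
# timeout sends SIGTERM when the limit expires; exit status 124 means it fired.
if timeout 2 sleep 10; then
    echo "finished within the limit"
elif [ $? -eq 124 ]; then
    echo "terminated after 2 seconds"   # this branch runs: sleep 10 exceeds the limit
fi
```

This avoids signal bookkeeping entirely, at the cost of requiring coreutils and a fixed time limit rather than an on-demand kill.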
However, if you want to retain the way you do things, with scannew being a function run as a background job, it takes a bit more work. Since you want user input, the proper way to get it is read, but we still need the script to go on if scannew completes, rather than wait for user input forever. read makes this a bit tricky, because bash waits for the current command to complete before acting on trapped signals. The only solution I know of, without refactoring the entire script, is to put read in a while true loop and give it a timeout of one second with read -t 1. This way, it always takes at least a second for the process to finish, but that may be acceptable in a situation like yours, where you essentially want a polling daemon that lists USB devices.
#! /usr/bin/env bash
function slow_background_work {
# condition can be anything of course
# for testing purposes, we're just checking if the variable has anything in it
while [[ -z $BAD_IDEA ]]
do
BAD_IDEA=$( ls -laR / 2>&1 | wc )
done
printf "\nI'm done!\n"
# `$$` normally gives us our own PID, but a background job runs in a
# subshell, where it is inherited and thus gives the parent's PID
kill -s SIGUSR1 -- $$
return 0
}
# trap SIGUSR1, which we're expecting from the background job
# once it's done with the work we gave it
trap "break" SIGUSR1
slow_background_work &
while true
do
# rewinding the line with printf instead of the prompt string because
# read doesn't understand backslash escapes in the prompt string
printf "\r"
# must check return value instead of the variable
# because a return value of 0 always means there was
# input of _some_ sort, including <enter> and <space>
# otherwise, it's really tricky to test the empty variable
# since read apparently defines it even if it doesn't get input
read -st1 -n1 -p "prompt: " useless_variable && {
printf "Keypress! Quick, kill the background job w/ fire!\n"
# make sure we don't die as we kill our only child
trap "" SIGINT
kill -s SIGINT -- "$!"
trap - SIGINT
break
}
done
trap - SIGUSR1
printf "Welcome to the start of the rest of your script.\n"
exit 0
Of course, if what you actually want is a daemon that watches for changes in the number of USB devices or something similar, you should look into systemd, which might provide something more elegant.
Best Answer
You're already doing it. Waiting for a command to finish is the shell's normal behavior. (Try typing sleep 5 at a shell prompt.) The only time that doesn't happen is when you append & to the command, or when the command itself does something to effectively background itself (the latter is a bit of an oversimplification).
You can delete the wait %% command from your script; it probably just produces an error message like wait: %%: no such job. (Question: does it actually print such a message?)
Do you have any evidence that the tar command isn't completing before the /home/ftp.sh command starts?
Incidentally, it's a bit odd to have things other than users' home directories directly under /home.
(I know most of this was already covered in comments, but I thought there should be an actual answer.)
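The sequential behavior described above can be seen with a minimal sketch, with sleep standing in for the tar command:

```shell
#!/bin/sh
# Sketch: the second command starts only after the first exits;
# no `wait` is needed because neither command is backgrounded.
start=$(date +%s)
sleep 2                      # stands in for the long-running tar command
end=$(date +%s)
echo "second command started $((end - start))s after the first"
```

The elapsed time reported is at least two seconds, confirming that the shell did not move on until the first command finished.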