Yes, you are right, this is similar to "How can I make environment variables "exported" in a shell script stick around?".
If you define a variable as:
COUNTER=$((COUNTER+1))
then it exists in the current shell only. It will not be seen by other processes this shell starts, nor by the calling shell. When using export:
export COUNTER=$((COUNTER+1))
then the variable is also placed in the environment, and so is seen by the child processes this shell starts.
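The difference is easy to observe with a minimal sketch (`sh -c` here stands in for any child process the shell might start):

```shell
unset COUNTER                                  # start from a clean slate
COUNTER=5
sh -c 'echo "child sees: ${COUNTER-unset}"'    # prints "child sees: unset"

export COUNTER
sh -c 'echo "child sees: ${COUNTER-unset}"'    # prints "child sees: 5"
```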
When you create 5 processes with xargs, they each inherit the environment of the calling shell. However, they do not share any subsequent changes to the environment with each other.
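A sketch of that behaviour (the variable name and the `-P5` parallelism are assumptions for illustration; xargs passes each item to `sh -c` as `$0` here):

```shell
export COUNTER=10

# Five children, each inheriting COUNTER=10; each bumps only its own copy.
seq 5 | xargs -n1 -P5 sh -c '
  COUNTER=$((COUNTER+1))
  echo "child $0 sees COUNTER=$COUNTER"
'

echo "parent still has COUNTER=$COUNTER"   # prints "parent still has COUNTER=10"
```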
Without `-t`, `sshd` gets the stdout of the remote shell (and of children like `sleep`) and its stderr via two pipes (and also sends the client's input via another pipe).

`sshd` does wait for the process in which it started the user's login shell, but, after that process has terminated, it also waits for EOF on the stdout pipe (not the stderr pipe, in the case of OpenSSH at least).

And EOF happens when no process holds an open file descriptor on the writing end of the pipe, which typically only happens once all the processes that didn't have their stdout redirected to something else are gone.
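The same pipe-EOF rule can be observed locally, without ssh (a sketch where `cat` plays the role of `sshd` reading the stdout pipe):

```shell
# The shell exits immediately after printing, but 'sleep' inherits the write
# end of the pipe, so 'cat' only sees EOF (and the pipeline only returns)
# once 'sleep' is gone, about 3 seconds later.
sh -c 'sleep 3 & echo "shell exiting"' | cat
```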
When you use `-t`, `sshd` doesn't use pipes. Instead, all the interaction (stdin, stdout, stderr) with the remote shell and its children is done through one pseudo-terminal pair.

With a pseudo-terminal pair, for `sshd` interacting with the master side, there is no similar EOF handling, and while at least some systems provide alternative ways to know whether there are still processes with fds open to the slave side of the pseudo-terminal (see @JdeBP's comment below), `sshd` doesn't use them. So it just waits for the termination of the process in which it executed the login shell of the remote user, and then exits.

Upon that exit, the master side of the pty pair is closed, which means the pty is destroyed, so processes controlled by the slave will receive a SIGHUP (which by default would terminate them).
Edit: that last part was incorrect, though the end result is the same. See @pynexj's answer for a correct description of what exactly happens.
Best Answer
The outer loop that you have basically runs its whole body as a background job, once per iteration.
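In sketch form it would look something like this (the name `some_compound_command` comes from the answer's text; its body here is an assumed stand-in):

```shell
# Assumed stand-in for the real work of one outer iteration.
some_compound_command() {
  for j in 1 2 3; do
    :                              # the inner j-loop runs sequentially
  done
  echo "instance $1 finished"
}

for i in $(seq 1 10); do
  some_compound_command "$i" &     # ten instances, all started in the background
done
wait                               # keeps the sketch itself from exiting early
```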
This would start ten concurrent instances of `some_compound_command` in the background. They will be started as fast as possible, but not quite "all at the same time" (i.e. if `some_compound_command` takes very little time, then the first may well finish before the last one starts).

The fact that `some_compound_command` happens to be a loop is not important. This means that the code that you show is correct in that iterations of the inner `j`-loop will be running sequentially, but all instances of the inner loop (one per iteration of the outer `i`-loop) would be started concurrently.

The only thing to keep in mind is that each background job will be running in a subshell. This means that changes made to the environment (e.g. modifications to the values of shell variables, changes of current working directory with `cd`, etc.) in one instance of the inner loop will not be visible outside of that particular background job.

What you may want to add is a `wait` statement after your loop, just to wait for all background jobs to actually finish, at least before the script terminates:
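A minimal sketch of that (a bare `wait` with no arguments waits for every child of the current shell):

```shell
for i in $(seq 1 3); do
  { sleep 1; echo "job $i done"; } &
done
wait            # without this line, "all jobs finished" could be printed
                # (and the script could exit) before any job completes
echo "all jobs finished"
```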