SSH – Cleanup trap for ssh command on multiple remote hosts

Tags: linux, ssh, trap

I want to execute an arbitrary command on multiple remote hosts using ssh. These are mostly long-running commands that monitor server resources and aggregate their output on my local workstation (like tail -f, mpstat, or tcpdump/tcpflow piped through grep, etc.).
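Done by hand for a single run, it looks something like this (mpstat is purely an illustration here):

ssh server01 'mpstat 2' &
ssh server02 'mpstat 2' &
ssh server03 'mpstat 2' &
wait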

The problem is that I haven't found a way that keeps the commands running while later reliably terminating both the SSH connections and the remote commands on all hosts.

I've tried many ssh flag variations, wait, cat, read, traps, etc. Either everything is killed instantly, the local terminal gets stuck, or the remote command keeps running on the remote hosts after the SSH connection is killed.

This is what I have at the moment:

function server_ssh() {
  pids=""
  for box in 01 02 03; do
    # -t -t forces a remote tty, so the remote command gets SIGHUP when
    # the ssh client exits; -f is dropped here because it makes ssh fork
    # again, so $! would no longer point at the right process
    ssh -t -t server$box "$1" &
    pids="$pids $!"
  done
  # plain kill (SIGTERM) lets ssh shut the connections down cleanly;
  # kill -9 would prevent that
  trap 'kill $pids' SIGINT
  # wait, cat, read ??
}
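The intended usage is something like this (the log path is only an illustration):

server_ssh 'tail -f /var/log/syslog'

Pressing Ctrl-C fires the trap, but the tail processes often keep running on the remote hosts.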

Best Answer

Looks like a job for GNU parallel. Try something like this:

function server_ssh() {
  parallel --line-buffer ssh server{} "$1" ::: 01 02 03
}
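Usage stays the same as before; for instance (mpstat just as an illustration):

server_ssh 'mpstat 2'

A single Ctrl-C then interrupts parallel and all of its ssh children in the foreground job together.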

The --line-buffer option is needed to allow the output of the different commands to be mixed together (by default, parallel groups the output of each command, so it has to wait until a command finishes before displaying its output).

-u should also work here. Normally it could let parts of lines get mixed together, but since the commands you're using are line-based, that shouldn't happen.

Edit: --line-buffer is in alpha testing in the version I have (20130922-1), so I'd go with -u.
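With that in mind, here is the same function with -u instead of --line-buffer (only the flag changes):

function server_ssh() {
  # -u / --ungroup prints output as soon as it arrives, unbuffered
  parallel -u ssh server{} "$1" ::: 01 02 03
}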
