Bash – How to set Timeout for an ssh command and also get back the result of remote commands

bash, rhel, ssh, timeout

I have a bash script on an RHEL server that connects to thousands of nodes and returns the output of some 5 commands – it collects info from RHEL servers only.

It works fine, but the problem is that some nodes end up freezing when I run the following commands:

rpm -q <package-name>
rpm --queryformat '%{installtime:date} %{name}\n' -q <package-name>

Now, since this brings my script to a complete stop, I want to set a timeout for the ssh command and exit the ssh session if a remote command keeps it waiting for too long [say 10 seconds]. When this happens I want to time out, exit that ssh session, and move on to the next node. How do I do this?

Here's the part of the script where I currently pull out the information and store it in a variable called dump [please ignore my poor scripting, I'm new at this]:

dump=$(ssh -o ServerAliveCountMax=1 -o ServerAliveInterval=10 \
       -o ConnectTimeout=10 -o BatchMode=yes $i "
    cat /proc/meminfo | grep -i \"memtotal\" | cut -d \":\" -f2 | tr -d \" \" | tr -d \"kB\";
    cat /etc/redhat-release | cut -d \" \" -f7;
    dmidecode | grep -i \"prod\" | grep -vi \"desktop\" | grep -iv \"id\" | cut -d \" \" -f3,4 | tr \" \" \"_\";
    uptime | cut -d \" \" -f4,5 | tr \" \" \"_\" | tr -d \",\";
    service kdump status 2>/dev/null | tr \" \" \"_\";")

Is there any way to time this out if it keeps going for too long?

WHAT I ALREADY TRIED:

(ssh -q -o BatchMode=yes -o PasswordAuthentication=no -o ConnectTimeout=1 $i "rpm --queryformat '%{installtime:date} %{name}\n' -q \"kexec-tools\" | cut -d \" \" -f1,2,3,4|tr \" \" \"_\"" > /dev/null) & pid=$!
(sleep 10 && kill -HUP $pid ) 2>/dev/null & watcher=$!
if wait $pid 2>/dev/null; then
    pkill -HUP -P $watcher
    wait $watcher
else
    echo -e "$i Unable to ssh" >> res && continue
fi

However, this way I am not able to store the result of the remote rpm command.
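For reference, coreutils `timeout` can bound the whole ssh call while still letting command substitution capture whatever output arrived before the limit. A minimal local sketch, with `sh -c 'echo captured; sleep 30'` standing in for the possibly-hanging ssh command and a 2-second limit instead of the question's 10:

```shell
# timeout(1) kills the command's process group once the limit is
# reached and exits with status 124; any output the command wrote
# before being killed is still captured by the $( ) substitution.
dump=$(timeout 2 sh -c 'echo captured; sleep 30')
status=$?

echo "exit=$status"   # 124 indicates the time limit was hit
echo "dump=$dump"     # output produced before the kill survives
```

In the script this would wrap the existing `ssh ...` invocation, so a frozen node costs at most the timeout before the loop moves on to the next one.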

Any help is extremely appreciated.

Best Answer

Use GNU Parallel to parallelize your collection:

parallel --slf rhel-nodes --tag --timeout 1000% --onall --retries 3 \
  "rpm -q {}; rpm --queryformat '%{installtime:date} %{name}\n' -q {}" \
  ::: bash bc perl

Put the nodes in ~/.parallel/rhel-nodes.
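The --slf file is a plain sshloginfile with one login per line; a `user@` prefix selects the login name and an `N/` prefix caps the number of simultaneous jobs on that node. A sketch with hypothetical hostnames:

```
node001.example.com
root@node002.example.com
4/node003.example.com
```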

--tag prepends each line of output with the name of the node. --timeout 1000% kills a command if it takes 10 times longer than the median run time. --onall runs all the commands on all the servers. --retries 3 re-runs a command up to 3 times if it fails. ::: bash bc perl are the packages you want to test for. If you have many packages, put them in a file and use the cat packages | parallel ... syntax instead of parallel ... ::: packages.

GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.

If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU:

[Image: Simple scheduling]

GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time:

[Image: GNU Parallel scheduling]

Installation

If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:

(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash

For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README

Learn more

See more examples: http://www.gnu.org/software/parallel/man.html

Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html

Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
