Bash – a convenient way to run a long list of commands and show a message if something goes wrong


Most Linux guides consist of pages like "you need to run command_1, then command_2, then command_3", etc. Since I don't want to waste my time running all of them manually, I'd rather create a script

command_1
command_2
command_3

and run it once. But, more often than not, some command will fail, and I will have no idea which one it was. Also, the remaining commands usually make no sense if something failed earlier. So a better script would be something like

   (command_1 && echo OK command_1 || (echo FAILED command_1; false)) &&
   (command_2 && echo OK command_2 || (echo FAILED command_2; false)) &&
   (command_3 && echo OK command_3 || (echo FAILED command_3; false)) &&
   echo DONE ||
   echo FAILED

But this requires writing too much boilerplate, repeating each command three times, and there is too high a chance that I will mistype some of the parentheses. Is there a more convenient way of doing what the last script does? In particular:

  • run commands sequentially
  • break if any command fails
  • report which command failed, if any
  • allow normal interaction with the commands: print all output, and accept keyboard input if a command asks for something

Answers summary (2 January 2020)

There are two types of solutions:

  • Those that allow copy-pasting commands from a guide without modification, but don't print the failed command at the end. So, if the failed command produced very long output, you will have to scroll up many lines to see which command failed. (All top answers)
  • Those that print the failed command on the last line, but require you to modify the commands after copy-pasting them, either by adding quotes (answer by John) or by adding try statements and splitting chained commands into separate ones (answer by Jasen).

You rock, folks, but I'll leave this question open for a while. Maybe someone knows a solution that satisfies both needs (print the failed command on the last line and allow copy-pasting commands without modification).

Best Answer

One option would be to put the commands in a bash script, and start it with set -e.

This will cause the script to terminate early if any command exits with a non-zero exit status.

See also this question on Stack Overflow: https://stackoverflow.com/q/19622198/828193

To print the error, you could use

trap 'do_something' ERR

where do_something is a command you would create to show the error.
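
For instance, bash sets the BASH_COMMAND variable to the command that was running when the ERR trap fired, so a minimal handler along these lines can also name the failing command:

trap 'echo "FAILED: $BASH_COMMAND" 1>&2' ERR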

Here is an example of a script to see how it works:

#!/bin/bash

set -e
trap 'echo "******* FAILED *******" 1>&2' ERR

echo 'Command that succeeds'   # this command works
ls non_existent_file           # this should fail
echo 'Unreachable command'     # and this is never called
                               # due to set -e

And this is the output:

$ ./test.sh 
Command that succeeds
ls: cannot access 'non_existent_file': No such file or directory
******* FAILED *******

Also, as mentioned by @jick, keep in mind that the exit status of a pipeline is by default the exit status of its final command. This means that if a non-final command in the pipeline fails, that won't be caught by set -e. To fix this problem, if you are concerned about it, you can use set -o pipefail.
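
For example, here is a minimal sketch (false | true stands in for any pipeline whose non-final command fails):

#!/bin/bash

set -e
trap 'echo "******* FAILED *******" 1>&2' ERR

false | true       # without pipefail, the pipeline's exit status is
                   # that of true (0), so neither set -e nor the trap reacts
echo 'Still running'

set -o pipefail
false | true       # now false's failure becomes the pipeline's status,
                   # so the trap fires and set -e exits the script
echo 'Unreachable command'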


As suggested by @glenn jackman and @Monty Harder, using a function as the handler can make the script more readable, since it avoids nested quoting. Since we are using a function anyway, I removed set -e entirely and used exit 1 in the handler, which could also make it more readable for some:

#!/bin/bash

error_handler() {
  echo "******* FAILED *******" 1>&2
  exit 1
}

trap error_handler ERR

echo 'Command that succeeds'   # this command works
ls non_existent_file           # this should fail
echo 'Unreachable command'     # and this is never called
                               # due to the exit in the handler

The output is identical to the above, though the exit status of the script is different.
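
For instance, assuming the script is saved as test.sh as before, checking the exit status now shows the handler's exit 1 (the first version would instead propagate the failing command's own status, which is 2 for GNU ls here):

$ ./test.sh; echo $?
Command that succeeds
ls: cannot access 'non_existent_file': No such file or directory
******* FAILED *******
1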
