Bash – the difference between running a command directly and with `bash -c`


For a command (builtin or external), what is the difference between
running it directly in a bash shell and running it with bash -c from
that shell? What are the advantages and disadvantages of each
approach compared to the other?

For example, in a bash shell, run `date` directly, and run `bash -c
date`. Also consider a builtin command instead of an external one.

Best Answer

  1. The -c option allows programs to run commands.  It’s a lot easier to fork and do

    execl("/bin/sh", "sh", "-c", "date | od -cb  &&  ps > ps.out", NULL);
    

    than it is to fork, create a pipe, fork again, call execl in each child, call wait, check the exit status, fork again, call close(1), open the file, ensure that it is open on file descriptor 1, and do another execl.  I believe that this was the reason why the option was created in the first place.

    The system() library function runs a command by the above method.
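
    As a shell-level sketch of the same idea, you can hand that entire compound command to sh as a single argument; the pipe, the &&, and the redirection are all parsed by the child shell, not by the calling program:

    $ sh -c 'date | od -cb  &&  ps > ps.out'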

  2. It provides a way to take an arbitrarily complex command and make it look like a simple command.  This is useful with programs that run a user-specified command, such as find … -exec or xargs.  But you already knew that; it was part of the answer to your question, How to specify a compound command as an argument to another command?
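
    For instance, a sketch (the file pattern and the gzip step are arbitrary): the && below is seen only by the child sh, while find sees just a simple command; the trailing sh becomes $0 and {} becomes $1 inside the child shell.

    $ find . -name '*.log' -exec sh -c 'gzip "$1" && echo "compressed: $1"' sh {} \;
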
  3. It can come in handy if you’re running an interactive shell other than bash.  Conversely, if you are running bash, you can use this syntax

    $ ash  -c "command"
                 ︙
     
    $ csh  -c "command"
                 ︙
     
    $ dash -c "command"
                 ︙
     
    $ zsh  -c "command"
                 ︙

    to run one command in another shell, as all of those shells also recognize the -c option.  Of course you could achieve the same result with

    $ ash
    ash$ command
            ︙
    ash$ exit
     
    $ csh
    csh$ command
            ︙
    csh$ exit
     
    $ dash
    dash$ command
            ︙
    dash$ exit
     
    $ zsh
    zsh$ command
            ︙
    zsh$ exit

    I used ash$ , etc., to illustrate the prompts from the different shells; you probably wouldn’t actually get those.

  4. It can come in handy if you want to run one command in a “fresh” bash shell; for example,

    $ ls -lA
    total 0
    -rw-r--r-- 1 gman gman   0 Apr 14 20:16 .file1
    -rw-r--r-- 1 gman gman   0 Apr 14 20:16 file2
    
    $ echo *
    file2
    
    $ shopt -s dotglob
    
    $ echo *
    .file1 file2
    
    $ bash -c "echo *"
    file2
    

    or

    $ type shift
    shift is a shell builtin
    
    $ alias shift=date
    
    $ type shift
    shift is aliased to ‘date’
    
    $ bash -c "type shift"
    shift is a shell builtin
    
  5. The above is a misleading over-simplification.  When bash is run with -c, it is considered a non-interactive shell, and it does not read ~/.bashrc, unless -i is specified.  So,

    $ type cp
    cp is aliased to ‘cp -i’          # Defined in  ~/.bashrc
     
    $ cp .file1 file2
    cp: overwrite ‘file2’? n
     
    $ bash -c "cp .file1 file2"
                                      # Existing file is overwritten without confirmation!
    $ bash -c -i "cp .file1 file2"
    cp: overwrite ‘file2’? n

    You could use -ci, -i -c or -ic instead of -c -i.

    This probably applies to some extent to the other shells mentioned in paragraph 3, so the long form shown there (i.e., starting the shell, typing the command, and exiting, which is actually exactly the same amount of typing) might be safer, especially if you have initialization/configuration files set up for those shells.
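
    One way to see which kind of shell you are getting is to test $-, which contains i only in an interactive shell (a sketch; ignore any output your ~/.bashrc itself might produce):

    $ bash -c  'case $- in *i*) echo interactive;; *) echo non-interactive;; esac'
    non-interactive

    $ bash -ic 'case $- in *i*) echo interactive;; *) echo non-interactive;; esac'
    interactive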

  6. As Wildcard explained, since you’re running a new process tree (a new shell process and, potentially, its child process(es)), changes made to the environment in that child shell cannot affect the parent shell (current directory, values of environment variables, function definitions, etc.).  Therefore, it’s hard to imagine a shell builtin command that would be useful when run by sh -c.  fg, bg, and jobs cannot affect or access background jobs started by the parent shell, nor can wait wait for them.  sh -c "exec some_program" is essentially equivalent to just running some_program the normal way, directly from the interactive shell.  sh -c exit is a big waste of time.  ulimit and umask would change those settings only for the child shell’s process, which then exits without ever exercising them.

    Just about the only builtin command that would be functional in a sh -c context is kill.  Of course, the commands that only produce output (echo, printf, pwd and type) are unaffected, and, if you write a file, that will persist.
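
    A quick sketch of that isolation (the variable name is arbitrary); the parent shell’s current directory and variables are untouched:

    $ bash -c 'cd /tmp && export DEMO_VAR=hello'

    $ echo "${DEMO_VAR-unset}"        # the cd and export happened only in the child shell
    unset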

  7. Of course you can use a builtin in conjunction with an external command; e.g.,

    sh -c "cd some_directory; some_program"

    but you can achieve essentially the same effect with a normal subshell:

    (cd some_directory; some_program)

    which is more efficient.  The same (both parts) can be said for something like

    sh -c "umask 77; some_program"
    or ulimit (or shopt).  And since you can put an arbitrarily complex command after -c — up to the complexity of a full-blown shell script — you might have occasion to use any of the repertoire of builtins; e.g., source, read, export, times, set and unset, etc.
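
    At the script-like end of that spectrum, a sketch (some_directory and some_program are the same placeholders as above):

    bash -c 'set -eu; umask 077; cd some_directory; export LC_ALL=C; exec some_program'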