Bash script and local env variable namespace collision

bash, environment-variables

I have a bash script with a variable called VAR_A

I also have a local env variable called VAR_A

The bash script calls the command:

echo ${VAR_A}

I am not able to change the variable in the script to another name, but I can change how the script calls the echo command.

How can I modify the echo command so that it prints the local env variable instead of the one set in the script?

UPDATE FOR CLARITY:

The situation:

User has an existing .bashrc file which on login, sets:

VAR_A="someValue"
export VAR_A

This allows the user to:

~]$ echo ${VAR_A}
    someValue

I have a configuration file for some bash scripts

VAR_A="someOtherValue"

I have a bash script:

#!/bin/bash

. ../configuration  # imports config file with some values

#  do stuff

echo ${VAR_A}

Executing the script from the terminal (a bash shell, as the logged-in user) prints:

~]$ ./run_script.sh
    someOtherValue

I need it to print:

~]$ ./run_script.sh
    someValue

Best Answer

Shell variables are initialised from environment variables in every shell; you can't get around that.

When the shell starts, for every environment variable it receives whose name is valid as a shell variable name, the shell assigns that value to the corresponding shell variable. For instance, if your script is started as:

env VAR_A=xxx your-script

(and has a #!/bin/bash shebang), env will execute /bin/bash and pass VAR_A=xxx in its environment, and bash will assign the value xxx to its $VAR_A variable.
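You can check that directly from an interactive shell, starting bash with a one-liner instead of a script:

$ env VAR_A=xxx bash -c 'echo "$VAR_A"'
xxx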

In the Bourne shell and in the C-shell, if you assign a new value to that shell variable, it doesn't affect the corresponding environment variable passed to later commands executed by that shell; you have to use export or setenv for that (note, however, that in the Bourne shell, unsetting a variable removes both the shell variable and the environment variable).

In:

env VAR=xxx sh -c 'VAR=yyy; other-command'

(with sh being the Bourne shell, not a modern POSIX sh), or:

env VAR=xxx csh -c 'set VAR = yyy; other-command'

other-command receives VAR=xxx in its environment, not VAR=yyy; you'd need to write it:

env VAR=xxx sh -c 'VAR=yyy; export VAR; other-command'

or

env VAR=xxx csh -c 'setenv VAR yyy; other-command'

for other-command to receive VAR=yyy in its environment.

However, ksh (and, as a result, POSIX, and then bash and all other modern Bourne-like shells) broke that.

Upon start-up, those modern shells bind their shell variables to the corresponding environment variables, so assigning a new value to such a shell variable also updates the environment variable passed to later commands.

What that means is that a script may clobber the environment just by setting one of its variables, even if it doesn't export it. Some shells are even known to remove the environment variables they cannot map to shell variables (which is why it's recommended to only use shell-mappable names for environment variables).
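You can see that behaviour with bash (printenv, run as a child process, shows the environment it actually receives):

$ env VAR=xxx bash -c 'VAR=yyy; printenv VAR'
yyy

No export was used, yet the environment passed to the child command was changed anyway.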

That's a main reason why, by convention, all-uppercase variable names should be reserved for environment variables.

To work around that, if you want the commands executed by your script to receive the same environment as the shell interpreting your script received, you'd need to store that environment somehow. You can do it by adding:

my_saved_env=$(export -p)

at the start of your script, and then run your commands with:

(eval "$my_saved_env"; exec my-other-command and its args)

In the bosh and mksh shells, local variables in functions don't clobber environment variables (as long as you don't export them), though note that shell builtins that use special variables (like HOME for cd, PATH for executable lookup...) would use the value of the shell (local) variable, not the environment one.

$ env VAR=env mksh -c 'f() { local VAR=local; echo "$VAR"; printenv VAR; }; f'
local
env