Environment variables containing functions are a bash hack; zsh doesn't have anything comparable built in, but you can get close with a few lines of code. Environment variables contain strings; older versions of bash, from before Shellshock was discovered, stored a function's code in a variable whose name is the name of the function and whose value is () { followed by the function's code followed by }. You can use the following code to import variables with this encoding and attempt to run them with bash-like settings. Note that zsh cannot emulate all bash features; all you can do is get a bit closer (e.g. make $foo split the value and expand wildcards, and make arrays 0-based).
bash_function_preamble='
emulate -LR ksh
'
for name in ${(k)parameters}; do
  [[ "-$parameters[name]-" = *-export-* ]] || continue
  [[ ${(P)name} = '() {'*'}' ]] || continue
  ((! $+builtins[$name])) || continue
  functions[$name]=$bash_function_preamble${${${(P)name}#"() {"}%"}"}
done
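As a self-contained sketch of the loop in action (the function name greet is made up for illustration): hand-craft a pre-Shellshock-style exported variable, run the import loop, then call the resulting function.

```shell
#!/usr/bin/env zsh
# Hypothetical demo: craft a pre-Shellshock-style exported "function"
# named greet, import it with the loop from above, then call it.
export greet='() {
  echo hello
}'

bash_function_preamble='
emulate -LR ksh
'
for name in ${(k)parameters}; do
  [[ "-$parameters[name]-" = *-export-* ]] || continue
  [[ ${(P)name} = '() {'*'}' ]] || continue
  ((! $+builtins[$name])) || continue
  functions[$name]=$bash_function_preamble${${${(P)name}#"() {"}%"}"}
done

greet   # prints: hello
```

The export check runs first, so only exported (hence scalar) variables ever reach the pattern test.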
(As Stéphane Chazelas, the original discoverer of Shellshock, noted, an earlier version of this answer could execute arbitrary code at this point if the function definition was malformed. This version doesn't, but of course as soon as you execute any command, it could be a function imported from the environment.)
Post-Shellshock versions of bash encode functions in the environment using invalid variable names (e.g. BASH_FUNC_myfunc%%). This makes them harder to parse reliably, as zsh doesn't provide an interface to extract such variable names from the environment.
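You can inspect the new encoding from any post-Shellshock bash (the function name f is arbitrary):

```shell
# Show how a modern bash serializes an exported function into the
# environment: the variable is named BASH_FUNC_f%%, which is not a
# valid shell identifier, but the value still begins with "() {".
bash -c 'f() { echo hi; }; export -f f; printenv "BASH_FUNC_f%%"'
```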
I don't recommend doing this. Relying on exported functions in scripts is a bad idea: it creates an invisible dependency in your script. If you ever run your script in an environment that doesn't have your function (on another machine, in a cron job, after changing your shell initialization files, …), your script won't work anymore. Instead, store all your functions in one or more separate files (something like ~/lib/shell/foo.sh) and start your scripts by importing the functions they use (. ~/lib/shell/foo.sh). This way, if you modify foo.sh, you can easily search for the scripts that rely on it. If you copy a script, you can easily find out which auxiliary files it needs.
Zsh (and ksh before it) makes this more convenient by providing a way to automatically load functions in scripts where they are used. The constraint is that you can only put one function per file. Declare the function as autoloaded, and put the function definition in a file whose name is the name of the function. Put this file in a directory listed in $fpath (which you can configure through the FPATH environment variable). In your script, declare autoloaded functions with autoload -U foo.
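A minimal sketch of the mechanism (here $dir is a temporary stand-in for a directory you would keep on $fpath permanently, and myfunc is a made-up name):

```shell
#!/usr/bin/env zsh
# One function per file, autoloaded on first use.
dir=$(mktemp -d)
# The file is named after the function; with autoload -U, its contents
# become the function body.
print 'print "hello from myfunc"' > $dir/myfunc
fpath=($dir $fpath)
autoload -U myfunc
myfunc   # the definition is read from $dir/myfunc on this first call
```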
Furthermore, zsh can compile scripts to save parsing time. Call zcompile to compile a script; this creates a file with the .zwc extension. If this file is present, then autoload will load the compiled file instead of the source code. You can use the zrecompile function to (re)compile all the function definitions in a directory.
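For instance (again with a temporary directory and made-up function name standing in for a real function directory):

```shell
#!/usr/bin/env zsh
# Compile a function definition file; autoload prefers the .zwc
# when it is present and newer than the source.
dir=$(mktemp -d)
print 'print compiled' > $dir/myfunc
zcompile $dir/myfunc     # creates $dir/myfunc.zwc
ls $dir                  # myfunc  myfunc.zwc
```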
Variables are always available to sub-processes. In:
a=1
(echo "$a")
you see 1.
I think what you meant is that you want the variable to have a local scope and be exported to the environment, so that it is passed as an environment variable to commands that are executed. The execution of a command (execve()) is the thing that wipes the memory of a process (and the environment is a way to preserve some data across it); forking a child copies the entire memory, so everything is preserved.
For that, you can use local -x:
a=(1 2)
f() {
  local -x a=3
  typeset -p a
  printenv a # printenv being *executed*
}
f
typeset -p a
gives:
typeset -x a=3
3
typeset -a a=( 1 2 )
Or you can export it after having declared it local:
a=(1 2)
f() {
  local a=3
  export a
  typeset -p a
  printenv a # printenv being *executed*
}
f
typeset -p a
Note that you can pass a variable in the environment of a single command without defining it otherwise as a shell variable with:
a=(1 2)
f() {
  a=3 printenv a # printenv being *executed*
}
f
typeset -p a
Note that local originated in the Almquist shell in the late 80s, but works differently from zsh's. In the Almquist shell (and its descendants like dash and the sh of NetBSD/FreeBSD), local only affects the scope of a variable and doesn't change its value or attributes.

zsh's local works more like ksh93's typeset in that it declares a brand new variable that is independent from the one in the outer scope.

ksh88, bash and pdksh's local/typeset try to do that as well, but still inherit some attributes from the variable of the outer scope, including the export attribute. That changed in ksh93, though note that ksh93 also switched to static scoping and only implements local scope in functions declared with the function f { ...; } syntax.
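The difference in value inheritance is easy to demonstrate (f is a throwaway function; you need dash or another Almquist descendant for the first line):

```shell
# In dash (Almquist descendant), local inherits the outer value:
dash -c 'a=1; f() { local a; echo "dash: [$a]"; }; f'   # dash: [1]
# In zsh (and bash), local creates a fresh variable with no value:
zsh  -c 'a=1; f() { local a; echo "zsh:  [$a]"; }; f'   # zsh:  []
```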
Best Answer
export in zsh is shorthand for typeset -gx, where the attribute g means “global” (as opposed to local to a function) and the attribute x means “exported” (i.e. in the environment). This also works in ksh and bash.
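For example (the GREP_OPTIONS value is just illustrative; in bash, typeset is a synonym of declare and -g exists since bash 4.2):

```shell
# Equivalent to: export GREP_OPTIONS='--color=auto'
typeset -gx GREP_OPTIONS='--color=auto'
printenv GREP_OPTIONS   # prints: --color=auto
```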
If you never export GREP_OPTIONS in the first place, you don't need to unexport it. You can also use the indirect, portable way: unsetting a variable unexports it. In ksh/bash/zsh, this doesn't work if the variable is read-only.
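A sketch of both routes, following the typeset shorthand above (works in ksh, bash and zsh):

```shell
export GREP_OPTIONS='--color=auto'
typeset +x GREP_OPTIONS   # clear the export attribute, keeping the value
sh -c 'echo "${GREP_OPTIONS-unexported}"'   # prints: unexported
echo "$GREP_OPTIONS"                        # still: --color=auto

unset GREP_OPTIONS        # the portable alternative: unsetting also unexports
```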