It is called brace expansion and is also present in zsh.

One important difference between bash and zsh is the ordering: in zsh, parameter expansion is performed before brace expansion, so parameters inside the braces are expanded; in bash, brace expansion is performed first, so they are not.
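A minimal illustration of that difference (the variable name `n` is just an example):

```shell
n=3
echo {1..$n}
# bash: brace expansion runs before $n is expanded, the braces are left
#       alone, and this prints "{1..3}"
# zsh:  $n is expanded first, so this prints "1 2 3"
```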
Environment variables containing functions are a bash hack. Zsh has nothing similar, but you can get close with a few lines of code. Environment variables contain strings; older versions of bash, from before Shellshock was discovered, stored a function's code in a variable whose name is the name of the function and whose value is `() {`, followed by the function's code, followed by `}`. You can use the following code to import variables with this encoding and attempt to run them with bash-like settings. Note that zsh cannot emulate all bash features; all you can do is get a bit closer (e.g. make `$foo` split the value and expand wildcards, and make arrays 0-based).
```zsh
bash_function_preamble='
emulate -LR ksh
'
for name in ${(k)parameters}; do
  # only consider exported variables
  [[ "-$parameters[name]-" = *-export-* ]] || continue
  # only values that look like a pre-Shellshock function export: () { … }
  [[ ${(P)name} = '() {'*'}' ]] || continue
  # don't let an environment variable override a builtin
  ((! $+builtins[$name])) || continue
  # strip the "() {" prefix and "}" suffix, prepend the ksh-emulation preamble
  functions[$name]=$bash_function_preamble${${${(P)name}#"() {"}%"}"}
done
```
(As Stéphane Chazelas, the original discoverer of Shellshock, noted, an earlier version of this answer could execute arbitrary code at this point if the function definition was malformed. This version doesn't, but of course as soon as you execute any command, it could be a function imported from the environment.)
Post-Shellshock versions of bash encode functions in the environment using invalid variable names (e.g. `BASH_FUNC_myfunc%%`). This makes them harder to parse reliably, as zsh doesn't provide an interface to extract such variable names from the environment.
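You can observe this encoding from bash itself (the function name `myfunc` is just an example, and the exact suffix appended to the name varies between bash versions):

```shell
myfunc() { echo hi; }
export -f myfunc
# The exported name is not a valid shell variable name,
# e.g. BASH_FUNC_myfunc%%=() { echo hi; }
env | grep '^BASH_FUNC_myfunc'
```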
I don't recommend doing this. Relying on exported functions in scripts is a bad idea: it creates an invisible dependency in your script. If you ever run your script in an environment that doesn't have your function (on another machine, in a cron job, after changing your shell initialization files, …), it won't work anymore. Instead, store all your functions in one or more separate files (something like `~/lib/shell/foo.sh`) and start your scripts by importing the functions they use (`. ~/lib/shell/foo.sh`). This way, if you modify `foo.sh`, you can easily search for the scripts that rely on it, and if you copy a script, you can easily find out which auxiliary files it needs.
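A minimal sketch of that layout (the paths and the function name are examples; `/tmp` is used here only to keep the snippet self-contained, where the text suggests something like `~/lib/shell/foo.sh`):

```shell
# Create a function library file.
cat > /tmp/foo.sh <<'EOF'
foo() {
  printf 'hello from foo\n'
}
EOF

# In the script: the dependency is explicit and visible at the top.
. /tmp/foo.sh
foo
```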
Zsh (and ksh before it) makes this more convenient by providing a way to automatically load functions in scripts where they are used. The constraint is that you can only put one function per file. Declare the function as autoloaded, and put the function definition in a file whose name is the name of the function. Put this file in a directory listed in `$fpath` (which you may configure through the `FPATH` environment variable). In your script, declare autoloaded functions with `autoload -U foo`.
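A sketch of the zsh side, with assumed paths (in native zsh autoloading, the file contains the body of the function):

```zsh
# ~/lib/zsh/functions/foo contains the body of the function foo
fpath=(~/lib/zsh/functions $fpath)
autoload -U foo   # foo is loaded from $fpath the first time it is called
foo some-args
```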
Furthermore, zsh can compile scripts to save parsing time. Call `zcompile` to compile a script; this creates a file with the `.zwc` extension. If this file is present, `autoload` will load the compiled file instead of the source code. You can use the `zrecompile` function to (re)compile all the function definitions in a directory.
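For example (the path is an assumption, and `zrecompile` ships with zsh but must itself be autoloaded; see zshcontrib for its exact calling conventions):

```zsh
zcompile ~/lib/zsh/functions/foo   # writes ~/lib/zsh/functions/foo.zwc
# later, after editing function files, refresh outdated .zwc files:
autoload -Uz zrecompile
zrecompile -q ~/lib/zsh/functions
```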
Best Answer
Support for `{var}>...` was added to `ksh93`, `bash` and `zsh` at the same time, on a suggestion of a `zsh` developer. The `{var}>...` operator works in `zsh`, but not for compound commands.

Also note that while in `cmd 3> file` the fd 3 is open only for `cmd`, in `cmd {var}> file` the dynamically allocated fd (stored in `$var`) remains open after `cmd` returns, in both `zsh` and `bash`.
That operator is mostly designed to be used with `exec` (see also `sysopen` in `zsh` for a more straightforward interface to the `open()` system call). So your code above is missing an `exec {tmp}>&-` to release that fd afterwards in `bash`.

So here you could do:
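The original code block did not survive here; the following is a hedged reconstruction, with a placeholder `ls | tr` pipeline standing in for the command whose stderr is being captured while its stdout passes through:

```shell
exec {tmp}>&1    # let the shell pick a free fd (>= 10) and dup stdout onto it
errors=$( { ls /some/dir | tr '[:lower:]' '[:upper:]'; } 2>&1 >&"$tmp")
exec {tmp}>&-    # release the fd so it does not leak
```

`$errors` ends up holding the pipeline's stderr, while its normal output still reaches the original stdout.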
Which would work in `bash`, `zsh` and `ksh93` and not leak a fd (the quotes around `$tmp` are only needed in `bash`, and only when its `posix` option is not enabled). Note that in `ksh93`, or when `zsh` or `bash` are in POSIX mode, a failing `exec` causes the shell to exit (`dup()` failing here could be caused by stdout being closed, by some limit on the number of open files being reached, or by other pathological cases for which you may want to exit anyway).

But here you don't need a dynamically allocated fd; just use fd 3 for instance, which is not used in that code:
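Again, the code block itself is missing; a hedged reconstruction with the same placeholder pipeline:

```shell
{
  # 2>&1: send stderr into the command substitution
  # >&3:  send the pipeline's stdout back to the original stdout
  # 3>&-: don't leave the extra fd open for ls and tr
  errors=$( { ls /some/dir | tr '[:lower:]' '[:upper:]'; } 2>&1 >&3 3>&-)
} 3>&1   # make fd 3 a copy of stdout for the duration of the group
```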
Which would work in any Bourne-like shell.

As in the dynamically allocated fd approach above, even if it's less obvious: if the `dup2()` (in `3>&1`) fails, the assignment will not be run, so you may want to make sure `errors` is initialised beforehand (with an `unset -v errors` for instance).

Note that it doesn't matter whether fd 3 is otherwise open or in use in the rest of the script (the original fd, if open, is left untouched and restored at the end); what matters is whether the code you embed inside the `$(...)` expects fd 3 to be open.

Only fds 0, 1 and 2 are expected to be open by applications; other fds are not.
`ls` and `tr` don't expect anything about fd 3. A case where you may need to use a different fd is when your code explicitly makes use of that fd and expects it to have been opened beforehand: if instead of `ls` you had `cat /dev/fd/3`, fd 3 would be expected to have been opened on some resource earlier in your script.

To answer the question on how to assign the first free fd in POSIX shells: I don't think there's a way with the POSIX shell and utilities API. It may also not make sense: the shell may do what it wants internally with any fd, provided that doesn't get in the way of its own API. For instance, you may find that fd 11 is free now, but it may later be used by the shell for something internal, and your writing to it could affect the shell's behaviour. Also note that in POSIX `sh`, you can only manipulate fds 0 to 9.
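An example of the `cat /dev/fd/3` situation (this requires an OS that provides `/dev/fd`, as Linux and the BSDs do; the file path is an example):

```shell
printf 'some resource\n' > /tmp/resource

# cat explicitly reads /dev/fd/3, so fd 3 must have been opened beforehand:
cat /dev/fd/3 3< /tmp/resource
```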