bash stores exported function definitions as environment variables. Exported functions look like this:
$ foo() { bar; }
$ export -f foo
$ env | grep -A1 foo
foo=() { bar
}
That is, the environment variable foo has the literal contents:
() { bar
}
When a new instance of bash launches, it looks for these specially crafted environment variables, and interprets them as function definitions. You can even write one yourself, and see that it still works:
$ export foo='() { echo "Inside function"; }'
$ bash -c 'foo'
Inside function
Unfortunately, the parsing of function definitions from strings (the environment variables) can have wider effects than intended. In unpatched versions, it also interprets arbitrary commands that occur after the termination of the function definition. This is due to insufficient constraints in the determination of acceptable function-like strings in the environment. For example:
$ export foo='() { echo "Inside function" ; }; echo "Executed echo"'
$ bash -c 'foo'
Executed echo
Inside function
Note that the echo outside the function definition was unexpectedly executed during bash startup. The function definition is just a vehicle to get the evaluation to happen; the function definition itself and the environment variable used are arbitrary. The shell scans the environment variables, sees foo, which looks like it meets the constraints it knows about for a function definition, and evaluates the line, unintentionally also executing the echo (which could be any command, malicious or not).
This is considered insecure because variables are not typically allowed or expected, by themselves, to directly cause the invocation of arbitrary code contained in them. Perhaps your program sets environment variables from untrusted user input. It would be highly unexpected that a user could manipulate those environment variables to run arbitrary commands, when nothing in your code declares any intent to use the environment variable that way.
Here is an example of a viable attack. You run a web server that runs a vulnerable shell, somewhere, as part of its lifetime. This web server passes environment variables to a bash script; for example, if you are using CGI, information about the HTTP request is often included in environment variables set by the web server. HTTP_USER_AGENT, for instance, might be set to the contents of your user agent. This means that if you spoof your user agent to be something like '() { :; }; echo foo', then when that shell script runs, echo foo will be executed. Again, echo foo could be anything, malicious or not.
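This attack can be simulated locally, without a web server. HTTP_USER_AGENT is the standard CGI variable name; the payload and the command run are just illustrations:

```shell
# Simulate the CGI vector: the web server would export the attacker's
# User-Agent header as HTTP_USER_AGENT before running the handler script.
env "HTTP_USER_AGENT=() { :; }; echo pwned" bash -c 'echo handling request'
# Unpatched bash: prints "pwned" before "handling request".
# Patched bash: prints only "handling request".
```

The script being run never references HTTP_USER_AGENT at all; merely having the variable in the environment is enough on a vulnerable bash.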
As per the official Cygwin Installation Page:
Installing and Updating Cygwin for 64-bit versions of Windows
Run setup-x86_64.exe any time you want to update or install a Cygwin
package for 64-bit windows. The signature for setup-x86_64.exe can be
used to verify the validity of this binary using this public key.
I had a hunch this bash was affected too, so about 15 minutes before you posted your question I did as the setup page instructed.
There is no need for a third-party script. I believe the process went differently for me because I had not cleaned out my download directory at C:\Cygwin64\Downloads
The setup utility scanned my currently installed packages, and I left the defaults alone. As such, all packages in the base system were updated. One of these happened to be the bash that is affected by CVE-2014-6271. You can see proof that you are protected in the following screenshot:
Please note that I do not know whether this update protects against the other vulnerabilities that have been discovered, so please repeat the above procedure over the next few days until this issue is completely fixed.
Best Answer
TL;DR
The shellshock vulnerability is fully fixed in
If your bash shows an older version, your OS vendor may still have patched it by themselves, so best is to check.
If:
shows "vulnerable", you're still vulnerable. That is the only test that is relevant (whether the bash parser is still exposed to code in any environment variable).
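For reference, the widely circulated one-liner test (a reconstruction; your vendor's advisory may phrase it slightly differently) is:

```shell
# Prints "vulnerable" on an unpatched bash; a fixed bash runs only the
# trailing command and ignores the code smuggled into the variable x.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```

If the output contains "vulnerable", the bash being tested still imports and executes code trailing the function definition.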
Details.
The bug was in the initial implementation of function exporting/importing, introduced on the 5th of August 1989 by Brian Fox and first released in bash-1.03 about a month later, at a time when bash was not in such widespread use, before security was that much of a concern, and before HTTP, the web, or even Linux existed.
From the ChangeLog in 1.05:
Some discussions in gnu.bash.bug and comp.unix.questions around that time also mention the feature.
It's easy to understand how it got there.
bash exports the functions in env vars of the form shown earlier, and on import, all it has to do is interpret that value with the = replaced with a space... except that it should not blindly interpret it.

It's also broken in that in bash (contrary to the Bourne shell), scalar variables and functions have different name spaces. Actually, if you export both a scalar variable and a function with the same name, bash will happily put both in the environment (yes, two entries with the same variable name), but many tools (including many shells) won't propagate them.

One would also argue that bash should use a BASH_ namespace prefix for those, as they are env vars only relevant from bash to bash. rc uses a fn_ prefix for a similar feature.

A better way to implement it would have been to put the definitions of all exported functions in a single variable like:
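A sketch of that idea, using the BASH_FUNCDEFS name mentioned below (hypothetical; bash never implemented it this way):

```shell
# All exported function definitions travel in ONE well-known variable,
# so no arbitrarily named env var can ever carry code.
BASH_FUNCDEFS='myfunc() { echo foo; }
myotherfunc() { echo bar; }'
export BASH_FUNCDEFS

# An importing shell would parse (after sanitizing) only that variable:
eval "$BASH_FUNCDEFS"
myfunc   # prints "foo"
```

The point of the design is that the importer only ever looks at one fixed variable name, instead of treating every environment variable as a potential function definition.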
That would still need to be sanitized, but at least it could not be more exploitable than $BASH_ENV or $SHELLOPTS...

There is a patch that prevents bash from interpreting anything other than the function definition in there (https://lists.gnu.org/archive/html/bug-bash/2014-09/msg00081.html), and that's the one that has been applied in all the security updates from the various Linux distributions.

However, bash still interprets the code in there, and any bug in the interpreter could be exploited. One such bug has already been found (CVE-2014-7169), though its impact is a lot smaller. So there will be another patch coming soon.
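The reproducer that circulated for CVE-2014-7169 (quoted here from memory, so treat the exact string as approximate) abuses an incomplete parse of the imported definition:

```shell
# On a vulnerable bash, the mangled parse causes bash to run "date" with
# its output redirected to a file named "echo" in the current directory.
# On a fixed bash, this simply prints the word "date".
env X='() { (a)=>\' bash -c "echo date"
```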
Until there is a hardening fix that prevents bash from interpreting code in any variable (like the BASH_FUNCDEFS approach above), we won't know for sure that we're not vulnerable to a bug in the bash parser. And I believe there will be such a hardening fix released sooner or later.

Edit 2014-09-28
Two additional bugs in the parser have been found (CVE-2014-718{6,7}) (note that most shells are bound to have bugs in their parser for corner cases, that wouldn't have been a concern if that parser hadn't been exposed to untrusted data).
While all 3 bugs 7169, 7186 and 7187 have been fixed in subsequent patches, Red Hat pushed for the hardening fix. In their patch, they changed the behaviour so that functions were exported in variables called BASH_FUNC_myfunc(), more or less preempting Chet's design decision. Chet later published that fix as an official upstream bash patch.
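You can see the hardened encoding on any current bash (the exact decoration of the variable name, BASH_FUNC_name%% upstream, differs slightly between the Red Hat and upstream variants):

```shell
# After the hardening patch, an exported function no longer lives in a
# plain "foo" variable but in a specially decorated one, so an ordinary
# environment variable can never be mistaken for a function definition.
bash -c 'foo() { echo hi; }; export -f foo; env | grep BASH_FUNC_foo'
```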
That hardening patch, or variants of it, are now available for most major Linux distributions, and it eventually made it to Apple OS X.
That now plugs the concern of any arbitrary env var exploiting the parser via that vector, including two other vulnerabilities in the parser (CVE-2014-627{7,8}) that were disclosed later by Michał Zalewski (CVE-2014-6278 being almost as bad as CVE-2014-6271), thankfully after most people had had time to install the hardening patch.
Bugs in the parser will be fixed as well, but they are no longer that much of an issue now that the parser is no longer so easily exposed to untrusted input.
Note that while the security vulnerability has been fixed, it's likely that we'll see some changes in that area. The initial fix for CVE-2014-6271 has broken backward compatibility in that it stops importing functions with ., : or / in their name. Those can still be declared by bash, though, which makes for inconsistent behaviour. Because functions with . and : in their name are commonly used, it's likely a patch will restore accepting at least those from the environment.

Why wasn't it found earlier?
That's also something I wondered about. I can offer a few explanations.
First, I think that if a security researcher (and I'm not a professional security researcher) had specifically been looking for vulnerabilities in bash, they would have likely found it.
For instance, if I were a security researcher, my approaches could be: look at where bash gets input from and what it does with it (the environment is an obvious one); look at in which places the bash interpreter is invoked and on what data (again, the environment would stand out); consider what happens when bash is setuid/setgid, which makes it an even more obvious place to look.

Now, I suspect nobody thought to consider bash (the interpreter) as a threat, or that the threat could have come that way. The bash interpreter is not meant to process untrusted input.

Shell scripts (not the interpreter) are often looked at closely from a security point of view. The shell syntax is so awkward and there are so many caveats with writing reliable scripts (ever seen me or others mentioning the split+glob operator, or why you should quote variables, for instance?) that it's quite common to find security vulnerabilities in scripts that process untrusted data.
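The split+glob pitfall alluded to above, in its smallest form:

```shell
# Unquoted expansions undergo word splitting (and globbing); quoted ones don't.
f='two words'
set -- $f;   echo "$#"   # unquoted: split into 2 arguments
set -- "$f"; echo "$#"   # quoted: stays 1 argument
```

This is exactly the kind of caveat that makes scripts (rather than the interpreter) the traditional place to hunt for shell security bugs.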
That's why you often hear that you shouldn't write CGI shell scripts, why setuid scripts are disabled on most Unices, and why you should be extra careful when processing files in world-writable directories (see CVE-2011-0441 for instance).

The focus is on the shell scripts, not the interpreter.
You can expose a shell interpreter to untrusted data (feeding foreign data as shell code to interpret) via eval or . or by calling it on user-provided files, but then you don't need a vulnerability in bash to exploit it. It's quite obvious that if you're passing unsanitized data for a shell to interpret, it will interpret it.

So the shell is called in trusted contexts. It's given fixed scripts to interpret and, more often than not (because it's so difficult to write reliable scripts), fixed data to process.
For instance, in a web context, a shell might be invoked in something like:
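A plausible sketch of such an invocation (hypothetical; the sendmail command is stubbed with a shell function here so the sketch is self-contained and runnable):

```shell
# A CGI-era handler fragment: the command line itself is fixed and trusted;
# only environment variables, set by the web server from the request, vary.
sendmail() { cat; }            # stub standing in for /usr/sbin/sendmail -oi
HTTP_USER_AGENT='Mozilla/5.0'  # in real CGI, copied from the request header
printf 'Subject: page hit\nUA: %s\n' "$HTTP_USER_AGENT" | sendmail webmaster
```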
What can possibly go wrong with that? If something wrong is envisaged, it's about the data fed to that sendmail, not how that shell command line itself is parsed or what extra data is fed to that shell. There's no reason you'd want to consider the environment variables that are passed to that shell. And if you do, you realise they're all env vars whose names start with "HTTP_" or are well-known CGI env vars like SERVER_PROTOCOL or QUERY_STRING, none of which the shell or sendmail has any business with.

In privilege elevation contexts, like when running setuid/setgid or via sudo, the environment is generally considered, and there have been plenty of vulnerabilities in the past, again not against the shell itself but against the things that elevate the privileges, like sudo (see for instance CVE-2011-3628).

For instance, bash doesn't trust the environment when setuid or when called by a setuid command (think mount, for instance, which invokes helpers). In particular, it ignores exported functions.

sudo does clean the environment: everything by default except for a white list, and if configured not to, it at least black-lists a few variables known to affect a shell or another (like PS4, BASH_ENV, SHELLOPTS...). It also blacklists the environment variables whose content starts with () (which is why CVE-2014-6271 doesn't allow privilege escalation via sudo).

But again, that's for contexts where the environment cannot be trusted: any variable with any name and value can be set by a malicious user in that context. That doesn't apply to web servers/ssh or all the vectors that exploit CVE-2014-6271, where the environment is controlled (at least the names of the environment variables are controlled...).
It's important to block a variable like echo="() { evil; }", but not HTTP_FOO="() { evil; }", because HTTP_FOO is not going to be called as a command by any shell script or command line, and apache2 is never going to set an echo or BASH_ENV variable.

It's quite obvious some environment variables should be black-listed in some contexts based on their name, but nobody thought that they should be black-listed based on their content (except for sudo). Or, in other words, nobody thought that arbitrary env vars could be a vector for code injection.

As to whether extensive testing when the feature was added could have caught it, I'd say it's unlikely.
When you test for the feature, you test for functionality. The functionality works fine. If you export the function in one bash invocation, it's imported alright in another. Very thorough testing could have spotted issues when both a variable and a function with the same name are exported, or when the function is imported in a locale different from the one it was exported in.
invocation, it's imported alright in another. A very thorough testing could have spotted issues when both a variable and function with the same name are exported or when the function is imported in a locale different from the one it was exported in.But to be able to spot the vulnerability, it's not a functionality test you would have had to do. The security aspect would have had to be the main focus, and you wouldn't be testing the functionality, but the mechanism and how it could be abused.
It's not something that developers (especially in 1989) often have at the back of their minds, and a shell developer could be excused for thinking his software was unlikely to be network-exploitable.