Bash bug: apply Florian's patch now (CVE-2014-6277 and CVE-2014-6278)

OK, rebuild bash now and deploy Florian's unofficial patch or its now-upstream version. If you're a distro maintainer, please consider doing the same.



My previous post has more information about the original vulnerability (CVE-2014-6271). It also explains Tavis' and my original negative sentiment toward the original upstream patch. In short, the revised code did not stop bash from parsing code seen in potentially attacker-controlled, remotely-originating environmental variables; instead, the fix merely sought to harden the parsing to prevent RCE. It relied on two risky assumptions:



  • That, save for the one bug being fixed, the process of parsing attacker-controlled functions is guaranteed to have no side effects on the subsequently executed trusted code.



  • That the underlying parser, despite probably not being designed to deal with attacker-supplied inputs, is free from the usual range of C language bugs.



From the very early hours, we have argued on the oss-security mailing list that a more reasonable approach would be to shield the parser from remotely-originating strings. I proposed putting the function export functionality behind a runtime flag or using a separate, prefixed namespace for the exported functions - so that variables such as HTTP_COOKIE do not go through this code path at all. Unfortunately, we made no real progress on that early in the game.



Soon thereafter, people started to bump into additional problems in the parser code. The first assumption behind the patch - the one about the parsing process not having other side effects - was quickly proved wrong by Tavis, who came up with a code construct that would get the parser in an inconsistent state, causing bash to create a bogus file and mangle any subsequent code that /bin/sh is supposed to execute.



This was assigned CVE-2014-7169 and led to a round of high-profile press reports claiming that we're still doomed, with people assigning the new bug CVSS scores all the way up to 11. The reality was a bit more nuanced: the glitch demonstrated by Tavis' code is less concerning, because it does not translate into a universally exploitable RCE - at least not as far as we could tell. Some uses of /bin/sh would be at risk, but most would just break in a probably-non-exploitable way. The maintainer followed up with another patch that locked down this specific hole.



The second assumption started showing cracks, too. First came a report from Todd Sabin, who identified a static array overflow when parsing more than ten stacked redirects. The bug, assigned CVE-2014-7186, would cause a crash, but given the nature of the underlying assignment, immediate exploitability seemed fairly unlikely. Another, probably non-security-relevant, off-by-one issue with line counting in loops cropped up shortly thereafter (CVE-2014-7187).



The two latter issues did not have an officially released upstream patch at that point, but they prompted Florian Weimer of Red Hat to develop an unofficial patch that takes the seemingly more durable approach we argued for earlier on: putting function exports in a separate namespace. Florian's fix effectively isolates the function-parsing code from attacker-controlled strings in almost all the important use cases we can currently think of.
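On a bash that ships the namespace fix, the isolation is easy to observe from a terminal: exported functions no longer travel as the plain value of an ordinary variable. A minimal sketch (the BASH_FUNC_name%% mangling shown is what the final upstream version uses; Florian's original patch used a slightly different encoding):

```shell
# Export a function, then list the raw environment of a child process.
# On a patched bash, the function rides in a specially mangled variable
# (BASH_FUNC_greet%% upstream) instead of a plain "greet" variable, so
# strings like HTTP_COOKIE never reach the function parser.
bash -c '
  greet() { echo "hi mom"; }
  export -f greet
  env | grep "^BASH_FUNC_greet"
'
```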



(One major outlier would be any solution that relies on blacklisting environmental variables to run restricted shells or restricted commands as a privileged user - sudo-type stuff - but that is a much smaller attack surface and a very dubious security boundary to begin with.)



Well... so, to get to the point: I've been fuzzing the underlying function parser on the side - and yesterday, bumped into a new parsing issue (CVE-2014-6277) that is almost certainly remotely exploitable, and made easier to leverage because bash is seldom compiled with ASLR. I'll share the technical details later on; for now, I have sent the info to the maintainer of bash and to several key Linux distros. In general terms, the bug is an attempt to access uninitialized memory, leading to reads from, and then subsequent writes to, a pointer that is fully within the attacker's control. Here's a pretty telling crash:

bash[3054]: segfault at 41414141 ip 00190d96 ...

Soon after posting this entry, I also bumped into the sixth and most severe issue so far, essentially permitting very simple and straightforward remote code execution (CVE-2014-6278) on systems that are patched against the first bug. It's a "put your commands here" type of bug, similar to the original report. I will post additional details in a couple of days to give people enough time to upgrade.



At this point, I very strongly recommend manually deploying Florian's patch unless your distro is already shipping it. (Florian's patch was also finally included upstream shortly after I first posted this entry.)



From within the shell itself, the simplest way to check if you already have it installed would be:

_x='() { echo vulnerable; }' bash -c '_x 2>/dev/null || echo not vulnerable'

If the command shows "vulnerable", you don't have the patch and you are still vulnerable to a (currently non-public) RCE, even if you applied the original fix (or the subsequent upstream patch that addressed the issue found by Tavis).

Quick notes about the bash bug, its impact, and the fixes so far

We spent a good chunk of the day investigating the now-famous bash bug (CVE-2014-6271), so I had no time to make too many jokes about it on Twitter - but I wanted to jot down several things that have been getting drowned out in the noise earlier in the day.



Let's start with the nature of the bug. At its core, the problem is caused by an obscure and little-known feature that allows bash to export function definitions from a parent shell to child shells, similarly to how you can export normal environmental variables. The functionality in action looks like this:

$ function foo { echo "hi mom"; }
$ export -f foo
$ bash -c 'foo'   # Spawn nested shell, call 'foo'
hi mom

The behavior is implemented as a hack involving specially-formatted environmental variables: in essence, the value of any variable starting with a literal "() {" will be dispatched to the parser just before executing the main program. You can see this in action here:

$ foo='() { echo "hi mom"; }' bash -c 'foo'
hi mom

The concept of granting magical properties to certain values of environmental variables clashes with several ancient customs - most notably, with the tendency for web servers such as Apache to pass client-supplied strings in the environment to any subordinate binaries or scripts. Say, if I request a CGI or PHP script from your server, the env variables $HTTP_COOKIE and $HTTP_USER_AGENT will probably be initialized to the raw values seen in the original request. If the values happen to begin with "() {" and are ever seen by /bin/bash, events may end up taking an unusual turn.



And so, the bug we're dealing with stems from the observation that trying to parse function-like strings received in HTTP_* variables could have some unintended side effects in the shell - namely, it could easily lead to your server executing arbitrary commands trivially supplied in an HTTP header by random people on the Internet.
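This is easy to simulate from a terminal. The following sketch (the variable name and payload are illustrative) mimics what a web server does when it copies a request header into the environment of a /bin/bash child; on a patched bash, the payload is inert:

```shell
# Mimic a web server passing an attacker-supplied header to a bash
# child. On a pre-patch bash, the trailing "echo pwned" would execute
# before the -c script; on a patched one, only "script ran" appears.
HTTP_USER_AGENT='() { :; }; echo pwned' \
  bash -c 'echo "script ran"'
```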



With that out of the way, it is important to note that today's patch provided by the maintainer of bash does not stop the shell from trying to parse the code within headers that begin with "() {" - it merely tries to get rid of that particular RCE side effect, originally triggered by appending commands past the end of the actual function def. But even with all the current patches applied, you can still do this:

Cookie: () { echo "Hello world"; }

...and witness a callable function dubbed HTTP_COOKIE() materialize in the context of subshells spawned by Apache; of course, the name will always be prefixed with HTTP_, so it's unlikely to clash with anything or be called by accident - but intuitively, it's a pretty scary outcome.
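Conversely, on a bash with the later namespace-isolation fix, you can confirm that a header-shaped variable no longer materializes as a function in the child shell (a quick check, assuming a patched /bin/bash):

```shell
# With the namespace fix, a plain HTTP_COOKIE variable that merely
# looks like a function definition is never handed to the parser, so
# no HTTP_COOKIE function exists in the child.
HTTP_COOKIE='() { echo "Hello world"; }' \
  bash -c 'type HTTP_COOKIE 2>/dev/null || echo "no function imported"'
```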



In the same vein, doing this will also have an unexpected result:

Cookie: () { oops

If this is specified on a request to a bash-based CGI script, you will see a scary bash syntax error message in your error log.



All in all, the fix hinges on two risky assumptions:



  1. That the bash function parser invoked to deal with variable-originating function definitions is robust and does not suffer from the usual range of low-level C string parsing bugs that almost always haunt similar code - a topic that, when it comes to shells, hasn't been studied in much detail before now. (In fact, I am aware of a privately made, now-disclosed report of such errors in the parser - CVE-2014-7186 and CVE-2014-7187.)



    Update (Sep 26): I also bumped into what seems to be a separate and probably exploitable use of an uninitialized pointer in the parser code; shared the details privately upstream.



  2. That the parsing steps are guaranteed to have no global side effects within the child shell. As it happens, this assertion has already been proved wrong by Tavis (CVE-2014-7169); the side effect he found probably-maybe isn't devastating in the general use case (at least until the next stroke of brilliance), but it's certainly a good reason for concern.



    Update (Sep 26): Found a sixth and most severe issue that is essentially equivalent to the original RCE on all systems that only have the original, maintainer-provided patch.





Contrary to multiple high-profile reports, the original fix was not "broken" in the sense that there is no universal RCE exploit for it - but if I were a betting man, I would not bet on the patch holding up in the long haul (Update: as noted above, it did not hold up). A more reasonable solution would involve temporarily disabling function imports, putting them behind a runtime flag, or blacklisting some of the most dangerous variable patterns (e.g., HTTP_*); and later on, perhaps moving to a model where function exports use a distinct namespace while present in the environment.
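For the blacklisting option, a crude stopgap can be sketched in a few lines of portable shell. To be clear, this is my own illustration rather than anything proposed verbatim on the list, and a line-based filter like this is not airtight:

```shell
#!/bin/sh
# Unset any environment variable whose value begins with "() {" before
# handing control to anything that might be bash. The seeded
# HTTP_COOKIE below just demonstrates the effect.
HTTP_COOKIE='() { echo pwned; }'; export HTTP_COOKIE

for var in $(env | sed -n 's/^\([A-Za-z_][A-Za-z0-9_]*\)=() {.*/\1/p'); do
  unset "$var"
done

[ -z "${HTTP_COOKIE+x}" ] && echo "HTTP_COOKIE scrubbed"
```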



What else? Oh, of course: the impact of this bug is an interesting story all in itself. At first sight, the potential for remote exploitation should be limited to CGI scripts that start with #!/bin/bash and to several other programs that explicitly request this particular shell. But there's a catch: on a good majority of modern Linux systems, /bin/sh is actually a symlink to /bin/bash!
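A quick way to check which camp a given box falls into:

```shell
# Resolve the /bin/sh symlink chain to see which shell actually backs
# system() and popen() on this machine.
readlink -f /bin/sh
```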



This means that web apps written in languages such as PHP, Python, C++, or Java are likely to be vulnerable if they ever use libcalls such as popen() or system(), all of which are backed by calls to /bin/sh -c '...'. There is also some added web-level exposure through #!/bin/sh CGI scripts, <!--#exec cmd="..."> calls in SSI, and possibly more exotic vectors such as mod_ext_filter.



For the same reason, userland DHCP clients that invoke configuration scripts and use variables to pass down config details are at risk when exposed to rogue servers (e.g., on open wifi). A handful of MTAs, MUAs, or FTP server architectures may also be of concern - in particular, there are third-party reports of qmail installations being at risk. Finally, there is some exposure for environments that use restricted SSH shells (possibly including Git) or restricted sudo commands, but the security of such approaches is typically fairly modest to begin with.



Exposure on other fronts is possible, but probably won't be as severe. The worries around PHP and other web scripting languages, along with the concern for userspace DHCP, are the most significant reasons to upgrade - and perhaps to roll out more paranoid patches, rather than relying solely on the two official ones. On the upside, you don't have to worry about non-bash shells - and that covers a good chunk of embedded systems out there. In particular, contrary to several claims, Busybox should be fine.



Update (Sep 28): the previously-unofficial namespace isolation patch from Florian has eventually made it upstream. You should deploy that patch ASAP.



PS. As for the inevitable "why hasn't this been noticed for 15 years" / "I bet the NSA knew about it" stuff - my take is that it's a very unusual bug in a very obscure feature of a program that researchers don't really look at, precisely because no reasonable person would expect it to fail this way. So, life goes on.

Who's Afraid of Deflation?

Everyone knows that deflation is bad. Bad, bad, bad. Why is it bad? Well, we learned it in school. We learned it from the pundits on the news. The Great Depression. Japan. What, are you crazy? It's bad. Here, let Ed Castranova explain it to you (Wildcat Currency, pp.160-61):

Deflation means that all prices are falling and the currency is gaining in value. Why is this a disaster? ... If you hold paper money and see that it is actually gaining in value, it may occur to you that you can increase your purchasing power--make a profit--by not spending it...But if many people hold on to their money, this can dramatically reduce real economic activity and growth...

In this post, I want to report some data that may lead people to question this common narrative. Note, I am not saying that there is no element of truth in the interpretation (maybe there is, maybe there isn't). And I do not want to question the likely bad effects that come about owing to a large unexpected deflation (or inflation).  What I want to question is whether a period of prolonged moderate (and presumably expected) deflation is necessarily associated with periods of depressed economic activity. Most people certainly seem to think so. But why?

The first example I want to show you is for the postbellum United States (source):


Following the end of the U.S. civil war, the price level (GDP deflator) fell steadily for 35 years. In 1900, it was close to 50% of its 1865 value. In the meantime, real per capita GDP grew by 85%. That's an average annual growth rate of about 1.8% in real per capita income. The average annual rate of deflation was about 2%. I wonder how many people are aware of this "disaster"?
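The compounding is easy to verify (a quick sanity calculation of my own, not from the original post):

```shell
# 2% annual deflation compounded over 35 years leaves the price level
# at roughly half its starting value; 1.8% annual growth compounds to
# slightly above the 85% cumulative gain cited above.
awk 'BEGIN {
  printf "price level after 35 years at -2%%/yr:  %.2f\n", 0.98 ^ 35
  printf "real income after 35 years at +1.8%%/yr: %.2f\n", 1.018 ^ 35
}'
```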

O.K., well maybe that was just long ago. Sure. Let's take a look at some more recent data from the United States, the United Kingdom, and Japan. The sample period begins in 2009 (the trough of the Great Recession) and ends in late 2013. Here is what the price level dynamic looks like since 2009:


Over this five year period, the price level is up about 7% in the United States and about 11% in the United Kingdom. As for Japan, well, we all know about the Japanese deflation problem. Over the same period of time, the price level in Japan fell by almost 7%.

Now, I want you to try to guess what the recovery dynamic--measured in real per capita GDP--looks like for each of these countries. Surely, the U.K. must be performing relatively well, Japan relatively poorly, and the U.S. somewhere in the middle?

You would be correct in supposing that the U.S. is somewhere in the middle:


But you would have mixed up the U.K. with Japan. Since the trough of the past recession, Japanese real per capita GDP is up 15% (as of the end of 2013)--roughly a 3% annual growth rate. Is deflation really so bad? Maybe the Japanese would like U.K.-style inflation instead? I don't get it.

I have some more evidence to contradict the notion of deflation discouraging spending (transactions). The evidence pertains to Bitcoin and the data is available here: Blockchain.

Many people are aware of the massive increase in the purchasing power of Bitcoin over the past couple of years (i.e., a massive deflationary episode). As is well-known, the protocol is designed such that the total supply of bitcoins will never exceed 21M units. In the meantime, this virtual currency and payment system continues to see its popularity and use grow.


One might think that, given the prospect of continued long-run deflation--i.e., price appreciation (it's hard to believe that holders of bitcoin are thinking anything else)--people would generally be induced to hoard and not spend their bitcoins. And yet, the available data seems to suggest that this may not be the case:


Maybe deflation is not so bad after all?  Let's hope so, because we may all have to start getting used to the idea!

Additional readings:
[1] Good vs. Bad Deflation: Lessons from the Gold Standard Era (Michael Bordo and Angela Redish).

[2] Deflation and Depression: Is There an Empirical Link? (Andy Atkeson and Pat Kehoe).

[3] The Postbellum Deflation and its Lessons for Today (David Beckworth).

Unserious criticism

The Oil Fund, as it is managed today, has a nearly infinite time horizon. The past week's debate about the use of oil money, however, shows how much political significance short-term fluctuations can have. Political considerations may therefore impose substantial constraints on the fund's investment strategy.

It is therefore far from certain that it makes sense for the fund to take on more short-term risk in order to exploit its long-term perspective, as the fund's strategy council recommends. Large swings in the fund's value could trigger demands for changes in management strategy or for larger withdrawals.

One measure of long-term orientation is the average payback time. If, for example, you lend money interest-free for 10 years, the average payback time is 10 years. If you receive interest along the way, the average becomes somewhat less than 10 years.
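The loan example can be made concrete with a small calculation (the 4% coupon is an illustrative number of my own):

```shell
# A 10-year bullet loan: annual coupons of 4 per 100 lent, principal
# repaid at year 10. The cash-flow-weighted average payback time comes
# out somewhat below 10 years, as described above.
awk 'BEGIN {
  n = 10; coupon = 4; principal = 100
  for (t = 1; t <= n; t++) { wsum += t * coupon; cf += coupon }
  wsum += n * principal; cf += principal
  printf "average payback time: %.1f years\n", wsum / cf
}'
```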

This can be calculated for the Oil Fund. If we assume a return on, and withdrawals from, the Oil Fund of four percent each year until the sun swallows the earth, the average payback time is 650 years. That is perpetual enough for most people.

But the Oil Fund is not like a loan. It consists of risky investments whose value fluctuates considerably from one year to the next. The fiscal rule (handlingsregelen) says we can spend four percent of the fund each year. But if the fund's value changes a great deal from one year to the next, there will be large variations in how many billions those four percent represent.

The figure shows the development of the Oil Fund's value together with the use of oil money. There is a fairly clear inverse relationship here: when stock markets rise, the use of oil money falls below the fiscal rule. The most important explanation for the variation in actual oil money spending thus appears to be the fluctuations in the world's financial markets.

The Ministry of Finance thus supplies the state budget with a stably growing cash flow from the oil sector. Today we spend almost five times as much oil money as in 2002. Swings in the mood of the financial markets, however, mean that oil money spending as a percentage of the fund varies a great deal. This means that the fiscal rule is being practiced as it was probably intended: as a rule for the long-term phasing-in of oil money.

Alternatively, we could imagine a more mechanical rule in which we spend exactly four percent of the fund each year, regardless of whether the world's stock markets are gripped by euphoria or depression.

Such a practice would, however, run into major practical problems. Should tunnel projects, for example, be halted during downturns in the world's financial markets, only to be restarted when optimism returns?

In an increasingly integrated and globalized world, we will also find that when the world economy does poorly, Norway struggles too, even though the experience of the financial crisis was different. There is no guarantee we will be spared next time. It would then be unfortunate to have a rule that implied enormous cuts in the public sector at the very worst point of a crisis, just because the market value of the Oil Fund had collapsed at the same time.

Another problem is that such a yo-yo principle would create problems for the management of the fund itself. If withdrawals are determined by the past year's losses or gains, returns will become more unstable. In that case, the fund would have to reduce its risk, which in turn means lower returns.

It is therefore fairly obvious that withdrawals cannot be based on the fund's market value at any given moment. The criticism of Finance Minister Siv Jensen from forces within her own party therefore comes across as quite unreasonable. One can of course hold different views on how large the withdrawal should be. Some believe the fund should be drawn down over time, while others believe a lower but perpetual withdrawal is right. A debate about whether the withdrawal over time should be higher or lower than four percent is entirely legitimate.

But criticizing the Ministry of Finance for not automatically adjusting the use of oil money up to around four percent of the fund on account of the stock market party seems unserious. Carl I. Hagen habitually compares the Ministry of Finance's economists to the tailors in "The Emperor's New Clothes". To me, however, it looks as if the tailors have done an absolutely excellent job here.


CVE-2014-1564: Uninitialized memory with truncated images in Firefox

The recent release of Firefox 32 fixes another interesting image parsing issue found by american fuzzy lop: following a refactoring of memory management code, the past few versions of the browser ended up using uninitialized memory for certain types of truncated images, which is easily measurable with a simple <canvas> + toDataURL() harness that examines all the fuzzer-generated test cases.



In general, problems like that may leak secrets across web origins, or more prosaically, may help attackers bypass security measures such as ASLR. For a slightly more detailed discussion, check out this post.



Here's a short proof-of-concept that should work if you haven't updated to 32 yet:



This is tracked as CVE-2014-1564, Mozilla bug 1045977. Several more should be coming soon.

Some notes on web tracking and related mechanisms

Artur Janc and I put together a nice, in-depth overview of all the known fingerprinting and tracking vectors that appear to be present in modern browsers. This is an interesting, polarizing, and poorly-studied area; my main hope is that the doc will bring some structure to the discussions of privacy consequences of existing and proposed web APIs - and help vendors and standards bodies think about potential solutions in a more holistic way.


That's it - carry on!