Cookies v. The People

For some reason, The Wall Street Journal launched a new, large-scale offensive on web cookies - and decided to focus on the purported malice of Microsoft in particular:


"All the latest Web browsers, including Internet Explorer, let consumers turn on a feature that prevents third-party browser cookies from being installed on their computers. But those settings aren't always easy to find. Only one major browser, Apple's Safari, is preset to block all third-party cookies, in the interest of user privacy.


The Internet Explorer planners proposed a feature that would block any third-party content that turned up on more than 10 visited websites, figuring that anything so pervasive was likely to be a tracking tool.


When he heard of the ideas, Mr. McAndrews, the executive involved with Microsoft's Internet advertising business, was angry, according to several people familiar with the matter. Mr. McAndrews feared the Explorer group's privacy plans would dramatically reduce the effectiveness of online advertising by curbing the data that could be collected about consumers."



I do not have any insight into the decision process behind browser features at Microsoft - and it would be unfortunate if this factor alone had such a significant bearing on the final outcome. I do know, however, that the characterization of third-party cookie blocking as an important privacy feature is grossly misguided at best - and that there are compelling technical arguments to be made in favor of not enabling it by default.


The fundamental problem is that for better or worse, browsers necessarily make it trivial to track users across cooperating websites, without any need for the actors to appear malicious or evil. Quite simply, every computer system is unique, and browsers, by design, offer a substantial insight into it: very few other people share exactly the same browser and OS version, uptime, browser window size, installed fonts and applications as you - and so, reliable browser instance fingerprinting is certainly not science fiction.
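The fingerprinting idea can be sketched in a few lines; all attribute values below are made up for illustration, and real-world fingerprinting (e.g., EFF's Panopticlick experiment) collects far more signals than this:

```python
# Toy illustration of browser-instance fingerprinting: combining several
# individually unremarkable attributes yields a fairly distinctive hash.
# Every attribute value here is invented for the example.
import hashlib

def fingerprint(attrs: dict) -> str:
    # Sort keys so the same attribute set always hashes identically.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

alice = {
    "user_agent": "Mozilla/5.0 ... Firefox/3.6",
    "screen":     "1280x800x24",
    "timezone":   "UTC-8",
    "fonts":      "Arial,Consolas,Wingdings",
}
bob = dict(alice, fonts="Arial,Comic Sans MS")   # a single attribute differs

assert fingerprint(alice) == fingerprint(alice)  # stable for one instance
assert fingerprint(alice) != fingerprint(bob)    # distinguishes the two users
```

No cookies are involved at any point - the identifier is derived purely from properties the browser reveals as a matter of course.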


This obvious possibility aside, there are many types of core web features that offer functionality essentially identical to cookies, and are depended on by much of the Internet; for example, RFC 2616 caching allows long-lived tokens to be stored and retrieved through HTTP headers such as ETag, or simply embedded in persistently cached JavaScript code. The only reason why cookies are preferred is that they are well-known, purpose-built, have well-understood security properties, and can be managed by users easily. I encourage you to check out Ed Felten's excellent essay for more: the alternatives are very easy for trackers to embrace, but suck a lot more for consumers.
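As a minimal, hypothetical sketch of the ETag trick: the server hands a first-time visitor a unique cache validator, and every subsequent conditional request echoes it back - no cookies required. The function names and in-memory storage are illustrative, not any production design:

```python
# Cookie-less tracking via HTTP cache validators (ETag / If-None-Match).
# The browser dutifully echoes a previously issued ETag back in the
# If-None-Match header, which lets the server recognize a returning
# visitor without ever setting a cookie.
import secrets

_known_tokens = set()   # stand-in for a server-side visitor database

def handle_request(if_none_match=None):
    """Return (status, etag) for a request to a trackable resource."""
    if if_none_match and if_none_match in _known_tokens:
        # Returning visitor: the cache validator doubles as a user ID.
        return 304, if_none_match
    token = secrets.token_hex(16)        # unique per first-time visitor
    _known_tokens.add(token)
    return 200, token                    # sent back in the ETag header

status, etag = handle_request()                       # first visit
status2, etag2 = handle_request(if_none_match=etag)   # revisit, recognized
assert (status, status2) == (200, 304) and etag2 == etag
```

Clearing cookies does nothing against this; only purging the browser cache breaks the link.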


It is possible to build a reasonably anonymous browser, but only by crippling many of the essential features that make the modern web tick; products addressed to the general public should probably not go there. Disabling third-party cookies alone feels like a knee-jerk reaction that really does nothing to improve your privacy - and actually impacts your security. A striking example is that a ban on third-party cookies makes it very difficult to create XSRF-resilient single sign-on systems for complex, SOP-compartmentalized web applications (at least unless you introduce a dependency on JavaScript - the other Great Satan of the Internet).


To add insult to injury, because of compatibility issues, the existing third-party cookie blocking mechanisms have gradually morphed into honor systems anyway: one implementation allows cookies to be set once the third-party frame is interacted with (which can be arranged without user knowledge by having a transparent, invisible frame follow the mouse pointer for a while). Another allows cookies to be read and modified after the initial visit to a particular "third-party" site. Yet another implementation allows servers to declare good intentions by specifying a special HTTP header (P3P) to simply bypass the mechanism.


Given the way the web works, the most realistic way to improve user privacy is to create a community standard for notifying well-behaved players of your privacy preferences, and allowing them to comply. This will actually work better than the inevitable technological whack-a-mole with cookie-equivalent mechanisms: malicious parties will be able to track you for the foreseeable future anyway - but with explicit preference declarations, parties who want to be seen as reputable could no longer assume that cookies are blocked simply because that is how your browser ships, and switch in good conscience to an alternative tracking mechanism. Commercial search engines obey robots.txt, so this system has a chance of working, too. If you disagree and distrust corporations, legislative approaches to privacy protection may be your only remaining bet.


Speaking of advisory privacy mechanisms, Microsoft actually deserves some credit rather than blame - namely, for supporting the aforementioned P3P signaling in their products: the associated HTTP headers are used to make cookie policy decisions in Internet Explorer, and in no other browser. Alas, the protocol is a bit of a cautionary tale in itself: W3C attempted to create a complex, all-encompassing, legally binding framework to compel businesses to make honest, site-wide declarations - and the concept eventually collapsed under its own weight. Large businesses are extremely hesitant to use P3P for fear of increasing their legal exposure, while small-scale web developers are simply intimidated by the monumental 110-page specification, and copy recipes from random places on the web with little or no regard for their intended meaning.
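To illustrate, a third-party response that wants its cookies honored under Internet Explorer's default privacy setting ships a "compact policy" header roughly like this; the token set shown is illustrative, and a real deployment is supposed to mirror the site's actual full P3P policy:

```python
# Sketch of the P3P compact-policy mechanism: at its default (Medium)
# privacy level, IE silently drops third-party cookies from responses
# that carry no CP declaration. The token set below is illustrative.
def third_party_response_headers(session_id: str) -> dict:
    return {
        "Set-Cookie": f"sid={session_id}; path=/",
        # Self-declared privacy practices; nothing verifies them.
        "P3P": 'CP="NOI DSP COR NID"',
    }

headers = third_party_response_headers("abc123")
assert headers["P3P"].startswith('CP="')
assert "sid=abc123" in headers["Set-Cookie"]
```

Note that nothing in the browser checks the declaration against reality - which is exactly why the mechanism amounts to an honor system.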


So yeah, privacy is hard. Blaming a browser vendor is easy. It's just not very productive.

"Testing takes time"

When explaining why it is not possible to meet a particular vulnerability response deadline, most software vendors inevitably fall back to a very simple and compelling argument: testing takes time.


For what it's worth, I have dealt with a fair number of vulnerabilities on both sides of the fence - and I tend to be skeptical of such claims: while exceptions do happen, many of the disappointing response times appeared to stem from trouble allocating resources to identify and fix the problem, and had very little to do with testing the final patch. My personal experiences are necessarily limited, however - so for the sake of this argument, let's take the claim at face value.


To get to the root of the problem, it is important to understand that software quality assurance is an imperfect tool. Faulty code is not written to intentionally cripple the product; it's a completely unintended and unanticipated consequence of one's work. The same human failings that prevent developers from immediately noticing all the potential side effects of their code also put limits on what's possible in QA: there is no way to reliably predict what will go wrong with modern, incredibly complex software. You have to guess in the dark.


Because of this, most corporations simply learn to err on the side of caution: settle on a maximum realistically acceptable delay between code freeze and a release (one that still keeps you competitive!) - and then structure the QA work to be compatible with this plan. There is nothing special about this equilibrium: given resources, there is always much more to be tested; and conversely, many of the current steps could probably be abandoned without affecting the quality of the product. It's just that going in that first direction is not commercially viable - and going in the other just intuitively feels wrong.


Once a particular organization has such a QA process in place, it is tempting to treat critical security problems similarly to feature enhancements: there is a clear downside to angering customers with a broken fix; on the other hand, as long as vulnerability researchers can be persuaded to engage in long-term bug secrecy, there is seemingly no benefit in trying to get this class of patches out the door more quickly than the rest.


This argument overlooks a crucial point, however: vulnerabilities are obviously not created by the researchers who spot them; they are already in the code, and tend to be rediscovered by unrelated parties, often at roughly the same time. Hard numbers are impossible to arrive at, but based on my experience, I expect a sizable fraction of currently privately reported vulnerabilities (some of them known to vendors for more than a year!) to be independently available to multiple actors - and the longer these bugs are allowed to persist, the more pronounced this problem is bound to become.
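As a back-of-envelope illustration (my own modeling assumption, not data): if independent rediscoveries of a given bug arrive as a Poisson process with some annual rate, the odds that at least one other party already knows about a "secret" bug grow steadily with the length of the embargo:

```python
# Hypothetical rediscovery model: independent rediscoveries arrive as a
# Poisson process with rate r per year, so the probability that at least
# one other party knows the bug after t years is 1 - exp(-r * t).
import math

def p_rediscovered(rate_per_year: float, years: float) -> float:
    return 1.0 - math.exp(-rate_per_year * years)

# Even a modest rediscovery rate compounds quickly with secrecy:
assert round(p_rediscovered(0.5, 1.0), 2) == 0.39   # ~39% after one year
assert p_rediscovered(0.5, 2.0) > p_rediscovered(0.5, 1.0)
```

The rate parameter is anyone's guess, but the qualitative conclusion holds for any positive value: the longer a fix is deferred, the more likely the "private" bug is private no more.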


If this is true, then secret vulnerabilities pose a definite and extremely significant threat to the IT ecosystem. In many cases, this risk is far greater than the speculative (and never fully eliminated) risk of occasional patch-induced breakage; particularly when one happens to be a high-profile target.


Vendors often frame the dilemma the following way:


"Let's say there might be an unspecified vulnerability in one of our products.


Would you rather allow us to release a reliable fix for this flaw at some point in the future; or rush out something potentially broken?"


Very few large customers will vote in favor of dealing with a disruptive patch - IT departments hate uncertainty and fire drills; but I am willing to argue that a more honest way to frame the problem would be:


"A vulnerability in our code allows your machine to be compromised by others; there is no widespread exploitation, but targeted attacks are a tangible risk to some of you. Since the details are secret, your ability to detect or work around the flaw is practically zero.


Do you prefer to live with this vulnerability for half a year, or would you rather install a patch that stands an (individually low) chance of breaking something you depend on? In the latter case, the burden of testing rests with you.


Or, if you are uncomfortable with the choice, would you be inclined to pay a bit more for our products, so that we can double our QA headcount instead?"


The answer to that second set of questions is much less obvious - and more relevant to the problem at hand; depriving the majority of your customers of this choice, and then effectively working to conceal this fact, just does not feel right.


Yes, quality assurance is hard. It can also be expensive to better parallelize or improve automation in day-to-day QA work; and it is certainly disruptive to revise the way one releases and supports products (heck, some vendors still prefer to target security fixes for the next major version of their application, simply because that's what their customers are used to). It is also likely that if you make any such profound changes, something will eventually go wrong. None of these facts makes the problem go away, though.


Indefinite bug secrecy hurts us all by removing all real incentives for improvement, and giving very little real security in return.

Rebooting responsible disclosure!

I am very proud to see this official blog post out:



I am proud of this post not because it adds yet another voice to the ongoing debate; I am proud because I think it is important and significant for a major commercial vendor to suck it up - and take a genuine, passionate stand on behalf of all users instead.


The rhetoric invoked by most software vendors today is very one-sided, and aims to portray the disclosure debate as a petty feud with selfish, arrogant researchers oblivious to the realities of doing business. I find this disingenuous - and so, I sincerely hope that this blog post sets a brand new stone rolling.


PS. On a related note: it's now $3,133.7!

Guerrilla CNC home manufacturing guide

There are about three people in the world who could possibly ever care about this epic work - so today, I am happy to unveil my least useful project to date: the 70,000-word CNC machining and resin casting guide for hobbyist robot builders:



You can also check out my current project.


Thank you. We now resume our regularly scheduled programming.

Hi! I'm a security researcher, and here's your invoice.

It always struck me as a simple deal: there are benefits to openly participating in the security research community - peer recognition and job opportunities. There is also a cost of doing it as a hobby - loss of potential income in other pursuits. After having made a name for themselves, some people decide that the benefits no longer offset the costs - and stop spending their time on non-commercial projects. Easy, right?


Well, this is not what's on the minds of several of my respected peers. Sometime in 2009, Alex Sotirov, Charlie Miller, and Dino Dai Zovi announced that there would be no more free bugs; in Charlie's own words:


"As long as folks continue to give bugs to companies for free, the companies will never appreciate (or reward) the effort. So I encourage you all to stop the insanity and stop giving away your hard work."


The three researchers did not feel adequately compensated for their (unsolicited) research, and opted not to disclose this information to vendors or the public - but continued the work in private, and sometimes boasted about their inherently unverifiable, secret finds.


Is this a good strategy? I think it is important to realize that many vendors, being driven by commercial incentives, spend exactly as much on security engineering as they think is appropriate - and this is influenced chiefly by external factors: PR issues, contractual obligations, regulatory risks. Full disclosure puts many of the poor performers under intense public scrutiny, and may force them to try harder and hire security talent (that's you!).


Precisely because of this unwanted pressure, vendors do not inherently benefit from such unsolicited services, and will probably not work with you to nurture them: if you "threaten" them by promising to essentially stop being a PR problem (unless compensated) - well, don't be surprised if they do not call back soon with a counter-offer.


Having said that, there is an interesting way one could make this work: the "pay us or else..." approach - where the "else" part may be implied to mean:


  • Selling the information to unnamed third parties, to use it as they see fit (with potential consequences to the vendor's customers),


  • Shaming the vendor in public to suggest negligence ("company X obviously values customer safety well below our $10,000 asking price"),


  • Simply telling the world, without giving the vendor a chance to respond, if your demands are not met.

There's only one problem: I think these tricks are extremely sleazy. There are good and rather uncontroversial reasons why disclosing true information about an individual is often legal, but engaging in blackmail never is; the parallels here are really easy to draw.


This is why I am disappointed by the news of VUPEN apparently adopting a similar strategy (full article); and equally disappointed by how few people called it out:


"French security services provider VUPEN claims to have discovered two critical security vulnerabilities in the recently released Office 2010 – but has passed information on the vulnerabilities and advice on mitigation to its own customers only. For now, the company does not intend to fill Microsoft in on the details, as they consider the quid pro quo – a mention in the credits in the security bulletin – inadequate.


'Why should security services providers give away for free information aimed at making paid-for software more secure?' asked [VUPEN CEO] Bekrar."


Here's the thing: security researchers don't have to give any information away for free; but if you need to resort to arm-twisting tactics to sell a service, you have some serious soul searching to do.

A clarification

In his reply to me in Kapital no. 12, Petter Berge of Northern Capital attempts to cast doubt on the facts I have pointed out about active versus passive management. Once again, Berge is mostly wrong.

In his original comment, for example, Berge wrote of Deutsche Bank's (DB) fee-free index funds that "even though they charge no management fee, they keep the dividends from the shares for themselves". But as Nikolai Gemerott of DB explains: "… Hence we do not retain dividends from the fund as this would be illegal."

When I then point out that Berge's information is wrong, the argument shifts to DB using an incorrect benchmark index that is not dividend-adjusted. It is somewhat unclear to me how a wrong benchmark can, in itself, harm the investor. It can of course make the fund look better than it really is, but the return is determined by the management of the fund itself. In any case, Berge gets it wrong here too. The source he relies on states: "The indices are calculated as either price or total return indices. The difference is based on the different treatment of dividend payments on the Index Securities". One can thus choose between two index funds: one in which the dividends are reinvested and one in which they are paid out. For the latter fund, a price index that is not dividend-adjusted is naturally used as the benchmark.

Some information about the fund can be found in this article. As one will see, it is unlikely that the use of derivatives is a major cost driver for the fund, since it has beaten its index by 0.61% from its inception in 2007 through August 2009.

Berge then argues that Vanguard's VITPX fund is not relevant because it represents only a small part of the market, but the fund tracks an index covering 99.5% of the market capitalization of US listed companies. This figure is not taken from outdated marketing material, as Berge seems to believe, but from the fund's "Fact Sheet", which can be downloaded from its home page. I naturally assume that the information Vanguard provides about its own products is correct.

Berge also raises an entirely new argument. It turns out that Vanguard's world index fund VTWIX costs 0.25%, which is more than the oil fund pays. Does this mean that passive management does not work for a world index? Perhaps, but the most obvious explanation is that VITPX is 140 times larger than the world fund. Unlike active management, passive management enjoys substantial economies of scale. If the same costs must be spread over a few thousandths of the capital, the cost naturally becomes high. For the oil fund, there is nevertheless every reason to believe that an investment in VTWIX could be made at a considerably lower cost. Vanguard has indicated to me that global management can be done for 0.01-0.02% if the investment amount is of the oil fund's dimensions.
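The economies-of-scale point can be illustrated with hypothetical numbers (not Vanguard's actual cost structure): a largely fixed cost of running an index, spread over a fund 140 times smaller, yields an expense ratio roughly 140 times higher:

```python
# Illustrative arithmetic only - both the cost figure and the fund sizes
# are invented to show how a fixed cost scales with assets under management.
import math

def expense_ratio(fixed_cost: float, aum: float) -> float:
    return fixed_cost / aum

big_fund   = 140e9            # hypothetical AUM of the larger fund
small_fund = big_fund / 140   # the world fund, 140 times smaller
fixed_cost = 2.5e6            # hypothetical annual cost of tracking the index

r_big   = expense_ratio(fixed_cost, big_fund)
r_small = expense_ratio(fixed_cost, small_fund)

assert math.isclose(r_small / r_big, 140.0)  # ratio scales with 1/AUM
assert r_big * 100 < 0.01                    # well under one basis point
```

The same fixed workload that is negligible for the large fund dominates the expense ratio of the small one, which is why fund size, not the passive strategy itself, drives the 0.25% figure.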

This entire debate would, however, have been unnecessary if the Ministry of Finance decided to outsource part of the management to an external passive fund such as Vanguard. That way, we would know exactly what the costs of passive management were, and active management could also continue within NBIM. One wonders why there is such strong resistance to such a simple solution?