Address bar and the sea of darkness

The current contents of the address bar are our only god.


Really. There is nothing else: browsers have no other universal, reliable indicator of content origin, and offer no way to predict where you will be taken next. People who do not understand this, or who do not understand URL syntax, will suffer. Over and over again.


It is fair to note that way too many users fall into this category; in fact, even the experts can't always be sure. Guess where the following URLs will take you in MSIE, Firefox, and Chrome:


  • http://example.com\@coredump.cx/
  • http://example.com;.coredump.cx/


Chances are, you got the answers wrong. The problem is easy to pin squarely on the users, but it's the geeks who created the huge gap between the skill level needed to operate a browser proficiently and the skill level required to do so safely. The health of the entire networked ecosystem suffers as a result.
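

To see how much room for confusion there is, it helps to look at what a well-specified parser makes of these strings. The sketch below (a minimal illustration, not anything from the original text) runs both URLs through Python's standard urllib.parse; its answers are well-defined, yet they need not match what any particular browser will actually navigate to.

    # A minimal sketch: how Python's standard-library URL parser reads the two
    # examples above. Its answers are well-defined, but they do not necessarily
    # match what any given browser does with the same strings.
    from urllib.parse import urlsplit

    urls = [
        r"http://example.com\@coredump.cx/",
        "http://example.com;.coredump.cx/",
    ]

    for url in urls:
        parts = urlsplit(url)
        # urlsplit treats everything up to the first '/', '?' or '#' as the
        # authority section, and anything before the last '@' in it as credentials.
        print(url)
        print(f"  netloc   = {parts.netloc!r}")
        print(f"  hostname = {parts.hostname!r}")
        print(f"  path     = {parts.path!r}")

Here, urlsplit attributes the first URL to coredump.cx (the backslash ends up as part of the bogus login credentials), while a browser that treats the backslash as a path separator lands on example.com instead; the second URL yields a "hostname" with a semicolon in it, and what a given browser makes of that is anyone's guess - which is exactly the point.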


This gap is one of the great unsolved problems in information security - and it calls for fundamental changes to how web browsers interact with users and identify sites. Alas, not every quick kludge is a good one: careless users will be just as doomed if we outlaw HTTP authentication, change onclick behavior, rework tooltips, or close every open redirector in the world. The remaining few hundred pages of the relevant RFCs will keep the world interesting enough as it is. Please, pick your battles wisely.

Vulnerability trading markets and you

There is something interesting going on in the security industry: we are witnessing the rapid emergence of vulnerability trading markets. Perhaps hundreds of security researchers now routinely sell exploits to intermediaries for an easy profit (anywhere from $1,000 to $50,000), instead of the more usual practice of talking to the vendors or announcing their findings publicly. The buyers in turn resell the knowledge to unspecified end users, most likely at several times the original price tag. Some of the intermediaries may eventually release the information to the public; others withhold it indefinitely. The latter bunch is willing to pay you a lot more.


Curiously, both classes of intermediaries often ask for weaponized, multi-platform exploits, and not just a nice write-up on the nature of the glitch. Why? A case could be strenuously made for some uses in the IDS industry, but I do not find it all that believable. More likely, at the end of the chain, you will find buyers with questionable intentions and a clear business reason to justify the significant expense while maintaining anonymity. When asked about their clients, the intermediaries usually allude to unspecified government agencies - but even if this somewhat uncomfortable claim is true, the researcher does not get to choose which government he may be aiding with his work.


Many people find it difficult to sympathize with Jethro's legal troubles: he did not hesitate to take cash for an exploit that he had every reason to suspect would be used for illegal purposes. Are the proxy arrangements practiced in the institutionalized exploit trade really that different? I'm not sure: can the sellers honestly claim to understand who wants these exploits, and why these tools happen to be so unusually valuable? And if not, should they be selling them to the highest bidder, no questions asked?


Of course, there is an argument made by Charlie Miller and several other researchers that vendors should not be entitled to free vulnerability research services from the security community. Maybe so - although it's worth noting that researchers profit from that bona fide work by gaining recognition and respect, and by landing cool jobs later on; vendors gain much less from the extra public scrutiny, and some of them would probably prefer for this "free" arrangement to go away completely. But in any case, I do not think this argument genuinely supports the idea of selling the information to third parties with no regard for how it may be used: it may be legal, and it may be profitable, but it certainly does not feel right.

Responsibilities in vulnerability disclosure

The debate around responsible disclosure is as old as the security industry itself, and unlikely to be settled any time soon. Tellingly, both sides claim to be driven by the same motive - keeping users safe. Yet each accuses the other of saying so under false pretenses: vendors and businesses see full disclosure proponents as attention whores, while researchers think vendors care only about PR and legal liability damage control.


The controversy will continue, and it would be pointless to recapitulate it here. Having said that, I take issue with one of the common assumptions made in this debate: the belief that vulnerabilities are unlikely to be discovered by multiple parties at once, and that the original finder of a flaw is therefore in a unique position to control the information. Intuitively, it sounds pretty reasonable: security research is hard, and the necessary skills are nearly impossible to formalize or imitate. The press, in particular, likes to think of vulnerability finding as an arcane form of art. But is it really?


Over the years, I have probably found over 200 vulnerabilities in high-profile client- and server-side apps. I think that is a pretty good data set to work with - and curiously, I am strongly convinced that none of these findings should be attributed to any unique skill of mine. It seems that the vast majority of them were simply a matter of the security community reaching a certain critical body of knowledge - a better understanding of what can go wrong, where to look for it, and how to automate the testing with simple fuzzers and similar validation frameworks. At that point, finding bugs is just a matter of picking a target to go after; who happens to be behind the wheel is largely immaterial.
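

For the sake of illustration, here is roughly what a "simple fuzzer" amounts to - a hypothetical sketch rather than any specific tool: it takes a valid sample file, corrupts a few random bytes, hands the result to a target program, and keeps any input that makes the target die on a signal.

    # A hypothetical, bare-bones mutation fuzzer, sketched only to show how little
    # machinery "simple fuzzers" really require. The target command and the sample
    # file are placeholders, not references to any real tool.
    import os
    import random
    import subprocess
    import sys

    def mutate(data, max_flips=16):
        """Overwrite a handful of random bytes in an otherwise valid input."""
        buf = bytearray(data)
        for _ in range(random.randint(1, max_flips)):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    def fuzz(target_cmd, sample_path, iterations=1000, workdir="fuzz_out"):
        os.makedirs(workdir, exist_ok=True)
        with open(sample_path, "rb") as f:
            sample = f.read()
        for i in range(iterations):
            case = os.path.join(workdir, "case.bin")
            with open(case, "wb") as f:
                f.write(mutate(sample))
            proc = subprocess.run(target_cmd + [case],
                                  stdout=subprocess.DEVNULL,
                                  stderr=subprocess.DEVNULL)
            # On POSIX, a negative return code means the target died on a signal
            # (e.g. SIGSEGV); keep the offending input around for later triage.
            if proc.returncode < 0:
                crash = os.path.join(workdir, f"crash_{i}.bin")
                os.replace(case, crash)
                print(f"signal {-proc.returncode}, input saved to {crash}")

    if __name__ == "__main__":
        # Usage (hypothetical target): python fuzz.py ./parse_image sample.png
        fuzz([sys.argv[1]], sys.argv[2])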


What's more, I have found that when you go after a sufficiently buggy and complex application, most of the problems you find turn out to be dupes of what other researchers discovered weeks or months earlier. This pattern proved particularly prevalent in the browser world, where I had multiple bug collisions with the likes of Georgi Guninski and Amit Klein.


I suspect the same could be said by a vast majority of other security researchers - though not all of them are willing to make the same self-deprecating admission in public. Sadly, by enjoying being portrayed as wizards, we also make it easier for vendors to advocate the view that the discovery of a vulnerability is what creates the threat - and that researchers therefore have an obligation to wait indefinitely in order to protect users against attacks.


While giving a responsive vendor some advance notification is often a good idea, creating social pressure on researchers to wait for patches removes any incentive for vendors to respond in a timely manner. This would not be a problem if vendors were consistently awesome - but today they certainly aren't. We commonly see some of the leading proponents of responsible disclosure take anywhere from six months to two years to address even fairly simple, high-risk bugs - and seldom face any criticism for it. Researchers who behave "irresponsibly", on the other hand, are called names if they are lucky, and formally or informally threatened if they are not.


Vulnerability disclosure, however it is done, does not make you less secure. More often than we are willing to admit, it merely brings an existing risk out of the thriving underground market and into the spotlight. Naturally, this can be disruptive in the short run, which is why the practice is controversial; it is certainly easier not to have to scramble to fix an issue on short notice. That said, timely and verbose disclosure also levels the playing field by keeping vendors accountable, and by giving all users the information they need to limit their exposure - even if that means not using a particular service until a fix is available.

In the trenches with the oil fund

Norges Bank Investment Management (NBIM), which manages the oil fund, devotes six pages of its 2009 annual report to defending what it does. The sheer number of errors and inaccuracies, and the generally low professional standard, is surprising. One would expect some of the world's best managers, running one of the world's largest funds, to do better than this.

The biggest error is that the report argues against something nobody is actually advocating. Using straw men is a well-known debating technique, but it has no place in a professional discussion. NBIM, for instance, makes a huge point of how expensive passive management would be, because every time a company issues new shares the fund would have to step in and buy "its share". In the process it would bid up the price, and the fund would lose money. The same problem arises whenever an exchange changes the composition of an index. On top of that, there are costs involved simply in making sure that the fund and the index are identical at all times.


Following such a nitpicking strategy when you do not have to is, of course, rather foolish. I suspect that is precisely why nobody has proposed it. A commercial index fund may have legal reasons to operate this way, but for the oil fund the focus must obviously be on keeping costs low. It is precisely because of its low costs that passive management beats active management, so it is odd that the fund has got it into its head that passive management would have to be run as inefficiently as conceivably possible.

The absence of commercial clients actually makes it possible to achieve lower costs than any comparable fund, by rebalancing the portfolio optimally so that costs are minimized. Fortunately, moderate deviations from the index neither increase risk nor reduce returns to any appreciable degree. What matters most is that the portfolio is broadly diversified. The problem with active management is thus not the deviation from the index as such, but that you pay for it, and that the deviations are often not random.
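
To make the rebalancing argument concrete, here is a small simulation sketch with purely synthetic data (random returns, an equal-weighted stand-in for the index, made-up parameters - none of it describes the fund's actual portfolio). It compares the turnover generated by trading back to the index every period with the turnover from rebalancing only when a weight drifts outside a tolerance band.

    # A hypothetical illustration with synthetic data: trading volume under exact
    # index replication versus rebalancing only when weights drift outside a band.
    # All parameters are made up; the point is merely that tolerating moderate,
    # random deviations cuts turnover (and hence costs) while the portfolio stays
    # broadly diversified and close to the index.
    import numpy as np

    rng = np.random.default_rng(0)
    n_assets, n_periods, band = 100, 250, 0.10     # band: allowed relative drift per asset

    index_w = np.full(n_assets, 1.0 / n_assets)    # stylized equal-weight "index"
    exact_w = index_w.copy()                       # portfolio matched to the index every period
    band_w = index_w.copy()                        # portfolio rebalanced only on band breaches
    turnover_exact = turnover_band = 0.0

    def drift(w, r):
        """Let one period of returns move the weights, then renormalize."""
        w = w * (1.0 + r)
        return w / w.sum()

    for _ in range(n_periods):
        r = rng.normal(0.0, 0.02, n_assets)        # synthetic per-period asset returns

        # Strategy 1: trade back to the index weights every single period.
        exact_w = drift(exact_w, r)
        turnover_exact += np.abs(index_w - exact_w).sum() / 2
        exact_w = index_w.copy()

        # Strategy 2: tolerate moderate drift; trade only when some weight strays
        # more than `band` (in relative terms) from its index weight.
        band_w = drift(band_w, r)
        if (np.abs(band_w - index_w) / index_w > band).any():
            turnover_band += np.abs(index_w - band_w).sum() / 2
            band_w = index_w.copy()

    print(f"turnover, exact replication   : {turnover_exact:.2f}")
    print(f"turnover, {band:.0%} tolerance band: {turnover_band:.2f}")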

NBIM of 2010 also doubts that a passive fund could earn as much from securities lending as the fund does today. Speculators who want to bet on falling prices pay the oil fund to borrow its shares and bonds. This is a significant source of income and reduces the fund's costs by more than 25%. The odd thing is that the fund flatly disagrees with its 2003 self, which stated that "the index portfolios thus have higher lending income than the active portfolios". I think the NBIM of 2003, before it jumped into the trenches, is the one to trust.

Starting from the assumption that a passive fund must be run as inefficiently as possible, the fund then concludes that such management costs at least as much as active management. This despite the existence of commercial funds that operate at no net cost thanks to securities lending. Judging by the costs of commercial funds, a cost of 0.025%, as illustrated, should therefore be easily within reach for the fund (see my blog for the underlying calculations).
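
As a rough back-of-the-envelope sketch of what such cost levels mean in kroner (the fund value and the active-management fee below are assumptions made purely for illustration; only the 0.025% figure comes from the text above):

    # Back-of-the-envelope arithmetic. The fund value and the active-management fee
    # are illustrative assumptions; only the 0.025% passive cost target is taken
    # from the text. The point is merely the order of magnitude of the gap.
    fund_value_nok = 2_600e9     # assumed fund size, roughly NOK 2,600 billion
    passive_cost   = 0.00025     # 0.025% of assets, the level argued to be within reach
    active_cost    = 0.0010      # 0.10% of assets - a purely hypothetical active fee

    passive_nok = fund_value_nok * passive_cost
    active_nok  = fund_value_nok * active_cost

    print(f"passive at 0.025%          : NOK {passive_nok / 1e6:,.0f} million per year")
    print(f"active at an assumed 0.10% : NOK {active_nok / 1e6:,.0f} million per year")
    print(f"difference                 : NOK {(active_nok - passive_nok) / 1e6:,.0f} million per year")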

NBIM's argumentation is altogether one-sided and marked by tunnel vision. Instead of completely misrepresenting the theory in this area, they should have concentrated on the arguments that actually do favor active management. The errors in the fund's account are unfortunately both numerous and embarrassing. For those who are especially interested, I have listed 15 errors and inaccuracies from the fund's 2010 annual report on my blog.

It can in fact be argued that this entrenched resistance to passive management is one of the reasons for the oil fund's high costs. A natural solution is to let external passive managers compete for the management of parts of the fund. That would achieve three things. First, the threat of passive management would give the fund credible bargaining power against active managers, while also pushing down the price of passive management. Second, evaluating the performance of the actively managed part of the fund would no longer be a guessing game. Last but not least, such an arrangement would give the principal the flexibility to easily adjust the split between active and passive management.