Here are the calculations

Here is the Excel workbook behind today's piece in DN. The numbers differ slightly from those reported in DN, since I discovered a small data-entry error in the workbook afterwards. Note that the assumption is a 0.02% cost for the index management itself. When the article, for example, assumes a 0.04% return from securities lending, that corresponds to a negative cost (that is, an income) from index management of 0.02% - 0.04% = -0.02%. This is the figure entered in the cell «Kostnad indeksforvaltning» (index management cost) in the spreadsheet.

There is naturally some uncertainty about what constitutes a reasonable estimate for index management. The best solution would of course have been to carve out part of the fund as an index fund and put its management out to tender. Most of the fund is in any case run as a pure index fund today.

In that case the Oil Fund would have had an investable alternative with actual costs and actual lending income, and we would have had an objective measure of what active management actually contributes. Discontinuing active management entirely will hardly be on the table, but a good measure of the contribution from active management would give the ministry and the politicians enough information to choose how large a share should be managed passively.

Send Email using MFMailComposer (Template)

1.) Import MessageUI
      import MessageUI

2.) Show Email
func showMail() {
    let mailComposeViewController = configuredMailComposeViewController()
    if MFMailComposeViewController.canSendMail() {
        self.presentViewController(mailComposeViewController, animated: true, completion: nil)
    } else {
        self.showSendMailErrorAlert()
    }
}

func configuredMailComposeViewController() -> MFMailComposeViewController {
    let mailComposerVC = MFMailComposeViewController()
    mailComposerVC.mailComposeDelegate = self // Extremely important to set the --mailComposeDelegate-- property, NOT the --delegate-- property

    mailComposerVC.setToRecipients([""])
    mailComposerVC.setSubject("")
    mailComposerVC.setMessageBody("", isHTML: false)

    return mailComposerVC
}

func showSendMailErrorAlert() {
    let sendMailErrorAlert = UIAlertView(title: "Could Not Send Email", message: "Your device could not send e-mail. Please check e-mail configuration and try again.", delegate: self, cancelButtonTitle: "OK")
    sendMailErrorAlert.show()
}

// MARK: MFMailComposeViewControllerDelegate Method
func mailComposeController(controller: MFMailComposeViewController!, didFinishWithResult result: MFMailComposeResult, error: NSError!) {
    controller.dismissViewControllerAnimated(true, completion: nil)

    switch result.value {
    case MFMailComposeResultCancelled.value:
        println("Mail cancelled")
    case MFMailComposeResultSaved.value:
        println("Mail saved")
    case MFMailComposeResultSent.value:
        println("Mail sent")
        showAlertMessage("Your mail has been sent successfully")
    case MFMailComposeResultFailed.value:
        println("Mail sent failure: \(error.localizedDescription)")
        showAlertMessage("Your mail has not been sent successfully")
    default:
        break
    }
}

func showAlertMessage(value: String) {
    let alertController = UIAlertController(title: value, message: nil, preferredStyle: UIAlertControllerStyle.Alert)

    alertController.addAction(UIAlertAction(title: "Ok", style: UIAlertActionStyle.Default, handler: nil))

    self.presentViewController(alertController, animated: true, completion: nil)
}


Sharpe better than IR?!

Readers with a special interest may have wondered why, in today's DN, I claim that the Sharpe ratio is a better measure of the Oil Fund's performance than the IR. Here is a short explanation.

In the report in question, the IR is computed with only the difference between the fund's return and the index's return in the numerator. This measure does not account for the fact that the fund has taken more market risk than the index over the period. The fund has taken 6% more market risk than the index, and therefore has a beta of 1.06.

An alternative, and better, way to compute the IR is to use the "alpha" in the numerator. That gives a figure adjusted for the risk the fund has actually taken. Some textbooks define the IR this way. With alpha in the numerator the measure is also known as the "appraisal ratio" (AR), and I will use that name from here on to keep the two concepts apart.

Let RO be the return of the Oil Fund and RI the return of the index. The IR as the Oil Fund has computed it is then proportional to RO - RI, while the AR has RO - 1.06*RI in the numerator.
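In symbols (a rough sketch in my notation, not the report's):

$$\mathrm{IR} = \frac{R_O - R_I}{\sigma(R_O - R_I)}, \qquad \mathrm{AR} = \frac{\alpha}{\sigma(\varepsilon)}, \qquad \alpha \approx R_O - 1.06\,R_I,$$

where $\sigma(R_O - R_I)$ is the tracking error and $\sigma(\varepsilon)$ is the residual risk from the alpha regression.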

My point was that when the IR is computed with RO - RI, the Sharpe ratio is in fact better. I assume here that the Sharpe measure is the difference between the Sharpe ratio of the fund and that of the index, as NBIM has done in the report.

The Sharpe difference is better because it captures the fact that the fund has taken 6% more market risk than the index. For the same reason, the fund's volatility is 6% higher. The difference between the two Sharpe ratios therefore accounts for this extra risk in the same way alpha does. The denominator will be somewhat different from the AR's, but that matters little.

Whether the adjusted Sharpe ratio is even better can certainly be debated. It is a somewhat obscure performance measure, but it does take into account that the fund has experienced more extreme events than the index. That is not identical to taking more market risk, but the two properties are related.

The best measure, however, is the AR. It is also one of the most widely accepted performance measures. The AR is one of several performance measures based on alpha. An alternative measure is alpha divided by the standard deviation of the alpha estimate. That is perhaps the best measure of all.

That measure is simply the t-value of the alpha: if it is above 2, the excess return is unlikely to be due to chance. The Oil Fund's t-value is around 0.35.
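Written out (again a sketch in my notation, with $\hat{\alpha}$ the estimated alpha and $\mathrm{se}(\hat{\alpha})$ its standard error):

$$t = \frac{\hat{\alpha}}{\mathrm{se}(\hat{\alpha})} \approx 0.35,$$

which is well below 2, so the measured excess return cannot be statistically distinguished from luck.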

How to call a JavaEE REST web service with BASIC Authentication using jquery ajax

I don't really remember when I coded it, nor where I got it, but I'm writing it down here for future use :-)
Below is the code I use to test CORS, http://en.wikipedia.org/wiki/Cross-origin_resource_sharing.
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script type="text/javascript">
var $ = jQuery.noConflict();

$.ajax({
  cache: false,
  crossDomain: true,
  dataType: "json",
  url: "http://czetsuya/myService/meMethod",
  type: "GET",
  success: function( jsonObj, textStatus, xhr ) {
    var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>" + jsonObj.message + "</p>";
    $( "#logMsgDiv" ).html( htmlContent );
  },
  beforeSend: function (xhr) {
    xhr.setRequestHeader ("Authorization", "Basic " + btoa("username:password"));
  },
  error: function( xhr, textStatus, errorThrown ) {
    console.log( "HTTP Status: " + xhr.status );
    console.log( "Error textStatus: " + textStatus );
    console.log( "Error thrown: " + errorThrown );
  }
});
</script>

And here are the Java EE filters.
import java.io.IOException;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
* @author Edward P. Legaspi
**/
@Provider
@PreMatching
public class RESTCorsRequestFilter implements ContainerRequestFilter {

    private final static Logger log = LoggerFactory
            .getLogger(RESTCorsRequestFilter.class.getName());

    @Override
    public void filter(ContainerRequestContext requestCtx) throws IOException {
        // When the HTTP method comes in as OPTIONS, just acknowledge that it is accepted...
        if (requestCtx.getRequest().getMethod().equals("OPTIONS")) {
            log.debug("HTTP Method (OPTIONS) - Detected!");

            // Just send an OK signal back to the browser
            requestCtx.abortWith(Response.status(Response.Status.OK).build());
        }
    }

}

import java.io.IOException;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.ext.Provider;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
* @author Edward P. Legaspi
**/
@Provider
@PreMatching
public class RESTCorsResponseFilter implements ContainerResponseFilter {

    private final static Logger log = LoggerFactory
            .getLogger(RESTCorsResponseFilter.class.getName());

    @Override
    public void filter(ContainerRequestContext requestCtx,
            ContainerResponseContext responseCtx) throws IOException {
        log.debug("Adding CORS to the response.");

        responseCtx.getHeaders().add("Access-Control-Allow-Origin", "*");
        responseCtx.getHeaders().add("Access-Control-Allow-Credentials", "true");
        responseCtx.getHeaders().add("Access-Control-Allow-Methods",
                "GET, POST, DELETE, PUT");
    }

}

If you are using RESTEasy, as I usually am, you can take advantage of the already available CorsFilter class:
package com.weddinghighway.api.rest.filter;

import javax.ws.rs.core.Feature;
import javax.ws.rs.core.FeatureContext;
import javax.ws.rs.ext.Provider;

import org.jboss.resteasy.plugins.interceptors.CorsFilter;

/**
* @author Edward P. Legaspi
* @created 5 Oct 2017
*/

@Provider
public class RESTCorsResponseFilter implements Feature {

    @Override
    public boolean configure(FeatureContext context) {
        CorsFilter corsFilter = new CorsFilter();
        corsFilter.getAllowedOrigins().add("*");
        context.register(corsFilter);
        return true;
    }
}

Note: If one fails, then just try the other :-)

A Simple Model of Multiple Equilibrium Business Cycles

Noah Smith has a nice piece here on Roger Farmer's view of the business cycle.

The basic idea is that, absent intervention, economic slumps (as measured, say, by an elevated rate of unemployment) can persist for a very long time owing to a self-reinforcing feedback effect. The economy can get stuck in what game theorists would label a "bad equilibrium." This interpretation seems to me to be highly consistent with Keynes' (1936) own view on the matter as expressed in this passage:
[I]t is an outstanding characteristic of the economic system in which we live that, whilst it is subject to severe fluctuations in respect of output and employment, it is not violently unstable. Indeed it seems capable of remaining in a chronic condition of subnormal activity for a considerable period without any marked tendency either towards recovery or towards complete collapse.
Now, there is more than one way to explain how an economy can get stuck in a rut. A favorite argument on the right is that recessions are naturally self-correcting if the market is left to its own devices and that prolonged slumps are attributable primarily to the misguided, clumsy and uninformed attempts on the part of government policymakers to "fix" the problem (see here).

But there is another view. The view begins with an observation from game theory: most structures that govern social interaction permit many possible outcomes--outcomes that have nothing to do with the existence of any fundamental uncertainty. If we think of the macroeconomy as a collection of individuals interacting in a large "market game," then the same principle holds--we shouldn't be surprised to discover that many equilibrium outcomes are possible. This idea forms the basis of Roger's pioneering book: The Macroeconomics of Self-Fulfilling Prophecies.

According to Noah, "[Farmer's] approach is mathematically sophisticated, and uses the complex modern techniques that are ubiquitous in academic literature." While this is certainly true, I think there is an easy way to teach the basic idea using standard undergraduate teaching tools. In what follows, I assume that the reader has some knowledge of indifference curves, budget sets, and production possibilities frontiers.

The framework is the basic static "income-leisure" model. A representative household has a fixed unit of time that can be devoted to one of two uses: market work or home work. The household values the consumption of two goods: a market-produced good and a home-produced good. An individual household takes the return to market work as exogenous. If the (expected) return to market work fluctuates randomly over time owing to (say) productivity shocks, tax shocks, news shocks, etc., then the choices that households make can be depicted with the following diagram:


In the diagram above, the x-axis measures time devoted to home work (so that the distance n* is a measure of employment) and the y-axis measures output (real income). The straight lines correspond to the household's budget set (which corresponds to the production possibilities frontier for a linear technology). The curved lines represent indifference curves--how the household values the market and home goods. This is, in essence, the RBC theory of the business cycle: as the returns to economic activities vary over time, people rationally substitute into higher return activities and out of lower return activities. If these shocks are correlated across households, then in the aggregate we observe cyclical fluctuations in output and employment.
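For readers who prefer algebra to pictures, here is one way to write the choice problem behind the diagram (a sketch in my own notation, not necessarily the notation used in the lecture notes): let n denote time devoted to market work, so that 1-n is time devoted to home work, let z be the (expected) return to market work, and assume home production is one-for-one in time. The household solves

$$\max_{n \in [0,1]} \; U\big(z\,n,\; 1-n\big),$$

where $z\,n$ is consumption of the market good and $1-n$ is consumption of the home good. Random fluctuations in $z$ tilt the budget line and move the optimal $n$ up and down, which is exactly the RBC story just described.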

Is it possible to model Roger's world view using the same apparatus? Yes, it is. One way to do this is to imagine a fixed production possibilities frontier that exhibits increasing returns to scale. The basic idea is that the return to labor (more generally, any economic activity) is higher when everyone is working hard and vice-versa. The following diagram formalizes this idea.


The RBC view is that there are two separate production functions shifting up and down (with the y-intercept moving between z_H and z_L). But suppose that the production function is in fact stable and that it takes the shape traced by the solid kinked line connecting z_H to 1.0. The kink occurs at some critical level of employment labeled n_C. The individual's return to labor is expected to be high IF he expects aggregate employment to exceed n_C. Conversely, the individual's expected return to labor is low IF he expects aggregate employment to fall short of n_C.
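The same kinked technology can be written compactly (again my own notation): letting N denote aggregate employment and n_C the critical level at which the kink occurs, the return to market work is

$$z(N) = \begin{cases} z_L & \text{if } N < n_C, \\ z_H & \text{if } N \geq n_C, \end{cases} \qquad z_H > z_L.$$

Each household chooses its own $n$ taking the aggregate $N$, and hence $z(N)$, as given, which is what opens the door to the self-fulfilling outcomes described next.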

Given this setup, whether the economy ends up at point A (the high-level equilibrium) or at point B (the low-level equilibrium) depends entirely on "animal spirits." That is, if the community as a whole expects B, then it is individually rational to choose B which, if done en masse, confirms the initial expectation. Likewise for point A. The allocations and prices associated with points A and B constitute self-fulfilling prophecies.

It is interesting to note that these two very different hypotheses can generate output and employment fluctuations that are observationally equivalent. How would the poor econometrician, uninformed of the true structure of the economy, distinguish between these two competing hypotheses? They both generate procyclical employment, productivity and wages. And if a slump lasts for an unusually long time, well, RBC theory can claim that's just because people are rationally expecting a large future penalty (tax) on their employment activities (or, in the context of a search model, their recruiting investments). And if the economy oscillates randomly between A and B at high frequency, the Keynesian theory can claim that this behavior is part of a "sunspot" equilibrium where fluctuations are driven by "animal spirits."

This observational equivalence problem is unfortunate because the two hypotheses have very different policy implications. The first interpretation more or less supports a laissez-faire approach, while the second interpretation suggests a fruitful role for a well-designed fiscal policy (in this model, even the credible threat of employing idle workers can keep the economy at point A without any actual intervention).

Isn't macroeconomics fun?

*****

PS. I lifted these diagrams from my free online macro lecture notes, available here. (Warning: the notes are in desperate need of correction and updating. I'll get to it one day.)

Facts versus feelings

Advertising man Ingebrigt Steen Jensen has an important point: calculations and reason are poor tools for creating engaging popular movements.

A political fight for state ownership is won most effectively by appealing to emotions. At the moment the opponents of divestment seem to be clearly ahead; according to Nationen, only two in ten Norwegians want Flytoget to be sold.

But the drawback of strong feelings is that they sometimes triumph over reason. When feelings are allowed to decide, no argument in the world can get through. Emotions are therefore a very precise and well-suited tool for convincing voters, but even the strongest feelings cannot change the facts.

Before Plato, gut feeling was probably the decisive argument for the earth being flat. In the centuries that followed, astronomers had a very strong feeling that the earth was the center of the universe. Nineteenth-century physicians apparently also had strong feelings that bloodletting, not hygiene, was the best treatment for childbed fever, even after clear evidence to the contrary was available. The history of science is full of examples where strong feelings turned out to be wrong. Modern society is the result of researchers who defied their feelings and pursued reason.

That does not mean the gut feeling is never right. But when we have access to facts, and the facts do not match the gut feeling, we should trust the facts. It is when feelings and reason are irreconcilable that decisions based on feelings go wrong.

There are no doubt strong feelings on both sides of the debate about state divestment, but at the moment the opponents appear to be winning this contest quite clearly.

The Labour Party and the trade unions, for example, keep claiming that divestment means lost revenue. There is no solid documentation for that claim, but perhaps the gut feeling says so?

In Aftenbladet, Morten Strøksnes wrote an article that took off on social media. It was called «Vi som solgte landet» ("We who sold the country") and was a furious reckoning with everything that had to do with state divestment. The op-ed was entertaining enough, but getting the facts right was apparently not a high priority. For instance, the author had not picked up that last year's giant dividend from Cermaq was due to the sale of a large part of the business, and so launched into a somewhat embarrassing mockery of how low the price was relative to the dividend. Strøksnes also claimed to know that "the sales are as a rule not profitable in the long run", a claim for which there is no support.

Strøksnes did not stop there, however. Two weeks later came the follow-up, «Vi som solgte havet» ("We who sold the sea"), with an attack on professors Frank Asche and Ragnar Tveterås so emotionally charged that one is tempted to coin the word «putekronikk» (pillow op-ed). The background for the unrestrained attack was that one of the professors had chaired the work on an official report (NOU) on the seafood industry. In this article, too, facts do not seem to have been given much weight. Strøksnes claims, for example, to know that "societal considerations have been erased from their calculation", yet on page 32 of the report we find the chapter «Samfunnskontrakten» (the social contract).

Ingebrigt Steen Jensen may well be right that "... it is surely possible to turn Norway as a nation into a calculation [...] but then people won't understand a thing." Jonas Gahr Støre makes a similar point: "The public simply does not understand the arguments for a sale." But the fact that feelings are easier to communicate than complicated arguments does not make the arguments invalid.

One of the arguments for a smaller state ownership share is, for example, risk. The state's main source of income is tax revenue. When the state is also a large owner of Norwegian companies, a downturn will produce losses on both shares and tax revenue. That risk is reduced by spreading the investments abroad.

I think Støre and Steen Jensen are quite right that not many voters understand this argument, but that hardly makes it less true. There are of course arguments on both sides of this debate that require some reflection and knowledge, and that not everyone will therefore understand. Does that make these facts irrelevant?

Bi-level TIFFs and the tale of the unexpectedly early patch

Today's release of MS15-016 (CVE-2015-0061) fixes another of the series of browser memory disclosure bugs found with afl-fuzz - this time, related to the handling of bi-level (1-bpp) TIFFs in Internet Explorer (yup, MSIE displays TIFFs!). You can check out a simple proof-of-concept here, or simply enjoy this screenshot of eight subsequent renderings of the same TIFF file:





The vulnerability is conceptually similar to other previously-identified problems with GIF and JPEG handling in popular browsers (example 1, example 2), with the SOS handling bug in libjpeg, or the DHT bug in libjpeg-turbo (details here) - so I will try not to repeat the same points in this post.


Instead, I wanted to take note of what really sets this bug apart: Microsoft has addressed it in precisely 60 days, counting from my initial e-mail to the availability of a patch! This struck me as a big deal: although vulnerability research is not my full-time job, I do have a decent sample size - and I don't think I have seen this happen for any of the few dozen MSIE bugs that I reported to MSRC over the past few years. The average patch time always seemed to be closer to 6+ months - coupled with the somewhat odd practice of withholding attribution in security bulletins and engaging in seemingly punitive PR outreach if the reporter ever went public before that.


I am very excited and hopeful that rapid patching is the new norm - and huge thanks to MSRC folks if so :-)

Symbolic execution in vuln research

There is no serious disagreement that symbolic execution has a remarkable potential for programmatically detecting broad classes of security vulnerabilities in modern software. Fuzzing, in comparison, is an extremely crude tool: it's the banging-two-rocks-together way of doing business, as contrasted with brain surgery.



Because of this, it comes as no surprise that for the past decade or so, the topic of symbolic execution and related techniques has been the mainstay of almost every single self-respecting security conference around the globe. The tone of such presentations is often lofty: the slides and research papers are frequently accompanied by claims of extraordinary results and the proclamations of the imminent demise of less sophisticated tools.



Yet, despite the crippling and obvious limitations of fuzzing and the virtues of symbolic execution, there is one jarring discord: I'm fairly certain that around 70% of all remote code execution vulnerabilities disclosed in the past few years trace back to fairly "dumb" fuzzing tools, with the pattern showing little change over time. The remaining 30% is attributable almost exclusively to manual work - be it systematic code reviews, or just aimlessly poking the application in hopes of seeing it come apart. When you dig through public bug trackers, vendor advisories, and CVE assignments, the mark left by symbolic execution can be seen only with a magnifying glass.



This is an odd discrepancy, and one that is sometimes blamed on the practitioners being backward, stubborn, and ignorant. This may be true, but only to a very limited extent; ultimately, most geeks are quick to embrace the tools that serve them well. I think that the disconnect has its roots elsewhere:



  1. The code behind many of the most-cited, seminal publications on security-themed symbolic execution remains non-public; this is particularly true for Mayhem and SAGE. Implementation secrecy is fairly atypical in the security community, is usually viewed with distrust, and makes it difficult to independently evaluate, replicate, or build on top of the published results.



  2. The research often fails to fully acknowledge the limitations of the underlying methods - while seemingly being designed to work around these flaws. For example, the famed Mayhem experiment helped identify thousands of bugs, but most of them seemed to be remarkably trivial and affected only very obscure, seldom-used software packages with no significance to security. It is likely that the framework struggled with more practical issues in higher-value targets - a prospect that, especially if not addressed head-on, can lead to cynical responses and discourage further research.



  3. Any published comparisons to more established vulnerability-hunting techniques are almost always retrospective; for example, after the discovery of Heartbleed, several teams have claimed that their tools would have found the bug. But analyses that look at ways to reach an already-known fault condition are very susceptible to cognitive bias. Perhaps more importantly, it is always tempting to ask why the tools are not tasked with producing a steady stream of similarly high-impact, headline-grabbing bugs.



The uses of symbolic execution, concolic execution, static analysis, and other emerging technologies to spot substantial vulnerabilities in complex, unstructured, and non-annotated code are still in their infancy. The techniques suffer from many performance trade-offs and failure modes, and while there is no doubt that they will shape the future of infosec, thoughtful introspection will probably get us there sooner than bold claims with little or no follow-through. We need to move toward open-source frameworks, verifiable results, and solutions that work effortlessly and reliably for everyone, against almost any target. That's the domain where the traditional tools truly shine, and that's why they scale so well.



Ultimately, the key to winning the hearts and minds of practitioners is very simple: you need to show them how the proposed approach finds new, interesting bugs in the software they care about.

Fedcoin: On the Desirability of a Government Cryptocurrency


It was J.P. Koning's blog post on Fedcoin that first got me thinking seriously of the potential societal benefits of government-sponsored cryptocurrency. When I was invited to speak at the International Workshop on P2P Financial Systems 2015, I thought that a talk on Fedcoin would be an interesting and provocative way to start the conference. You can view my presentation here, but what I'd like to do in this post is clarify some of the arguments I made there.

As I described in this earlier post, I view a payment system as a protocol (a set of rules) for debiting and crediting accounts, I view money as a widely agreed-upon record-keeping device, and I view monetary policy as a protocol designed to manage the supply of money over time.

The cryptocurrency Bitcoin is a payment system with monetary objects called bitcoin and a monetary policy prescribed as a deterministic path for the supply of bitcoin converging to a finite upper limit. I view Bitcoin as a potentially promising payment system, saddled with a less-than-ideal money and monetary policy. As the protocol currently stands, bitcoins are potentially a better long-run store of value than non-interest-bearing USD. But if a long-run store of value is what you are looking for, we already have a set of income-generating assets that do a pretty good job at that (stocks, bonds, real estate, etc.). [For a comparison of the rates of return on stocks vs. gold, look here.]

Let's set aside Bitcoin's monetary policy for now and concentrate on the bitcoin monetary object. What is the main problem with bitcoin as a monetary instrument in an economy like the U.S.? It is the same problem we face using any foreign currency in domestic transactions--the exchange rate is volatile and unpredictable. (And our experience with floating exchange rates tells us that this volatility will never go away.) Bill Gates hits the nail on the head in his Reddit AMA:
Bitcoin is an exciting new technology. For our Foundation work we are doing digital currency to help the poor get banking services. We don't use bitcoin specifically for two reasons. One is that the poor shouldn't have a currency whose value goes up and down a lot compared to their local currency. 
For better or worse, like it or not, the USD is the U.S. economy's unit of account--the numeraire--the common benchmark relative to which the value of various goods and services are measured and contractual terms stipulated. With a floating exchange rate, managing cash flow becomes problematic when (say) revenue is in BTC and obligations are in USD. Intermediaries like Bitreserve can mitigate some of this risk but, of course, at an added expense. Hedging foreign exchange risk is costly--a cost that is absent when the exchange rate is fixed.

And so, here is where the idea of Fedcoin comes in. Imagine that the Fed, as the core developer, makes available an open-source Bitcoin-like protocol (suitably modified) called Fedcoin. The key point is this: the Fed is in the unique position to credibly fix the exchange rate between Fedcoin and the USD (the exchange rate could be anything, but let's assume par).

What justifies my claim that the Fed has a comparative advantage over some private enterprise that issues (say) BTC backed by USD at a fixed exchange rate? The problem with such an enterprise is precisely the problem faced by countries that try to peg their currency unilaterally to some other currency. Unilateral fixed exchange rate systems are inherently unstable because the agency fixing the BTC/USD exchange rate cannot credibly commit not to run out of USD reserves to meet redemption waves of all possible sizes. In fact, the structure invites a speculative attack.

In contrast, the issue of running out of USD or Fedcoin to maintain a fixed exchange rate poses absolutely no problem for the Fed because it can issue as many of these two objects as is needed to defend the peg (this would obviously call for a modification in the Bitcoin protocol in terms of what parameters govern the issuance of Fedcoin). Ask yourself this: what determines the fixed exchange rate system between the different denominations of U.S. currency?


Do you ever worry that your Lincoln might trade at a discount relative to (say) Washingtons? If someone ever offered you only 4 Washingtons for your 1 Lincoln, you have the option of approaching the Fed and asking for a 5:1 exchange rate--the exchange rate you are used to. Understanding this, people will generally not try to violate the prevailing fixed exchange rate system. The system is credible because the Fed issues each of these "currencies." Now, just think of Fedcoin as another denomination (with an exchange rate fixed at par).

Now, I'm not sure if Fedcoin should be a variant of Bitcoin or some other protocol (like Ripple). In particular, I have some serious reservations about the efficiency of proof-of-work mechanisms. But let's set these concerns aside for the moment and ask how this program might be implemented in general terms.

First, the Fedcoin protocol could be made open source, primarily for the purpose of transparency. The Fed should only honor the fixed exchange rate for the version of the software it prefers. People can download free wallet applications, just as they do now for Bitcoin. Banks or ATMs can serve as exchanges where people can load up their Fedcoin wallets in exchange for USD cash or bank deposits. There is a question of how much to reward miners and whether the Fed itself should contribute hashing power for the purpose of mining. These are details. The point is that it could be done.

Of course, just because Fedcoin is feasible does not mean it is desirable. First, from the perspective of the Fed, because Fedcoin can be viewed as just another denomination of currency, its existence in no way inhibits the conduct of monetary policy (which is concerned with managing the total supply of money and not its composition). In fact, Fedcoin gives the Fed an added tool: the ability to conveniently pay interest on currency. In addition, Koning argues that Fedcoin is likely to displace paper money and, to the extent it does, will lower the cost of maintaining a paper money supply as part of the payment system.

What about consumers and businesses? They will have all the benefits of Bitcoin--low cost, P2P transactions to anyone in the world with the appropriate wallet software and access to the internet. Moreover, domestic users will be spared exchange rate volatility. Because Fedcoin wallets, like cash wallets, are permissionless and free, even people without proper ID can utilize the product without subjecting themselves to an onerous application process. Finally, because Fedcoin, like cash, is a "push" (rather than "pull") payment system, it affords greater security against fraud (as when someone hacks into your account and pulls money out without your knowledge).

In short, Fedcoin is essentially just like digital cash. Except in one important respect. Physical cash is still a superior technology for those who demand anonymity (see A Theory of Transactions Privacy). Cash does not leave a paper trail, but Fedcoin (and Bitcoin) do leave digital trails. In fact, this is an excellent reason why Fedcoin should be spared any KYC restrictions. First, the government seems able to live with not imposing KYC on physical cash transactions--why should it insist on KYC for digital cash transactions? And second, digital cash leaves a digital trail, making it easier for law enforcement to track illicit trades. Understanding this, it is unlikely that Fedcoin will be the preferred vehicle to finance illegal activities.

Finally, the proposal for Fedcoin should in no way be construed as a backdoor attempt to legislate competing cryptocurrencies out of existence. The purpose of Fedcoin is to compete with other cryptocurrencies--to provide a property that no other cryptocurrency can offer (guaranteed exchange rate stability with the USD). Adopting Fedcoin means accepting the monetary policy that supports it. To the extent that people are uncomfortable with Fed monetary policy, they may want to trust their money (if not their wealth) with alternative protocols. People should be (and are) free to do so.

Postscript, February 06, 2015.

A number of people have asked me why we would need a distributed/decentralised consensus architecture to support a Fedcoin. In the talk I gave in Frankfurt, I actually made two proposals. The first proposal was called "Fedwire for All." This is basically digital cash maintained on a closed centralized ledger, like Fedwire. It would be extremely cheap and efficient, far more efficient than Bitcoin. But of course, it does not quite replicate the properties of physical cash in two respects. First, as with TreasuryDirect, the Fedwire accounts would not be permissionless. People would have to present IDs, go through an application procedure, etc. Second, the Fed is unlikely to look the other way (as it does with cash) in terms of KYC restrictions. So, to the extent that these two latter properties are desirable, I thought (at the time I wrote this piece) that we needed to move beyond Fedwire-for-All to Fedcoin. There may, of course, be other ways to implement these properties. I'm all ears!

afl-fuzz: black-box binary fuzzing, perf improvements, and more

I had quite a few posts about afl-fuzz recently, mostly focusing on individual, newly-shipping features (say, the fork server, the crash explorer, or the grammar reconstruction logic). But this probably gets boring for people not interested in the tool, and doesn't necessarily add up to a coherent picture for those who are.



To trim down on AFL-themed posts, I decided to write down a technical summary of all the internals and maintain it as a part of the AFL home page. The document talks about quite a few different things, including:


  • The newly-added support for guided fuzzing of black-box, closed-source binaries (yes, it finally happened!),


  • Info about effector maps - a new feature that offers significant performance improvements for many types of fuzzing jobs,


  • Some hard data comparing the efficiency of evolutionary fuzzing and AFL-style instrumentation versus more traditional tools,


  • Discussion of many other details that have not been documented in depth until now - queue culling, file minimization, etc.


I'll try to show a bit more restraint with AFL-related news on this blog from now on, so if you want to stay in the loop on key developments, consider signing up for the afl-users@ mailing list.

A foolish taxi queue at Gardermoen

The taxi system at Gardermoen clearly isn't working. NRK reports that the cars refuse to take local trips because they then have to go to the back of the queue and "wait two to three hours", according to Norgestaxi.

First of all, Kenneth Simonsen, head of Taxi Depot AS, which organizes the queue, has a good point:

– It is voluntary to drive into Gardermoen. The local trips are spread across all the drivers, and it is not the case that anyone can refuse them. I have no sympathy for this, says Simonsen.

But what is perhaps even stranger is that it can be profitable for a taxi to stand in a queue for three hours to drive a single trip. That means half a working day goes into driving Gardermoen-Oslo. How on earth can that be profitable?

The reason probably lies in the taxi system itself. The taxis line up in a single queue, and most passengers presumably take the first available cab. With such a system it is not surprising that competition fails. The most expensive taxi companies then obviously have an incentive to keep the most cars in the queue. The result, unsurprisingly, is an unnecessary, wasteful and endless line of taxis. The cheapest companies disappear.

The best solution might be for each taxi company to have its own stands in the same area, with the price to Oslo clearly posted. Whether that is practically feasible is another question.

So the solution seems to lie in the problem itself. The short trips are very costly for those who have waited a long time in the queue. This can be exploited to give the drivers the right incentives. In microeconomics such mechanisms are known under the abbreviation IC ("Incentive Compatibility Constraint").

By forcing all taxis to accept short trips, it becomes less profitable to waste society's resources on standing in a long queue. Less queueing at Gardermoen would also give the cheapest companies a better incentive to participate.

Taxi Depot should therefore ban taxi companies for an extended period if their drivers, after a warning, keep breaking the rules. Since short trips are presumably few and far between, the cost of turning one down must be high.

I assume that the Gardermoen-Oslo route is profitable enough in any case for IR (individual rationality, the participation constraint) to be satisfied ...

Money and Payments, or How we Move Marbles.

I'm writing this to serve as background for my next post on Fedcoin. If you haven't thought much about the money and payments system, I hope you'll find this a useful primer explaining some basic principles.

I view the payments system as a protocol (a set of rules) for debiting and crediting accounts. I view money as an object that is used to debit/credit accounts in a payments system. I view monetary policy as a protocol to manage the supply of money over time. Collectively, these objects form a money and payments system.

One way to visualize the money and payments system is as a compartmentalized box of marbles, displayed to the right. The marbles represent agreed-upon monetary tokens--record-keeping devices (see also the discussion here). The compartments represent individual accounts. Paying for a good or service corresponds to moving marbles from one account to another.

What makes a good marble? What is the best way to manage the supply of marbles over time? And what is the best way to move marbles around from account to account? There are books devoted to addressing these questions.

A good marble should have easily recognizable and understandable properties. This is one reason why complicated securities make poor money. Fiat money and senior claims to fiat money make good money along this dimension because everyone knows that fiat money is a claim to nothing (so there is no asymmetric information, a property emphasized by Gorton and Pennacchi, 1990). Gold, even if it is coined, is not especially good along this dimension because it is heterogeneous in quality (and it's not costless to have it assayed, see here). Plus, precious-metal coins can be shaved (although, there is no motivation to shave token coins).

A good marble should also be durable, divisible, and difficult to counterfeit. Paper money issued in different denominations can have these properties. And while virtual money can easily be made durable and divisible, it is extremely easy to counterfeit. For this reason, trusted intermediaries are needed to create and manage a virtual money supply (at least, up to the invention of Bitcoin). Gold (and other precious metals) have these desired properties. But to the extent that these metals have competing uses, it is inefficient to have them serve as accounting marbles. Unless you don't trust the intermediaries that manage the fiat-marble supply, that is. (Unfortunately, there have been enough failed experiments along this dimension to warrant some skepticism.)

How should the supply of marbles be managed over time? Advocates of the gold standard want the supply to be determined by the market sector (through mining). This protocol means that the supply of money is essentially fixed over short periods of time, and grows relatively slowly over long periods of time (although big new discoveries have often led to inflationary episodes). If the demand for money increases suddenly and dramatically (as it is prone to do during a financial crisis), then the consequence of a fixed short-run supply of money is a sudden and unanticipated deflation. Because nominal debt obligations are not typically indexed to the price-level, the effect of this protocol is to make a recession larger than it otherwise might be. The idea behind a central bank as lender-of-last-resort is to have an agency that can temporarily increase the supply of money (in exchange for "excessively" discounted private paper) to meet the elevated demand for money so as to stabilize the price-level. In effect, such a policy, if executed correctly, can replace the missing state-contingency in nominal debt contracts. Whether a central bank can be trusted to manage such a policy in a responsible and competent manner is, of course, another question. Let's just say that there are costs and benefits to either approach and that reasonable people can reasonably disagree.

Apart from cyclical adjustments to the money supply, there is the question of whether money-printing should ever be used to finance operating expenditure (seigniorage). Generally, the answer is "yes"--at least, once again, if it is done responsibly. It is of some interest to note that the Bitcoin protocol uses seigniorage to finance payment processors (miners). The idea here, I suppose, is that the protocol--which is a computer program and not a politician--can be trusted to manage the inflation-tax optimally. That is, at least for a limited amount of time--the long-run supply of bitcoin is presently capped at 21M units.

Alright, so how about the payments system? What are the different ways of rearranging marbles in a ledger?

The most basic method of payment is physical cash exchanged in a P2P meeting. When I buy my Starbucks latte, I debit my wallet of cash and Starbucks credits its cash register by the same amount. The ledger that describes the distribution of physical cash holdings (and the histories of how each unit of cash has moved across accounts over time) is hidden from all of us. This is why cash transactions are associated with a degree of anonymity.

Another popular way to make a payment is via a debit card. In this case, Starbucks and I have accounts in a ledger that is managed by the banking system. These accounts are stocked with virtual book-entry objects. When I pay for my latte with a debit card, I send a message to the banking system asking it to debit my account and credit the merchant's account. In this protocol, the banking system verifies that I have sufficient account balances and executes the funds transfer. The protocol obviously relies on the use of trusted intermediaries to manage the ledger and keep it secure. Also, because bank accounts are associated with individual identities and because centralized ledger transactions can be recorded, there is no anonymity associated with the use of this payments protocol.
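As a toy illustration of the "protocol for debiting and crediting accounts" idea, here is a deliberately minimal sketch of a centralized ledger (my own made-up example, not any bank's actual system; the account names and balances are hypothetical):

# Toy centralized ledger: the "marbles" are integer balances (in cents),
# the compartments are accounts, and a payment is an atomic debit/credit.
class Ledger:
    def __init__(self):
        self.accounts = {}  # account name -> balance in cents

    def open_account(self, name, balance=0):
        self.accounts[name] = balance

    def transfer(self, payer, payee, amount):
        # The trusted intermediary verifies funds before moving any marbles.
        if self.accounts[payer] < amount:
            raise ValueError("insufficient funds")
        self.accounts[payer] -= amount   # debit the payer
        self.accounts[payee] += amount   # credit the payee


bank = Ledger()
bank.open_account("me", 2000)          # $20.00
bank.open_account("starbucks", 0)
bank.transfer("me", "starbucks", 450)  # pay $4.50 for the latte
print(bank.accounts)                   # {'me': 1550, 'starbucks': 450}

Everything that follows--cash, debit cards, Bitcoin--can be read as different answers to the question of who maintains a ledger like this and how.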

The Bitcoin protocol is an amazing invention--I'm on record as describing it as a stroke of genius. The amazing part of it is not its monetary policy (which I think is flawed). Its main contribution is to permit P2P payments in digital cash without the use of a centralized ledger managed by a trusted intermediary. (In fact, the economic implications of this invention extend far beyond payments; see Ethereum, for example).

What makes digital cash without an intermediary so difficult? Think of digital cash as a computer file that reads "one dollar, SN 24030283." Suppose I want to email this digital file to you in payment for services rendered. When I take a dollar bill out of my pocket and send it to the merchant, there is no question of that dollar bill leaving my pocket. For the same thing to be true of my digital dollar, I would be required to destroy my computer file "one dollar, SN 24030283" after sending it to the merchant. The problem is that people are likely to make endless copies of their digital money files. In other words, digital money can be costlessly counterfeited. And this is why we make use of intermediaries to handle payments in a virtual ledger. (We don't expect the intermediaries to counterfeit our balances...our main complaint with them is that they charge too much for their accounting services!)

There is no need to get too far into the details of how the Bitcoin protocol manages this feat. If you are interested, you can consult this book by the inspirational Andreas Antonopoulos. The main idea behind the protocol is a distributed public ledger (called the block chain) that is updated and made secure through the collective efforts of decentralized payment processors (called miners). I find it interesting how the Bitcoin consensus mechanism resembles, in spirit at least, the communal record-keeping practices of ancient gift-giving societies. In a gift-giving society, who contributes what to the collective good is recorded on a distributed network of brains. This is easy to do in small societies because there's not much to keep track of and verbal communication is sufficient to keep all nodes updated.
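For readers who want a mechanical feel for what "a public ledger made secure by linking blocks together" means, here is a deliberately tiny sketch (my own illustration, not the actual Bitcoin protocol: it omits transaction signatures, the peer-to-peer network, and real proof-of-work):

import hashlib
import json

def block_hash(block):
    # Hash of the block's contents; each block commits to its predecessor.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

chain = []
add_block(chain, [{"from": "me", "to": "starbucks", "amount": 4.50}])
add_block(chain, [{"from": "alice", "to": "bob", "amount": 1.00}])

# Tampering with an old block breaks every later link, which is what makes
# rewriting history detectable by anyone who holds a copy of the chain.
chain[0]["transactions"][0]["amount"] = 999
assert block_hash(chain[0]) != chain[1]["prev_hash"]

The hard part--which this sketch ignores entirely--is getting thousands of strangers to agree on which chain is the valid one, and that is where the miners and proof-of-work come in.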

I want to end with a couple of notes. First, isn't it interesting to note the coexistence of so many different monies and payments systems? Even today, a great deal of economic activity among small social networks (family, close friends, etc.) continues to be supported by gift-giving principles (including the threat of ostracism for bad behavior). This coexistence is likely to remain going forward and I think that open competition is probably the best way for society to determine the optimal mix.

It is also interesting to note that almost every money and payments system requires some degree of trust. This is also true of Bitcoin. In particular, the vast majority of Bitcoin users cannot read C++ and even for those that can, most are not about to go and check all 30MB (or so) of the Bitcoin source code. Nor will most people know what to do with a 30GB (and growing) block chain. Core developers? Mining coalitions? Who are these agents and why should they be trusted? The protocol cannot be changed...really? It won't be changed...really? It's just software, my friend. There's no guarantee that a consensus will not form in the future to alter the program in a materially significant way that some users will not desire. The same holds true for any consensus protocol, including the Federal Reserve Act of 1913 and the U.S. Constitution.

In my view, people will come to trust Bitcoin (or not) depending on its historical performance as a money and payments system. This is perfectly natural. It is not necessary, for example, that a person learns precisely how an internal combustion engine works before operating a motor vehicle. Most people drive cars because our experiences and observations tell  us we can trust them to work. And so it is with money and payments systems.

Update April 11, 2015.

For an excellent explanation of the modern payment system, see here: A Simple Explanation of How Money Moves Around the Banking System, by Richard Gendal Brown.