iOS Donut Progress Bar

The iOS Donut Progress Bar is an iOS library that you can integrate into your project to present data graphically. It also gives you a live preview of the design right in Interface Builder.

Inside the Interface Builder:


iOS Simulator:





Features

  • Customizable width and color for the line and border of the outer/inner circle.
  • Customizable text label.
  • Customizable color, opacity, radius, and offset for the shadow of the inner and outer circle.
  • Supports iOS 8 adaptive layout.
  • All of these can be changed in Interface Builder or at runtime.

Example Usage at Runtime

circleView.outerRadius = 20          // radius of the outer circle
circleView.oColor = UIColor(red: 244/255, green: 156/255, blue: 45/255, alpha: 1)   // outer circle color
circleView.percentage = 75           // value to display, in percent
circleView.fontSize = 25             // font size of the text label
circleView.noOfDecimals = 0          // number of decimals shown in the label
circleView.animateCircle(1.0)        // animate the circle being drawn

To get the sample code, please visit this site.

Performance measures that fit the purpose


Does the Ministry of Finance believe that money invested abroad should be measured differently from money invested here at home?

In the annual white paper to the Storting on the state's two large funds, the return is for some reason risk-adjusted for Folketrygdfondet but not for Oljefondet.

It would be interesting to know why. In the same white paper, and in the same chapter, the results of the two funds are presented back to back, but with two different measures of excess return. Folketrygdfondet's return is risk-adjusted; Oljefondet's is not.

For Oljefondet's international portfolio, the Ministry has chosen to use the "Information Ratio" (IR) as its measure, while for the Norwegian investments the best measure is apparently the "Sharpe Ratio".

The problem with IR is that it does not account for market risk. Oljefondet is in principle supposed to hold 60 percent equities, but suppose the fund quietly held five percentage points more equities than it should. When stock markets rise by one percent, the fund then gains 0.65 percent rather than the intended 0.6 percent.
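
For reference, the two measures are defined as follows (standard textbook definitions, in my notation rather than the white paper's):

$$ \mathrm{IR} = \frac{\bar R_p - \bar R_b}{\sigma(R_p - R_b)}, \qquad \mathrm{Sharpe} = \frac{\bar R_p - R_f}{\sigma(R_p)} $$

where $R_p$ is the fund's return, $R_b$ the benchmark return and $R_f$ the risk-free rate. A hidden equity overweight lifts $\bar R_p - \bar R_b$ whenever markets rise, and nothing in the IR formula distinguishes that from genuine skill.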

IR does not reveal this, so in good times it will look as if the fund is beating the market. But such excess return is obviously the result of increased risk and has nothing to do with skill.

It is entirely possible to increase risk in this way without exceeding the risk limits. In fact, Oljefondet has taken roughly six percent too much risk without breaching its limits, so the example above is fairly close to reality.

There is, of course, no reason to overdramatize this. It is perfectly possible to compute risk-adjusted returns for Oljefondet on one's own, which is what I have done. The table shows that the fund's performance may just as well be due to chance.

It is thus possible to conclude that Oljefondet has beaten its index, but only if we put on the IR glasses so that the market risk stays out of sight.

Folketrygdfondet, however, is for some reason to be assessed with open eyes. Here the Sharpe Ratio is used, which is the ratio of return to total risk. If the fund tries to pump up its return with market risk, both the numerator and the denominator increase by roughly the same amount, leaving the result unchanged. Folketrygdfondet's measure thus entails an adjustment for market risk that is missing from Oljefondet's.
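
In symbols, and purely as a stylized illustration (my notation, not the Ministry's): if a fund simply scales its market exposure up by a factor $k$, both the excess return and the risk scale with $k$, so that

$$ \mathrm{Sharpe} \approx \frac{k(\bar R_m - R_f)}{k\,\sigma_m} = \frac{\bar R_m - R_f}{\sigma_m}, $$

and the ratio is unchanged. That is exactly the correction for market risk that IR lacks.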

It is also interesting to note from the table that Folketrygdfondet comes off far better with measures that account for market risk, while Oljefondet looks best with IR. Whether this has had any bearing on which figures are presented in the report is, of course, impossible to know.

The Ministry has previously argued that there are many different measures of how well a manager has done, and that IR is a measure as good as any other. The table shows that this is wrong in two ways. First, if we stick to the most widely used and recognized performance measures, there are not many to choose from. In fact, several of the measures are statistically identical, so there are really only three different measures. Second, IR is quite obviously the least suitable, since it ignores market risk.

It should be mentioned that Oljefondet's figures for its equity management are somewhat better than for the fund as a whole. The excess return on equities is high enough that it is harder to explain as the result of pure chance, even after correcting for market risk. However, just as a manager can inflate results in good times by taking on market risk, it is also possible to bet on other well-known risk premiums. Much suggests that the extra return on equities is a result of this.

Even so, it is not obvious that the fund should harvest such risk premiums. Passive management is an excellent insurance arrangement against extreme events. All investors are affected by crises, but the average investor has proved to come out of all of them quite well.



Probability that the excess return is due to pure chance:

Performance measure                         Folketrygdfondet    Oljefondet
Alpha, Appraisal Ratio and Treynor          0.6%                44.0%
Sharpe, M2                                  0.7%                40.7%
IR (does not account for market risk)       5.8%                9.9%

P-values. Geometric averages before costs. Source: NBIM / own calculations.
See the calculations for Folketrygdfondet and for Oljefondet.

 • Alpha, the Appraisal Ratio and Treynor involve an adjustment for market risk and are statistically identical measures.
 • Sharpe and M2 involve an adjustment for total risk and are also statistically identical. Market risk accounts for a large share of total risk, so the probabilities are quite similar for the first two measures.
 • By standard scientific convention, a result is not regarded as reliable (significant) if there is more than a 5% probability that it is due to chance.

Finding bugs in SQLite, the easy way

SQLite is probably the most popular embedded database in use today; it is also known for being exceptionally well-tested and robust. In contrast to traditional SQL solutions, it does not rely on the usual network-based client-server architecture and does not employ a complex ACL model; this simplicity makes it comparatively safe.


At the same time, because of its versatility, SQLite sometimes finds use as the mechanism behind SQL-style query APIs that are exposed between privileged execution contexts and less-trusted code. For an example, look no further than the WebDB / WebSQL mechanism available in some browsers; in this setting, any vulnerabilities in the SQLite parser can open up the platform to attacks.



With this in mind, I decided to take SQLite for a spin with - you guessed it - afl-fuzz. As discussed some time ago, languages such as SQL tend to be difficult to stress-test in a fully automated manner: without an intricate model of the underlying grammar, random mutations are unlikely to generate anything but trivially broken statements. That said, afl-fuzz can usually leverage the injected instrumentation to sort out the grammar on its own. All I needed to get it started was a basic dictionary; for that, I took about 5 minutes to extract a list of reserved keywords from the SQLite docs (now included with the fuzzer as testcases/_extras/sql/). Next, I seeded the fuzzer with a single test case:




create table t1(one smallint);
insert into t1 values(1);
select * from t1;
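
For readers who want to try something similar: the post does not show the exact target setup (pointing afl-fuzz at the stock sqlite3 command-line shell also works), but a minimal stand-alone harness along the following lines would do the job. This is a sketch of my own, not the configuration behind the results described here; it is meant to be built with afl's compiler wrapper (afl-gcc or afl-clang) and linked against libsqlite3, with afl-fuzz substituting the input file path for @@:

/*
 * Sketch of a stand-alone SQLite fuzzing harness (illustration only).
 * Build with afl's compiler wrapper and link against libsqlite3.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

int main(int argc, char **argv) {
  if (argc < 2) {
    fprintf(stderr, "usage: %s <sql-file>\n", argv[0]);
    return 1;
  }

  /* Read the fuzzer-generated SQL into a NUL-terminated buffer. */
  FILE *f = fopen(argv[1], "rb");
  if (!f) return 1;
  fseek(f, 0, SEEK_END);
  long len = ftell(f);
  fseek(f, 0, SEEK_SET);
  if (len < 0) return 1;
  char *sql = malloc(len + 1);
  if (!sql || fread(sql, 1, len, f) != (size_t)len) return 1;
  sql[len] = '\0';
  fclose(f);

  /* Run the statements against a throwaway in-memory database. SQL
   * errors are expected and ignored; only crashes are interesting. */
  sqlite3 *db;
  if (sqlite3_open(":memory:", &db) == SQLITE_OK) {
    sqlite3_exec(db, sql, NULL, NULL, NULL);
    sqlite3_close(db);
  }

  free(sql);
  return 0;
}

Using an in-memory database keeps every execution hermetic and cheap, which matters when the fuzzer runs the target millions of times.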



This approach netted a decent number of interesting finds, some of which were mentioned in an earlier blog post that first introduced the dictionary feature. But when looking at the upstream fixes for the initial batch, I had a sudden moment of clarity and recalled that the developers of SQLite maintained a remarkably well-structured and comprehensive suite of hand-written test cases in their repository.



I figured that this body of working SQL statements may be a much better foundation for the fuzzer to build on, compared to my naive query - so I grepped the test cases out, split them into files, culled the resulting corpus with afl-cmin, and trimmed the inputs with afl-tmin. After a short while, I had around 550 files, averaging around 220 bytes each. I used them as a starting point for another run of afl-fuzz.



This configuration very quickly yielded a fair number of additional, unique fault conditions, ranging from NULL pointer dereferences, to memory fenceposts visible only under ASAN or Valgrind, to pretty straightforward uses of uninitialized pointers (link), bogus calls to free() (link), heap buffer overflows (link), and even stack-based ones (link). The resulting collection of 22 crashing test cases is included with the fuzzer in docs/vuln_samples/sqlite_*. They include some fairly ornate minimized inputs, say:



CREATE VIRTUAL TABLE t0 USING fts4(x,order=DESC);
INSERT INTO t0(docid,x)VALUES(-1E0,'0(o');
INSERT INTO t0 VALUES('');
INSERT INTO t0 VALUES('');
INSeRT INTO t0 VALUES('o');
SELECT docid FROM t0 WHERE t0 MATCH'"0*o"';



All in all, it's a pretty good return on investment for about 30 minutes of actual work - especially for a piece of software functionally tested and previously fuzzed to such a significant extent.



PS. I was truly impressed with Richard Hipp fixing each and every one of these cases within a couple of hours of my sending in a report. The fixes have been incorporated in version 3.8.9 of SQLite and have been public for a while, but there was no upstream advisory; depending on your use case, you may want to update soon.

In defense of modern macro theory

The first economist
The 2008 financial crisis was a traumatic event. Like all social trauma, it evoked a variety of emotional responses, including the natural (if unbecoming) human desire to find someone or something to blame. Some of the blame has been directed at segments of the economics profession. It is the nature of some of these criticisms that I'd like to talk about today.

One of the first questions macroeconomists get asked is: How could you possibly not have predicted the crisis? We all remember when the Queen of England asked this (supposedly embarrassing) question. Put on the spot, I might have replied that the same question could have been asked of her predecessor(*) King Charles I, whose death in 1649 also came about under rather unexpected circumstances. Or, I might have replied that many economists did in fact predict this crisis...along with the many other crises that failed to materialize (recall the old joke about the economist who successfully predicted 10 out of the past 2 recessions).

But seriously, the delivery of precise time-dated forecasts of events is a mug's game. If this is your goal, then you probably can't beat theory-free statistical forecasting techniques. But this is not what economics is about. The goal, instead, is to develop theories that can be used to organize our thinking about various aspects of the way an economy functions. Most of these theories are "partial" in nature, designed to address a specific set of phenomena (there is no "grand unifying theory", so many theories coexist). These theories can also be used to make conditional forecasts: IF a set of circumstances hold, THEN a number of events are likely to follow. The models based on these theories can be used as laboratories to test and measure the effect, and desirability, of alternative hypothetical policy interventions (something not possible with purely statistical forecasting models).

There is a sense in which making predictions is very easy. Here's one for you: Mt. Vesuvius will experience another major eruption on the scale of AD 79, when it buried the city of Pompeii, tragically killing thousands of people (among them, the famous naturalist Pliny the Elder). While volcanologists are getting progressively better at predicting eruptions, it remains very difficult to forecast their size. So when an event like this arrives, it always comes as a bit of a shock. In any case, like I said, making predictions (unconditional forecasts) that will eventually come true is easy. There are thousands of people predicting that the world will end in 2015, 2016, etc. Some of these prognosticators will one day be proven correct. Those making predictions that fail to come true hide in the shadows for a while, but then re-emerge bolder than ever. I don't blame these soothsayers: there seems to be an insatiable demand for the likes of Nostradamus, and this is clearly a case of demand creating its own supply. In this spirit then, permit me to deliver my own forecast (remember, you heard it here first): there will be another major financial crisis on the scale experienced in 2008.

While we can't predict when the next major crisis will occur (I hope the Queen can forgive us), it is reasonable to expect experts to make good conditional forecasts. IF Vesuvius blows, THEN a lot of people are going to die. This type of conditional forecast should lead policymakers to think of ways in which the potential death toll can be avoided or reduced. Perhaps citizens should be prohibited from inhabiting dangerous areas. At the very least, an emergency evacuation procedure should be put in place. The same is true for financial crises. Perhaps restrictions should be placed on the exchange of some types of financial products. At the very least, an emergency response strategy should be put in place. Actually, there is an emergency response strategy--the Fed's emergency lending facility--which essentially worked according to plan in 2008-09. Now, maybe you don't like various aspects of the Fed's liquidity facility and that's fine (even if it did make a healthy profit for the U.S. taxpayer). But you can't say that economists hadn't predicted the possible need for such a facility. Indeed, the Fed was set up on the premise that financial crises would continue to afflict modern economies (by the way, financial crises were a common part of the economic landscape well before the founding of the Fed in 1913, so think carefully before you accuse the Fed of being the source of market instability).

Alright, so much for blaming economists and their less-than-crystal balls (hmm, a part of me says I should have edited that last sentence.) What else? Well, I notice a lot of blame also being heaped on modern macroeconomic theory and the professors of such theory. "What's Wrong With Macro?" the headlines wail (roll eyes here). Things have become so bad that we now see students telling professors how macro should be taught. Next we'll have teenagers telling their parents how to raise children. Well, we already have that of course. But the point is that while parents patiently hear out these protestations (having been young for much longer than the youth in question), they do not generally capitulate to them. I'm sorry, but you're only 16, I love you, and no, you can't have the keys to the car!

And yet, amazingly, we have to read things like this (source):
Wendy Carlin, professor of economics at University College London, who is directing a project at the Institute for New Economic Thinking, a think-tank set up by billionaire financier George Soros, said at a conference last year that students had become “disenchanted” and lecturers “embarrassed” by the way economics is taught.
Lecturers at UCL are "embarrassed" by the way economics is taught? What does this mean? Are they embarrassed about the way they personally teach their economics classes? Then they should be fired for incompetence. Are they embarrassed by the current state of macroeconomic theory? Then they should be fired and sent back to grad school (or the Russian front, if you're a Hogan's Heroes fan).

The dynamic general equilibrium (DGE) approach is the dominant methodology in macro today. I think this is so because of its power to organize thinking in a logically consistent manner, its ability to generate reasonable conditional forecasts, as well as its great flexibility--a property that permits economists of all political persuasions to make use of the apparatus.

For the uninitiated, let me describe in words what the DGE approach entails. First, it provides an explicit description of what motivates and constrains individual actors. This property of the model reflects a belief that individuals are incentivized--in particular, they are likely to respond in more or less predictable ways to changes in the economic environment to protect or further their interests. Second, it provides an explicit description of government policy. While this latter property sounds straightforward, it is in fact a rather delicate and important exercise. To begin, in a dynamic model, a "policy" does not correspond to a given action at a point in time. Rather, it corresponds to a full specification of (possibly state-contingent) actions over time. Moreover, there is no logical way in which to separate (say) "monetary" policy from "fiscal" policy. The policies of different government agencies are inextricably linked through a consolidated government budget constraint (see A Dirty Little Secret).  Thus, any statement concerning (say) the conduct of monetary policy must explicitly (or implicitly) contain a statement stipulating a consistent fiscal policy. The exercise is delicate in the sense that model predictions can depend sensitively on the exact details of how policies are designed and how they interact with each other. The exercise is important because the aforementioned sensitivity is quite likely present in real-world policy environments. Finally, the DGE approach insists that the policies adopted by private and public sector actors are in some sense "consistent" with each other. Notions of consistency are imposed through the use of solution concepts, like competitive equilibrium, Nash equilibrium, search and bargaining equilibrium, etc. Among other things, consistency requires that economic outcomes respect resource feasibility and budget constraints.
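
To put a little flesh on that verbal description, here is the skeleton of about the simplest possible DGE model, written in generic textbook notation (an illustration of the structure, not any particular model from the literature). A representative household chooses consumption and saving to solve

$$ \max_{\{c_t,\,k_{t+1}\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t u(c_t) \quad \text{subject to} \quad c_t + k_{t+1} = w_t + (1 + r_t)\,k_t, $$

taking the wage $w_t$, the return $r_t$, and the announced government policy as given. Firms solve an analogous profit-maximization problem, and an equilibrium is a set of prices and quantities such that everyone is optimizing and markets clear, so the implied allocation automatically respects the economy's resource constraint. More elaborate models, RBC, New Keynesian, models with financial frictions, add detail to this template rather than replace it.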

Now, what part of the above manifesto do you not like? The idea that people respond to incentives? Fine, go ahead and toss that assumption away. What do you replace it with? People behave like robots? Fine, go ahead and build your theory. What else? Are you going to argue against having to describe the exact nature of government policy? Do you want to do away with consistency requirements, like the respect for resource feasibility? Sure, go ahead. Maybe your theory explains some things a lot better than mine when you dispense with resource constraints. But do you really want to hang your hat on that interpretation of the world? An internally inconsistent theory that happens to be consistent with some properties of the data is not what I would call deep understanding. (Nor is an internally consistent theory inconsistent with the data something to be happy about, but that's the purpose of continued research.)

The point I want to make here is not that the DGE approach is the only way to go. I am not saying this at all. In fact, I personally believe in the coexistence of many different methodologies. The science of economics is not settled, after all. The point I am trying to make is that the DGE approach is not insensible (despite the claims of many critics who, I think, are sometimes driven by non-scientific concerns).

I should make clear too that by "the DGE approach," I do not limit the phrase to New Keynesian DSGE models or RBC models. The approach is much more general. While one might legitimately observe that these latter sets of models largely downplay the role of financial frictions and that practitioners should therefore not have relied so heavily on them, it would not be correct to say that DGE theory cannot account for financial crises. If you don't believe me, go read this (free) book by Franklin Allen and Douglas Gale: Understanding Financial Crises. While this book was published in 2007, it reflects a lifetime of work on the part of the authors. And if you take a look at the references, you'll discover a large and lively literature on financial crises well before 2007. In my view, this constitutes evidence that "mainstream" economists were thinking about episodes like 2008-09. If central bank economists were not paying too much attention to that branch of the literature, it is at most an indictment of them and not of the body of tools that were available to address the questions that needed to be answered. (In any case, as I mentioned above, I think the Fed did act according to the way theory generally suggests during the crisis).

Once again (lest I be misunderstood, which I'm afraid seems unavoidable these days) I am not claiming that DGE is the be-all and end-all of macroeconomic theory. There is still a lot we do not know and I think it would be a good thing to draw on the insights offered by alternative approaches. I do not, however, buy into the accusation that there is "too much math" in modern theory. Math is just a language. Most people do not understand this language and so they have a natural distrust of arguments written in it. Different languages can be used and abused. But this goes as much, if not more, for the vernacular as it does for specialized languages. Complaining that there is "too much math" in a particular theoretical exposition is like complaining that there is too much Hiragana in a haiku poem. Before criticizing, either learn the language or appeal to reliable translations (in the case of haiku poetry, you would not want to rely solely on translations hostile to Japanese culture...would you?).

As for the teaching of macroeconomics, if the crisis has led more professors to pay more attention to financial market frictions, then this is a welcome development. I also fall in the camp that stresses the desirability of teaching more economic history and placing greater emphasis on matching theory with data. However, it's often very hard, if not impossible, to fit everything into a one-semester course. Invariably, a professor must pick and choose. But while a particular course is necessarily limited in what can be presented, the constraint binds less tightly for a program as a whole. Thus, one could reasonably expect a curriculum to be modified to include more history, history of thought, heterodox approaches, etc. But this is a far cry from calling for the abandonment of DGE theory. Do not blame the tools for how they were (or were not) used.


(*) My colleague Doug Allen points out that Elizabeth II did not descend from Charles I. The Stuart line died out with Queen Anne, at which point George I was brought over from Germany. EII is a member of the House of Hanover/Windsor, and not a Stuart. Many also think that EII is a descendant of Elizabeth I, but of course she had no children and ended the Tudor line.

Public ownership means high risk

In DN on 30 March, Professor Finn R. Rørsund argues in favor of public ownership of the power companies.

Rørsund may well be right that not all arguments for privatization are equally good, but he has overlooked the most important one. Many municipalities take an entirely unnecessary risk by placing all their "savings" in a single company. Moving the investments into a diversified equity portfolio would eliminate this "company-specific" risk completely, without sacrificing return.
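
The textbook arithmetic behind this claim (my formulation, not Rørsund's): for an equally weighted portfolio of $N$ shares with average variance $\bar\sigma^2$ and average covariance $\overline{\mathrm{cov}}$, the portfolio variance is

$$ \sigma_p^2 = \frac{1}{N}\,\bar\sigma^2 + \Bigl(1 - \frac{1}{N}\Bigr)\,\overline{\mathrm{cov}}, $$

so the company-specific component shrinks toward zero as $N$ grows, while the expected return, being a simple average of the individual expected returns, is unaffected by the spreading.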

This is more than abstract theoretical musings. Such risk has cost Troms county and the municipality of Tromsø NOK 100 million in lost dividends every year. Had Troms Kraft been privatized 15 years ago, the municipalities would still have had a corresponding income today. Privately owned companies can of course also do badly, but then the loss does not hit the municipalities' services to their residents.

The law's ban on private majority owners creates a discount, locking the municipalities into unsellable power companies. The law thus forces municipalities to put all their eggs in one basket. That would be fine if there were a very good reason for it, but such a reason is hard to spot. Even first-year students understand that the market prices power company shares just like any other shares, and security concerns can in extreme cases be handled through expropriation.

It is, however, a precondition that the sale itself is carried out professionally so that market price is achieved. That was not always the case before the ban on sales to private owners was introduced.