afl-fuzz: nobody expects CDATA sections in XML

I made a very explicit, pragmatic design decision with afl-fuzz: for performance and reliability reasons, I did not want to get into static analysis or symbolic execution to understand what the program is actually doing with the data we are feeding to it. The basic algorithm for the fuzzer can be summed up simply as randomly mutating the input files and gently nudging the process toward new state transitions discovered in the targeted binary. That discovery part is done with the help of lightweight and extremely simple instrumentation injected by the compiler.
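
For the curious, the injected probes amount to something like the following (a minimal sketch of the idea in C, with illustrative names - not the exact code emitted by the instrumenting compiler wrapper):

#include <stdint.h>

/* 64 kB coverage map; in the real tool this lives in shared memory that
   afl-fuzz inspects after every execution of the target. */
static uint8_t coverage_map[1 << 16];
static uint16_t prev_location;

/* A probe like this is injected at every branch point; cur_location is a
   random constant baked in at compile time for that particular branch. */
static void log_branch(uint16_t cur_location) {
  /* Count the (previous branch, current branch) pair, so that new
     control-flow edges stand out from ones already seen. */
  coverage_map[cur_location ^ prev_location]++;
  /* Shift so that A->B and B->A land in different map cells. */
  prev_location = cur_location >> 1;
}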



I had a working theory that this would make the fuzzer a bit smarter than a potato, but I wasn't expecting any fireworks. So, when the algorithm managed to not only find some useful real-world bugs, but to successfully synthesize a JPEG file out of nothing, I was genuinely surprised by the outcome.



Of course, while it was an interesting result, it wasn't an impossible one. In the end, the fuzzer simply managed to wiggle its way through a long and winding sequence of conditionals that operated on individual bytes, making them well-suited for the guided brute-force approach. What seemed perfectly clear, though, was that the algorithm wouldn't be able to get past "atomic", large-search-space checks such as:



if (strcmp(header.magic_password, "h4ck3d by p1gZ")) goto terminate_now;



...or:



if (header.magic_value == 0x12345678) goto terminate_now;



This constraint made the tool less useful for properly exploring extremely verbose, human-readable formats such as HTML or JavaScript.



Some doubts started to set in when afl-fuzz effortlessly pulled out four-byte magic values and synthesized ELF files when testing programs such as objdump or file. As I later found out, this particular example is often used as a benchmark for complex static analysis or symbolic execution frameworks.

But still, guessing four bytes could have been just a happy accident. With fast targets, the fuzzer can pull off billions of execs per day on a single machine, so it could have been dumb luck.



(As an aside: to deal with strings, I had this very speculative idea of special-casing memory comparison functions such as strcmp() and memcmp() by replacing them with non-optimized versions that can be instrumented easily. I have one simple demo of that principle bundled with the fuzzer in experimental/instrumented_cmp/, but I never got around to actually implementing it in the fuzzer itself.)
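
The trick behind that demo is simply to swap the optimized libc routine for a naive byte-at-a-time loop, so that every character comparison becomes its own conditional - and therefore its own instrumentable branch. A minimal sketch of the idea (not the exact code bundled in experimental/instrumented_cmp/):

/* A deliberately naive strcmp(): each byte is tested in a separate branch,
   giving the instrumentation one new state transition per matched character
   instead of a single pass/fail verdict. */
int instrumented_strcmp(const char *a, const char *b) {
  while (*a && *a == *b) {
    a++;
    b++;
  }
  return (unsigned char)*a - (unsigned char)*b;
}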



Anyway, nothing quite prepared me for what the recent versions were capable of doing with libxml2. I seeded the session with:



<a b="c">d</a>



...and simply used that as the input for a vanilla copy of xmllint. I was merely hoping to stress-test the very basic aspects of the parser, without getting into any higher-order features of the language. Yet, after two days on a single machine, I found this buried in test case #4641 in the output directory:



...<![<CDATA[C%Ada b="c":]]]>...



What the heck?!



As most of you probably know, CDATA is a special, differently parsed section within XML, separated from everything else by fairly complex syntax - a nine-character sequence of bytes that can't be realistically discovered by just randomly flipping bits.



The finding is actually not magic; there are two possible explanations:





  • As a recent "well, it's cheap, so let's see what happens" optimization, AFL automatically sets -O3 -funroll-loops when calling the compiler for instrumented binaries, and some of the shorter fixed-string comparisons will actually just be expanded inline. For example, if the stars align just right, strcmp(buf, "foo") may be unrolled to:


    cmpb $0x66,0x200c32(%rip) # 'f'
    jne 4004b6
    cmpb $0x6f,0x200c2a(%rip) # 'o'
    jne 4004b6
    cmpb $0x6f,0x200c22(%rip) # 'o'
    jne 4004b6
    cmpb $0x0,0x200c1a(%rip) # NUL
    jne 4004b6


    ...which, by virtue of having a series of explicit and distinct branch points, can be readily instrumented on a per-character basis by afl-fuzz.



  • If that fails, it just so happens that some of the string comparisons in libxml2's parser.c are done using a bunch of macros that compile to similarly-structured code (as spotted by Ben Hawkes). This is presumably done so that the compiler can optimize it into a tree-style parser - whereas a linear sequence of strcmp() calls would lead to repeated and unnecessary comparisons of the already-examined chars.


    (Although done by hand in this particular case, the pattern is fairly common for automatically generated parsers of all sorts; a rough sketch of the idea is shown right below.)
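
    The pattern in question looks roughly like this - a paraphrase of the idea, not the actual libxml2 source:

    /* Compare a fixed keyword one character at a time; the && chain gives
       the compiler - and the instrumentation - a distinct branch per char. */
    #define CMP5(p, c1, c2, c3, c4, c5) \
      ((p)[0] == (c1) && (p)[1] == (c2) && (p)[2] == (c3) && \
       (p)[3] == (c4) && (p)[4] == (c5))

    /* A check for the nine-character "<![CDATA[" opener might then read: */
    static int is_cdata_start(const char *cur) {
      return cur[0] == '<' && cur[1] == '!' && cur[2] == '[' &&
             CMP5(cur + 3, 'C', 'D', 'A', 'T', 'A') && cur[8] == '[';
    }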


The progression of test cases seems to support both of these possibilities:




<![

<![C b="c">

<![CDb m="c">

<![CDAĹĹ@

<![CDAT<!

...




I find this result a bit spooky because it's an example of the fuzzer defiantly and secretly working around one of its intentional and explicit design limitations - and definitely not something I was aiming for =)



Of course, treat this first and foremost as a novelty; there are many other circumstances where similar types of highly verbose, text-based syntax would not be discoverable to afl-fuzz - or where, even if the syntax could be discovered through some special-cased shims, it would be a waste of CPU time to do it with afl-fuzz rather than with a simple syntax-aware, template-based tool.


(Coming up with an API to make template-based generators pluggable into AFL may be a good plan.)



By the way, here are some other gems from the randomly generated test cases:



<!DOCTY.

<?xml version="2.666666666666666666667666666">

<?xml standalone?>

Bitcoiners: Surely we can do Buiter than this?

Willem Buiter has a very nice piece critiquing the Swiss Gold Initiative; see here.

Unfortunately, Buiter starts talking about Bitcoin, making false analogies between the cryptocurrency and gold. He should have just focused on gold.

As it turns out, both gold and Bitcoin do share some important characteristics. I've written about this here: Why Gold and Bitcoin Make Lousy Money.

The false analogy is in equating the mining of gold with the mining of bitcoin. Paul Krugman made the same mistake here: Adam Smith Hates Bitcoin. Here is the offending passage in Buiter's notes:
John Maynard Keynes once described the Gold Standard as a “barbarous relic”. From a social perspective, gold held by central banks as part of their foreign exchange reserves merits the same label, in our view. The same holds for gold held idle in private vaults as a store of value. The cost and waste involved in getting the gold out of the ground only to put it back under ground in secure vaults is considerable. Mining the ore is environmentally damaging, especially if it involves open pit mining. Refining the gold causes further environmental risks. Historically, gold was extracted from its ores by using mercury, a toxic heavy metal, much of which was released into the atmosphere. Today, cyanide is used instead. While cyanide, another toxic substance, is broken down in the environment, cyanide spills (which occur regularly) can wipe out life in the affected bodies of water. Runoff from the mine or tailing piles can occur long after mining has ceased.
Even though, from a social efficiency perspective, the mining of new gold and the costly storage of existing gold for investment purposes are wasteful activities, they may be individually rational. The same applies to Bitcoin. Its mining is socially wasteful and environmentally damaging.
No, no, no and no. This analogy is all wrong.

Let me be clear about this. Bitcoin costs zero to produce. If one had control over the protocol, one could instantly and costlessly create as many bitcoins as one wanted. No environmental waste, no effort needed. The same is not true of gold.

But wait a minute, you might say. Doesn't mining for bitcoins require effort, consume resources, etc.? The answer is, yes, it does. But this fact does not make the analogy correct (though one can certainly understand why the analogy seems to be correct). Let me explain.

The purpose of gold miners is to prospect for gold. The purpose of Bitcoin miners is not to prospect for bitcoins. The purpose of Bitcoin miners is to process payment requests. A bank teller also processes payment requests. To say that miners are mining for bitcoin is like saying that tellers are mining for dollars. Understand? Let me try again.

Gold miners prospect for gold. But they do not necessarily get paid in gold. In fact, if they work for gold companies, they are likely to get paid in dollars. But they could get paid in gold, or anything else, for that matter. How they get paid does not take away their basic function, which is to discover new gold.

Bitcoin miners, like bank tellers, process payments. Miners, like tellers, want to get paid for the service they provide. It really does not matter how they are paid. As it turns out, miners are paid in the form of newly-issued bitcoins (as well as old bitcoins offered as service fees by transactors). But this does not mean that they are "mining for bitcoin" any more than a bank teller is "mining for dollars."

But isn't mining for bitcoin "wasteful?" In a sense, yes, but again, the "waste" here is not the same as the waste associated with commodity money. Again, let me explain.

We live in a "second-best" world, where people lie and cheat. In a first-best world, money would not even be necessary (see my post here: Evil is the Root of All Money). It is unfortunate that we need Bitcoin miners (and tellers) to process payments. But the resources consumed in this process are necessary, given the safeguards that have to be enforced to ensure the integrity of the payment system.

The waste associated with mining gold is that, in principle, gold money can be replaced by paper money (and please, do not give some weird "out of thin air" argument; see here). Paper money, like Bitcoin, and unlike gold, is (near) costless to produce.

Note: Of course, the limit on the supply of bitcoin is determined by a community consensus on following the protocol that adopts the 21M limit. Bitcoin advocates argue that this "hardwired" protocol that governs the supply of bitcoin is more reliable and less prone to political manipulation relative to existing central banking systems. This all may be true, but does not take away from my argument above concerning the false analogy between gold and bitcoin.


afl-fuzz: crash exploration mode

One of the most labor-intensive portions of any fuzzing project is the work needed to determine if a particular crash poses a security risk. A small minority of all fault conditions will have obvious implications; for example, attempts to write or jump to addresses that clearly come from the input file do not need any debate. But most crashes are more ambiguous: some of the most common issues are NULL pointer dereferences and reads from oddball locations outside the mapped address space. Perhaps they are a manifestation of an underlying vulnerability; or perhaps they are just harmless non-security bugs. Even if you prefer to err on the side of caution and treat them the same, the vendor may not share your view.



If you have to make the call, sifting through such crashes may require spending hours in front of a debugger - or, more likely, rejecting a good chunk of them based on not much more than a hunch. To help triage the findings in a more meaningful way, I decided to add a pretty unique and nifty feature to afl-fuzz: the brand new crash exploration mode, enabled via -C.



The idea is very simple: you take a crashing test case and give it to afl-fuzz as a starting point for the automated run. The fuzzer then uses its usual feedback mechanisms and genetic algorithms to see how far it can get within the instrumented codebase while still keeping the program in the crashing state. Mutations that stop the crash from happening are thrown away; so are the ones that do not alter the execution path in any appreciable way. The occasional mutation that makes the crash happen in a subtly different way will be kept and used to seed subsequent fuzzing rounds later on.
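
In practice, the invocation looks just like a normal fuzzing run, except that the input directory holds the crashing file and the -C flag is added; the paths and target below are made up for illustration:

$ ./afl-fuzz -i dir_with_crashing_case -o out_dir -C ./target_app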



The beauty of this mode is that it very quickly produces a small corpus of related but somewhat different crashes that can be effortlessly compared to pretty accurately estimate the degree of control you have over the faulting address, or to figure out whether you can get past the initial out-of-bounds read by nudging it just the right way (and if the answer is yes, you probably get to see what happens next). It won't necessarily beat thorough code analysis, but it's still pretty cool: it lets you make a far more educated guess without having to put in any work.



As an admittedly trivial example, let's take a suspect but ambiguous crash in unrtf, found by afl-fuzz in its normal mode:



unrtf[7942]: segfault at 450 ip 0805062b sp bf957e60 error 4 in unrtf[8048000+1c000]


When fed to the crash explorer, the fuzzer took just several minutes to notice that by changing {\cb-44901990 in the converted RTF file to printable representations of other negative integers, it could quickly trigger faults at arbitrary addresses of its choice, corresponding mostly linearly to the chosen integers:




unrtf[28809]: segfault at 88077782 ip 0805062b sp bff00210 error 4 in unrtf[8048000+1c000]

unrtf[26656]: segfault at 7271250 ip 0805062b sp bf957e60 error 4 in unrtf[8048000+1c000]



Given a bit more time, it would also almost certainly notice that choosing values within the mapped address space gets it past the crashing location and permits even more fun. So, automatic exploit writing next?

Swift UIScrollView with Auto-Layout and Size Classes

UIScrollView with Auto-Layout and Size Classes

A simple tutorial on making a scrollable view that can accommodate any screen size. Download Link.




1.) First, of course, we need a scroll view. Drag a scroll view into the storyboard and make sure that its distance from every side is zero.


2.) To make sure that it will accommodate any screen size, we have to pin all of its sides to its superview. Click the newly added scroll view (in case you clicked somewhere else) and go to "Editor" -> "Pin" -> "Leading Space to SuperView". Repeat the same process for the trailing, top, and bottom edges.

3.) Now we need a container to hold our UI elements. Drag a plain view inside the scroll view and make sure that its distance to every side is zero, similar to the scroll view.

4.) To make sure that it will accommodate any screen size, repeat the same pinning process as in step 2.

5.) Since we just need to be able to scroll up and down, we need to fix the view's width. Select the newly added view, "control-drag" it to the scroll view, and then select "Equal Widths".

6.) For the height, select the view and "control-drag" it to itself, then select "Height". You will be given a height of 580. You can change this value by clicking the height constraint, going to the Attributes inspector, and changing the "Constant" to any value you want.

 And that is it. You can now build and run the project to see how it works.

Japan: Some Perspective

So Japan is in recession.  And it's all so unexpected. Ring the alarm bells!

Well, hold on for a moment. Take a look at the following diagram, which tracks the Japanese real GDP per capita since 1995 (normalized to equal 100 in that year). I also decompose the GDP into its expenditure components: private consumption, government consumption, private investment, and government investment (I ignore net exports). The GDP numbers go up to the 3rd quarter, the other series go up to only the 2nd quarter.



In terms of what we should have expected, I think it's fair to say that most economists would have predicted the qualitative nature of the observed dynamic in response to an anticipated tax hike. That is, we'd expect people to substitute economic activity intertemporally--front-loading activity ahead of the tax hike, then curtailing it just after. And qualitatively, that's exactly what we see in the graph above. But does the drop-off in real per capita GDP really deserve all the attention it's getting? I don't think so. The fact that the economy was a little weaker in the 3rd quarter than expected (the two consecutive quarters of GDP contraction are what justified labeling the event a "recession") is not really something to justify wringing one's hands over. Not yet, at least.

By the way, if you're interested in reading more about the Koizumi boom era, see my earlier post here: Another look at the Koizumi boom.

Roger Farmer on labor market clearing.

While I'm a huge fan of Roger Farmer's work, I think he gets this one a little wrong:  Repeat After Me: The Quantity of Labor Demanded is Not Always Equal to the Quantity Supplied. I am, however, sympathetic to the substantive part of his message. Let me explain.

The idea of "supply" and "demand" is rooted in Marshall's scissors (a partial equilibrium concept). The supply and demand framework is an extremely useful and powerful way of organizing our thinking on a great many matters. And it is easy to understand. (I have a pet theory that if you really want to have an idea take hold, you have to be able to represent it in the form of a cross. The Marshallian cross. The Keynesian cross. Maybe even the Christian cross.)

The Marshallian perspective is one in which commodities are traded on impersonal markets--anonymous agents trading corn and human labor alike in sequences of spot trades. Everything that you would ever need to buy or sell is available (absent intervention) at a market-clearing price. The idea that you may want to seek out and form long-lasting relationships with potential trading partners (and that such relationships are difficult to form) plays no role in the exchange process--an abstraction that is evidently useful in some cases, but not in others.

I think what Roger means to say is that (repeat after me) the abstraction of anonymity, when describing the exchange for labor services, is a bad one. And on this, I would wholeheartedly agree (I've discussed some of these issues in an earlier post here).

Once one takes seriously the notion of relationship formation, as is done in the labor market search literature, then the whole concept of "supply and demand" analysis goes out the window. That's because well-defined supply and demand schedules do not exist in decentralized search environments. Wage rates are determined through bargaining protocols, not S = D. To say, as Roger does, that demand does not always equal supply presupposes the existence of Marshall's scissors in the first place (or, more generally, of a complete set of Arrow-Debreu markets).
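
As a concrete example of such a protocol, think of the canonical search-and-matching setup (the notation here is purely illustrative): once a worker and a firm meet, the wage is set by generalized Nash bargaining over the match surplus, something like w = argmax [W(w) - U]^b [J(w) - V]^(1-b), where W and U are the worker's values of employment and unemployment, J and V are the firm's values of a filled and a vacant job, and b is the worker's bargaining weight. Nothing in that expression resembles a market-clearing condition.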

And in any case, how can we know whether labor markets do not "clear?" The existence of unemployment? I don't think so. The neoclassical model is one in which all trade occurs in centralized locations. In the context of the labor market, workers are assumed to know the location of their best job opportunity. In particular, there is no need to search (the defining characteristic of unemployment according to standard labor force surveys). The model is very good at explaining the employment and non-employment decision, or how many hours to allocate to work and leisure over a given time frame. The model is not designed to explain search. Hence it is not designed to explain unemployment. (There is even a sense in which the neoclassical model can explain "involuntary" employment and non-employment. What is "involuntary" are the parameters that describe an individual's skill, aptitude, etc. Given a set of unfortunate attributes, a person may (reluctantly) choose to work or not. Think of the working poor, or those who are compelled to exit the labor market because of an illness.)

Having said this, there is nothing inherent in the neoclassical model which says that labor market outcomes are always ideal. A defining characteristic of Roger's work has been the existence of multiple equilibria. It is quite possible for competitive labor markets to settle on sub-optimal outcomes where all markets clear. See Roger's paper here, for example.

The notion that supply might not equal demand may not have anything to do with understanding macroeconomic phenomena like unemployment. I think this is important to understand because if we phrase things the way Roger does, people accustomed to thinking of the world through the lens of Marshall's scissors are automatically going to look for ways in which the price mechanism fails (sticky wages, for example). And then, once the only plausible inefficiency is so (wrongly) identified, the policy implication follows immediately: the government needs to tax/subsidize/control wage rates. In fact, the correct policy action may take a very different form (e.g., skills retraining programs, transportation subsidies, job finding centers, etc.)

The art of picking numbers that fit

There is no basis for claiming that state-owned companies will deliver better returns than the oil fund. For unknown reasons, the labor movement nevertheless keeps claiming the opposite.

The latest is LO's second deputy leader Hans-Christian Gabrielsen, in an op-ed in Klassekampen. Gabrielsen has searched for examples of a company that has beaten the index in some period or other, and sure enough, he found one!

The cherry Gabrielsen has picked is Kongsberg Gruppen in the period 2006-2011. In that particular period the company returned a full 26%!

The problem is that there are other examples showing the opposite. Norsk Hydro produced a loss in that period. DNB yielded almost no return at all.

If we go looking, it is very easy to find companies in the oil fund's portfolio that have done well. Since 1998, Apple has given an annual return of 33%. Not only that: if we choose the period from March 11, 2009 to October 17, 2012, the annual return rises to a full 78%!

Can we then conclude that investments in the oil fund yield much better returns than state-owned companies? I would argue the answer is no. This is just one of many investments in the oil fund. I wonder what Gabrielsen thinks.

A debate in which the participants hit each other over the head with numbers painstakingly selected to fit their own arguments is, frankly, rather uninteresting. State-owned companies appear to do roughly as well, or as poorly, as other companies. Returns are not an argument for state ownership.

Incorrect claims from Arbeiderpartiet and LO about the excess returns of state-owned companies have become frequent. That is a shame, because it erodes the credibility of the labor movement, which is otherwise fairly accurate and responsible. Earlier, Arbeiderpartiet's Else-May Botten, Marianne Martinsen and Jonas Gahr Støre picked their own favorite examples. It will be interesting to see who is next.

Exploitation modelling matters more than we think

Our own Krzysztof Kotowicz put together a pretty neat site called the Bughunter University. The first part of the site deals with some of the most common non-qualifying issues that are reported to our Vulnerability Reward Program. The entries range from mildly humorous to ones that still attract some debate; it's a pretty good read, even if just for the funny bits.



Just as interestingly, the second part of the site also touches on topics that go well beyond the world of web vulnerability rewards. One page in particular deals with the process of thinking through, and then succinctly and carefully describing, the hypothetical scenario surrounding the exploitation of the bugs we find - especially if the bugs are major, novel, or interesting in any other way.



This process is often shunned as unnecessary; more often than not, I see this discussion missing, or done in a perfunctory way, in conference presentations, research papers, or even the reports produced as the output of commercial penetration tests. That's unfortunate: we tend to be more fallible than we think we are. The seemingly redundant exercise in attack modelling forces us to employ a degree of intellectual rigor that often helps spot fatal leaps in our thought process and correct them early on.



Perhaps the most common fallacy of this sort is the construction of security attacks that fully depend on exposure to pre-existing risks of a magnitude comparable to or greater than the danger posed by the new attack. Familiar examples of this trend may include:



  • Attacks on account data that can be performed only if the attacker already has shell-level access to said account. Some of the research in this category deals with the ability to extract HTTP cookies by examining process memory or disk, or to backdoor the browser by placing a DLL in a directory not accessible to other UIDs. Other publications may focus on exploiting buffer overflows in non-privileged programs through a route that is unlikely to ever be exposed to the outside world.



  • Attacks that require physical access to brick or otherwise disable a commodity computing device. After all, in almost all cases, having the attacker bring a hammer or wire cutters will work just as well.



  • Web application security issues that are exploitable only against users who are using badly outdated browsers or plugins. Sure, the attack may work - but so will dozens of remote code execution and SOP bypass flaws that the client software is already known to be vulnerable to.



  • New, specific types of attacks that work only against victims who already exhibit behaviors well-understood to carry unreasonable risk - say, the willingness to retype account credentials without looking at the address bar, or to accept and execute unsolicited downloads.



  • Sleight-of-hand vectors that assume, without explaining why, that the attacker can obtain or tamper with some types of secrets (e.g., capability-bearing URLs), but not others (e.g., user's cookies, passwords, server's private SSL keys), despite their apparent similarity.





Some theorists argue that security issues exist independently of exploitation vectors, and that they must be remedied regardless of whether one can envision a probable attack vector. Perhaps this distinction is useful in some contexts - but it is still our responsibility to precisely and unambiguously differentiate between immediate hazards and the more abstract thought experiments of the latter kind.

How to get git build id using maven

There are times when the git build number is important for a release, especially during development, when releases are frequent. So if we want to append the build number to our page, how do we automate it?

To achieve this we will need two Maven plugins: org.codehaus.mojo:buildnumber-maven-plugin and com.google.code.maven-replacer-plugin:replacer.

And this is how to define these plugins in your war project.

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>buildnumber-maven-plugin</artifactId>
  <version>1.3</version>
  <executions>
    <execution>
      <phase>validate</phase>
      <goals>
        <goal>create</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <doCheck>false</doCheck>
    <doUpdate>false</doUpdate>
    <shortRevisionLength>41</shortRevisionLength>
  </configuration>
</plugin>
This plugin produces the buildNumber variable, which can be an id, a date, or a git revision. We will use the last.

With the following assumptions:
1.) The xhtml pages that we want to process for replacement are inside the template folder.
2.) We will replace two tokens: @VERSION_NUMBER@ and @BUILD_NUMBER@.

<plugin>
  <groupId>com.google.code.maven-replacer-plugin</groupId>
  <artifactId>replacer</artifactId>
  <version>1.5.3</version>
  <executions>
    <execution>
      <phase>prepare-package</phase>
      <goals>
        <goal>replace</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <includes>
      <include>${project.build.directory}/**/layout/*.xhtml</include>
    </includes>
    <replacements>
      <replacement>
        <token>@VERSION_NUMBER@</token>
        <value>${project.version}</value>
      </replacement>
      <replacement>
        <token>@BUILD_NUMBER@</token>
        <value>[${buildNumber}]</value>
      </replacement>
    </replacements>
  </configuration>
</plugin>
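
So, assuming one of the processed xhtml pages contains a footer along these lines (the markup is purely illustrative):

<div id="footer">Version @VERSION_NUMBER@ - Build @BUILD_NUMBER@</div>

...the prepare-package phase will rewrite the copy under target/ with the project version and the bracketed git revision substituted in.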

A dirty little secret


Shhh...I told you *nothing!* 
There's been a lot of talk lately about the so-called "Neo-Fisherite" proposition that higher nominal interest rates beget higher inflation rates (and vice-versa for lower nominal interest rates). I thought I'd weigh in here with my own 2 cents worth on the controversy.

Let's start with something that most people find uncontroversial, the Fisher equation:

[FE]  R(t) = r(t) + Π(t+1)

where R is the gross nominal interest rate, r is the gross real interest rate, and Π is the expected gross inflation rate (all variables logged).

I like to think of the Fisher equation as a no-arbitrage condition, where r represents the real rate of return on (say) a Treasury Inflation Protected Security (TIPS) and (R - Π) represents the expected real rate of return on a nominal Treasury. If the two securities share similar risk and liquidity characteristics, then we'd expect the Fisher equation to hold. If it did not hold, a nimble bond trader would be able to make riskless profits. Nobody believes that such opportunities exist for any measurable length of time.
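
To spell out the no-arbitrage step: equating expected returns across the two securities requires, in gross terms, (gross nominal rate) = (gross real rate) x (expected gross inflation); taking logs of both sides delivers exactly [FE], R(t) = r(t) + Π(t+1).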

Let me assume that the real interest rate is fixed (the gist of the argument holds even if we relax this assumption). In this case, the Fisher equation tells us that higher nominal interest rates must be associated with higher inflation expectations (and ultimately, higher inflation, if expectations are rational). But association is not the same thing as causation. And the root of the controversy seems to lie in the causal assumptions embedded in the Neo-Fisherite view.

The conventional (Monetarist) view is that (for a "stable" demand for real money balances), an increase in the money growth rate leads to an increase in inflation expectations, which leads bond holders to demand a higher nominal interest rate as compensation for the inflation tax. The unconventional (Neo-Fisherite) view is that lowering the nominal interest rate leads to...well, it leads to...a lower inflation rate...because that's what the Fisher equation tells us. Hmm, no kidding?
 
The lack of a good explanation for the economics underlying the causal link between R and Π is what leads commentators like Nick Rowe to tear at his beard. But the lack of clarity on this dimension by some writers does not mean that a good explanation cannot be found. And indeed, I think Nick gets it just about right here. The reconciliation I seek is based on what Eric Leeper has labeled a dirty little secret; namely, that "for monetary policy to successfully control inflation, fiscal policy must behave in a particular, circumscribed manner." (Pg. 14. Leeper goes on to note that both Milton Friedman and James Tobin were explicit about this necessity.)

The starting point for answering the question of how a policy affects the economy is to be very clear about what one means by policy. Most people do not get this very important point: a policy is not just an action, it is a set of rules. And because monetary and fiscal policy are tied together through a consolidated government budget constraint, a monetary policy is not completely specified without a corresponding (and consistent) fiscal policy.
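
For concreteness, one standard way of writing the consolidated constraint for period t is:

[GBC]  P(t)G(t) + R(t-1)B(t-1) = P(t)T(t) + B(t) + [ M(t) - M(t-1) ]

where G is real government spending, T real taxes, B nominal bonds, M the monetary base, and P the price level: program spending plus maturing debt must be covered by taxes, new borrowing, or money creation. Any rule for R and M therefore pins down what the fiscal side must do, and vice versa.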

When Monetarists claim that increasing the rate of money growth leads to inflation, they assert that this will be so regardless of how the fiscal authority behaves. Implicitly, the fiscal authority is assumed to (passively) follow a set of rules: i.e., use the new money to cut taxes (via helicopter drops), finance government spending, or pay interest on money. It really doesn't matter which. (For some push back on this view, see Price Stability: Is a Tough Central Banker Enough? by Lawrence Christiano and Terry Fitzgerald.)

When Neo-Fisherites claim that increasing the nominal interest rate leads to inflation, the fiscal authority is also implicitly assumed to follow a specific set of rules that passively adjust to be consistent with the central bank's policy. At the end of the day, the fiscal authority must increase the rate of growth of its nominal debt (for a strictly positive nominal interest rate and a constant money-to-bond ratio, the supply of money must be rising at this same rate.) At the same time, this higher rate of debt-issue is used to finance a higher primary budget deficit (just think helicopter drops again).

Well, putting things this way makes it seem like there's no substantive difference between the two views. Personally, I think this is more-or-less correct, and I believe that Nick Rowe might agree with me. I hesitate a bit, however, because there may be some hard-core "Neo-Wicksellians" out there who try to understand the interest rate - inflation dynamic without any reference to fiscal policy and nominal aggregates. (Not sure if this paper falls in this class, but I plan to read it soon and comment on it: The Perils of Nominal Targets, by Roc Armenter).

If the view I expressed above is correct, then it suggests that just limiting attention to (say) the dynamics of the Fed's balance sheet is not very informative without reference to the perceived stance of fiscal policy and how it interacts with monetary policy. Macroeconomists have of course known this for a long time but have, for various reasons, downplayed the interplay for stretches of time (e.g., during the Great Moderation). Maybe it's time to be explicit again. Let's help Nick keep his beard.
 

Pulling JPEGs out of thin air

This is an interesting demonstration of the capabilities of afl; I was actually pretty surprised that it worked!



$ mkdir in_dir
$ echo 'hello' >in_dir/hello
$ ./afl-fuzz -i in_dir -o out_dir ./jpeg-9a/djpeg



In essence, I created a text file containing just "hello" and asked the fuzzer to keep feeding it to a program that expects a JPEG image (djpeg is a simple utility bundled with the ubiquitous IJG jpeg image library; libjpeg-turbo should also work). Of course, my input file does not resemble a valid picture, so it gets immediately rejected by the utility:



$ ./djpeg '../out_dir/queue/id:000000,orig:hello'
Not a JPEG file: starts with 0x68 0x65



Such a fuzzing run would normally be completely pointless: there is essentially no chance that a "hello" could ever be turned into a valid JPEG by a traditional, format-agnostic fuzzer, since the probability that dozens of random tweaks would align just right is astronomically low.
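
(To put a rough number on it: even if only two dozen specific byte values had to fall into place at fixed offsets, the odds of that happening in a single random attempt would be about 1 in 256^24, or roughly 10^-58 - far beyond any realistic number of executions.)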



Luckily, afl-fuzz can leverage lightweight assembly-level instrumentation to its advantage - and within a millisecond or so, it notices that although setting the first byte to 0xff does not change the externally observable output, it triggers a slightly different internal code path in the tested app. Equipped with this information, it decides to use that test case as a seed for future fuzzing rounds:



$ ./djpeg '../out_dir/queue/id:000001,src:000000,op:int8,pos:0,val:-1,+cov'
Not a JPEG file: starts with 0xff 0x65



When later working with that second-generation test case, the fuzzer almost immediately notices that setting the second byte to 0xd8 does something even more interesting:



$ ./djpeg '../out_dir/queue/id:000004,src:000001,op:havoc,rep:16,+cov'
Premature end of JPEG file
JPEG datastream contains no image



At this point, the fuzzer managed to synthesize the valid file header - and actually realized its significance. Using this output as the seed for the next round of fuzzing, it quickly starts getting deeper and deeper into the woods. Within several hundred generations and several hundred million execve() calls, it figures out more and more of the essential control structures that make a valid JPEG file - SOFs, Huffman tables, quantization tables, SOS markers, and so on:

$ ./djpeg '../out_dir/queue/id:000008,src:000004,op:havoc,rep:2,+cov'
Invalid JPEG file structure: two SOI markers
...
$ ./djpeg '../out_dir/queue/id:001005,src:000262+000979,op:splice,rep:2'
Quantization table 0x0e was not defined
...
$ ./djpeg '../out_dir/queue/id:001282,src:001005+001270,op:splice,rep:2,+cov' >.tmp; ls -l .tmp
-rw-r--r-- 1 lcamtuf lcamtuf 7069 Nov 7 09:29 .tmp
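
For readers who do not live and breathe JPEG, the structures mentioned above are all introduced by standard two-byte marker codes; a small reference sketch (values from the JPEG spec, shown here purely for orientation):

/* A few of the standard JPEG markers the fuzzer had to stumble upon. */
enum jpeg_marker {
  JPEG_SOI  = 0xFFD8,  /* start of image - the two bytes guessed above */
  JPEG_SOF0 = 0xFFC0,  /* start of frame (baseline DCT)                */
  JPEG_DHT  = 0xFFC4,  /* define Huffman table                         */
  JPEG_DQT  = 0xFFDB,  /* define quantization table                    */
  JPEG_SOS  = 0xFFDA,  /* start of scan                                */
  JPEG_EOI  = 0xFFD9   /* end of image                                 */
};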



The first image, hit after about six hours on an 8-core system, looks very unassuming: it's a blank grayscale image, 3 pixels wide and 784 pixels tall. But the moment it is discovered, the fuzzer starts using the image as a seed - rapidly producing a wide array of more interesting pics for every new execution path:






Of course, synthesizing a complete image out of thin air is an extreme example, and not necessarily a very practical one. But more prosaically, fuzzers are meant to stress-test every feature of the targeted program. With instrumented, generational fuzzing, lesser-known features (e.g., progressive, black-and-white, or arithmetic-coded JPEGs) can be discovered and locked onto without requiring a giant, high-quality corpus of diverse test cases to seed the fuzzer with.



The cool part of the libjpeg demo is that it works without any special preparation: there is nothing special about the "hello" string, the fuzzer knows nothing about image parsing, and is not designed or fine-tuned to work with this particular library. There aren't even any command-line knobs to turn. You can throw afl-fuzz at many other types of parsers with similar results: with bash, it will write valid scripts; with giflib, it will make GIFs; with fileutils, it will create and flag ELF files, Atari 68xxx executables, x86 boot sectors, and UTF-8 with BOM. In almost all cases, the performance impact of instrumentation is minimal, too.



Of course, not all is roses; at its core, afl-fuzz is still a brute-force tool. This makes it simple, fast, and robust, but also means that certain types of atomically executed checks with a large search space may pose an insurmountable obstacle to the fuzzer; a good example of this may be:



if (strcmp(header.magic_password, "h4ck3d by p1gZ")) goto terminate_now;



In practical terms, this means that afl-fuzz won't have as much luck "inventing" PNG files or non-trivial HTML documents from scratch - and will need a starting point better than just "hello". To consistently deal with code constructs similar to the one shown above, a general-purpose fuzzer would need to understand the operation of the targeted binary on a wholly different level. There is some progress on this in academia, but frameworks that can pull this off across diverse and complex codebases in a quick, easy, and reliable way are probably still years away.



PS. Several folks asked me about symbolic execution and other inspirations for afl-fuzz; I put together some notes in this doc.

How to automate undeploy and redeployment in JBoss using Jenkins

Deploy on the same server where Jenkins is running:
JBOSS_HOME/bin/jboss-cli.sh -c --user="czetsuya" --password="broodcamp.com" --commands="undeploy broodcamp.war,deploy $WORKSPACE/broodcamp/target/broodcamp.war"

Deploy on a different server:
JBOSS_HOME/bin/jboss-cli.sh controller=127.0.0.3 -c --user="czetsuya" --password="broodcamp.com" --commands="undeploy broodcamp.war,deploy $WORKSPACE/broodcamp/target/broodcamp.war"

REST Testing with Arquillian in JBoss

This article will explain how we can automate REST web service testing using Arquillian and JBoss web server.

First, you must create a javaee6 war (non-blank) project from the jboss-javaee6 archetype. This should create a project with a Member model, service, repository, controller, and web service resource. The archetype will also generate a test case for the controller with a test data source and persistence file.

In case you don't have the archetype, here are the most important classes. The model:
package com.broodcamp.jboss_javaee6_war.model;

import java.io.Serializable;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;
import javax.validation.constraints.Digits;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Pattern;
import javax.validation.constraints.Size;
import javax.xml.bind.annotation.XmlRootElement;

import org.hibernate.validator.constraints.Email;
import org.hibernate.validator.constraints.NotEmpty;

@SuppressWarnings("serial")
@Entity
@XmlRootElement
@Table(uniqueConstraints = @UniqueConstraint(columnNames = "email"))
public class Member implements Serializable {

@Id
@GeneratedValue
private Long id;

@NotNull
@Size(min = 1, max = 25)
@Pattern(regexp = "[^0-9]*", message = "Must not contain numbers")
private String name;

@NotNull
@NotEmpty
@Email
private String email;

@NotNull
@Size(min = 10, max = 12)
@Digits(fraction = 0, integer = 12)
@Column(name = "phone_number")
private String phoneNumber;

public Long getId() {
return id;
}

public void setId(Long id) {
this.id = id;
}

public String getName() {
return name;
}

public void setName(String name) {
this.name = name;
}

public String getEmail() {
return email;
}

public void setEmail(String email) {
this.email = email;
}

public String getPhoneNumber() {
return phoneNumber;
}

public void setPhoneNumber(String phoneNumber) {
this.phoneNumber = phoneNumber;
}
}
Web service resource:
package com.broodcamp.jboss_javaee6_war.rest;

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.logging.Logger;

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.persistence.NoResultException;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.validation.ValidationException;
import javax.validation.Validator;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import com.broodcamp.jboss_javaee6_war.data.MemberRepository;
import com.broodcamp.jboss_javaee6_war.model.Member;
import com.broodcamp.jboss_javaee6_war.service.MemberRegistration;

/**
* JAX-RS Example
* <p/>
* This class produces a RESTful service to read/write the contents of the
* members table.
*/
@RequestScoped
public class MemberResourceRESTService implements IMemberResourceRESTService {

@Inject
private Logger log;

@Inject
private Validator validator;

@Inject
private MemberRepository repository;

@Inject
MemberRegistration registration;

@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Member> listAllMembers() {
return repository.findAllOrderedByName();
}

@GET
@Path("/{id:[0-9][0-9]*}")
@Produces(MediaType.APPLICATION_JSON)
public Member lookupMemberById(@PathParam("id") long id) {
Member member = repository.findById(id);
if (member == null) {
throw new WebApplicationException(Response.Status.NOT_FOUND);
}
return member;
}

/**
* Creates a new member from the values provided. Performs validation, and
* will return a JAX-RS response with either 200 ok, or with a map of
* fields, and related errors.
*/
@Override
public Response createMember(Member member) {

Response.ResponseBuilder builder = null;

try {
// Validates member using bean validation
validateMember(member);

registration.register(member);

// Create an "ok" response
builder = Response.ok();
} catch (ConstraintViolationException ce) {
// Handle bean validation issues
builder = createViolationResponse(ce.getConstraintViolations());
} catch (ValidationException e) {
// Handle the unique constrain violation
Map<String, String> responseObj = new HashMap<String, String>();
responseObj.put("email", "Email taken");
builder = Response.status(Response.Status.CONFLICT).entity(
responseObj);
} catch (Exception e) {
// Handle generic exceptions
Map<String, String> responseObj = new HashMap<String, String>();
responseObj.put("error", e.getMessage());
builder = Response.status(Response.Status.BAD_REQUEST).entity(
responseObj);
}

return builder.build();
}

/**
* <p>
* Validates the given Member variable and throws validation exceptions
* based on the type of error. If the error is standard bean validation
* errors then it will throw a ConstraintValidationException with the set of
* the constraints violated.
* </p>
* <p>
* If the error is caused because an existing member with the same email is
* registered it throws a regular validation exception so that it can be
* interpreted separately.
* </p>
*
* @param member
* Member to be validated
* @throws ConstraintViolationException
* If Bean Validation errors exist
* @throws ValidationException
* If member with the same email already exists
*/
private void validateMember(Member member)
throws ConstraintViolationException, ValidationException {
// Create a bean validator and check for issues.
Set<ConstraintViolation<Member>> violations = validator
.validate(member);

if (!violations.isEmpty()) {
throw new ConstraintViolationException(
new HashSet<ConstraintViolation<?>>(violations));
}

// Check the uniqueness of the email address
if (emailAlreadyExists(member.getEmail())) {
throw new ValidationException("Unique Email Violation");
}
}

/**
* Creates a JAX-RS "Bad Request" response including a map of all violation
* fields, and their message. This can then be used by clients to show
* violations.
*
* @param violations
* A set of violations that needs to be reported
* @return JAX-RS response containing all violations
*/
private Response.ResponseBuilder createViolationResponse(
Set<ConstraintViolation<?>> violations) {
log.fine("Validation completed. violations found: " + violations.size());

Map<String, String> responseObj = new HashMap<String, String>();

for (ConstraintViolation<?> violation : violations) {
responseObj.put(violation.getPropertyPath().toString(),
violation.getMessage());
}

return Response.status(Response.Status.BAD_REQUEST).entity(responseObj);
}

/**
* Checks if a member with the same email address is already registered.
* This is the only way to easily capture the
* "@UniqueConstraint(columnNames = "email")" constraint from the Member
* class.
*
* @param email
* The email to check
* @return True if the email already exists, and false otherwise
*/
public boolean emailAlreadyExists(String email) {
Member member = null;
try {
member = repository.findByEmail(email);
} catch (NoResultException e) {
// ignore
}
return member != null;
}
}
The member resource interface, to be used in Arquillian testing:
package com.broodcamp.jboss_javaee6_war.rest;

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import com.broodcamp.jboss_javaee6_war.model.Member;

@Consumes(MediaType.TEXT_PLAIN)
@Produces(MediaType.TEXT_PLAIN)
@Path("/members")
public interface IMemberResourceRESTService {

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
@Path("/")
public Response createMember(Member member);

}

And finally, the test class that creates the archive and deploys it on the JBoss container:
package com.broodcamp.jboss_javaee6_war.test;

import java.net.URL;

import javax.inject.Inject;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

import org.apache.http.HttpStatus;
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.extension.rest.client.ArquillianResteasyResource;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.test.api.ArquillianResource;
import org.jboss.shrinkwrap.api.Archive;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.broodcamp.jboss_javaee6_war.model.Member;
import com.broodcamp.jboss_javaee6_war.rest.IMemberResourceRESTService;
import com.broodcamp.jboss_javaee6_war.rest.JaxRsActivator;
import com.broodcamp.jboss_javaee6_war.service.MemberRegistration;

@RunWith(Arquillian.class)
public class MemberResourceRESTServiceTest {

private Logger log = LoggerFactory
.getLogger(MemberResourceRESTServiceTest.class);

@ArquillianResource
private URL deploymentURL;

@Deployment(testable = false)
public static Archive createTestArchive() {
return ShrinkWrap
.create(WebArchive.class, "rest.war")
.addClass(JaxRsActivator.class)
.addPackages(true, "com/broodcamp/jboss_javaee6_war")
.addAsResource("META-INF/test-persistence.xml",
"META-INF/persistence.xml")
.addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
// Deploy our test datasource
.addAsWebInfResource("test-ds.xml");
}

@Inject
MemberRegistration memberRegistration;

@Test
public void testCreateMember(
@ArquillianResteasyResource IMemberResourceRESTService memberResourceRESTService)
throws Exception {
Member newMember = new Member();
newMember.setName("czetsuya");
newMember.setEmail("czetsuya@gmail.com");
newMember.setPhoneNumber("1234567890");

Response response = memberResourceRESTService.createMember(newMember);

log.info("Response=" + response.getStatus());

Assert.assertEquals(response.getStatus(), HttpStatus.SC_OK);
}

}

To run this we need the following dependencies.
For Arquillian testing:
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<scope>test</scope>
</dependency>

<dependency>
<groupId>org.jboss.arquillian.junit</groupId>
<artifactId>arquillian-junit-container</artifactId>
<scope>test</scope>
</dependency>

<dependency>
<groupId>org.jboss.arquillian.protocol</groupId>
<artifactId>arquillian-protocol-servlet</artifactId>
<scope>test</scope>
</dependency>
For REST testing:
<dependency>
<groupId>org.jboss.arquillian.extension</groupId>
<artifactId>arquillian-rest-client-impl-3x</artifactId>
<version>1.0.0.Alpha3</version>
</dependency>

<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-jackson-provider</artifactId>
<version>${version.resteasy}</version>
<scope>test</scope>
</dependency>

<dependency>
<groupId>org.jboss.arquillian.extension</groupId>
<artifactId>arquillian-rest-client-impl-jersey</artifactId>
<version>1.0.0.Alpha3</version>
<scope>test</scope>
</dependency>

To run the test we need to create a Maven profile as follows:
<profile>
<!-- Run with: mvn clean test -Parq-jbossas-managed -->
<id>arq-jbossas-managed</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<dependencies>
<dependency>
<groupId>org.jboss.as</groupId>
<artifactId>jboss-as-arquillian-container-managed</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</profile>

arquillian.xml, to be saved inside the src/test/resources folder:

<?xml version="1.0" encoding="UTF-8"?>
<!-- JBoss, Home of Professional Open Source Copyright 2013, Red Hat, Inc.
and/or its affiliates, and individual contributors by the @authors tag. See
the copyright.txt in the distribution for a full listing of individual contributors.
Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy
of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required
by applicable law or agreed to in writing, software distributed under the
License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License. -->
<arquillian xmlns="http://jboss.org/schema/arquillian"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://jboss.org/schema/arquillian
http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

<!-- Uncomment to have test archives exported to the file system for inspection -->
<!-- <engine> -->
<!-- <property name="deploymentExportPath">target/</property> -->
<!-- </engine> -->

<!-- Force the use of the Servlet 3.0 protocol with all containers, as it
is the most mature -->
<defaultProtocol type="Servlet 3.0" />

<!-- Example configuration for a remote WildFly instance -->
<container qualifier="jboss" default="true">
<!-- By default, arquillian will use the JBOSS_HOME environment variable.
Alternatively, the configuration below can be uncommented. -->
<configuration>
<property name="jbossHome">C:\java\jboss\jboss-eap-6.2</property>
<!-- <property name="javaVmArguments">-Xmx512m -XX:MaxPermSize=128m -->
<!-- -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=y -->
<!-- </property> -->
</configuration>
</container>

<engine>
<property name="deploymentExportPath">target/deployments</property>
</engine>

</arquillian>
And invoke maven as: mvn clean test -Parq-jbossas-managed