Let's begin with a relatively uncontroversial observation: we simply don't know how to build decent web application scanners today - be it in the commercial or the open source world. How bad is the issue? Consider that the following topics essentially remain the Millennium Prize Problems in this field:
- Deciding whether two HTTP responses seen at two different URLs functionally correspond to the same component. Many unrelated pages look very much alike, because of the heavy use of headers, footers, and other shared templates. Conversely, a single page may change appearance every time it is requested if the content is generated dynamically. Pretty much every single fuzzy matching algorithm used here has spectacular and common failure modes (a rough sketch of one such heuristic follows this list).
- Deciding what constitutes a "page not found" or "error occurred" response for various parts of the website, a prerequisite for any reliable crawl. This task gets particularly interesting when various URLs are mapped to several different back-end frameworks, a common scenario in enterprise settings; in some of these cases, HTTP response codes are not even passed through to the end user. A sketch of a simple fingerprinting approach also follows this list.
- Figuring out how the application encodes query parameters and interprets URL paths. Surprisingly, even the largest and most respected enterprise vendors - say, SAP or Oracle - often treat the RFCs as a friendly suggestion at best, and invent their own outlandish conventions for no apparent reason. Throw mod_rewrite into the mix, and things get seriously hairy.
- Telling whether a particular security attack probe succeeded at all. Consider testing for a buffer overflow: both a successful attack on vulnerable code, and tripping a security exception due to a well-implemented parameter length check, may result in an identical HTTP 500 response. What next?
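To make the first problem concrete, here is a minimal sketch of the sort of heuristic scanners tend to lean on: strip the markup, cut each response body into overlapping word shingles, and compare the resulting sets. The tokenization, the shingle size, and the 0.7 cutoff are arbitrary assumptions made for illustration, not anything a particular scanner is known to use - and the failure modes mentioned above follow directly from them: shared headers and footers inflate the score for unrelated pages, while dynamically generated content deflates it for what is really the same page.

```python
import re

def shingles(html: str, size: int = 4) -> set[str]:
    """Cut a response body into overlapping word shingles after crude tag removal."""
    text = re.sub(r"<[^>]+>", " ", html)
    words = re.findall(r"\w+", text.lower())
    if len(words) < size:
        return set(words)
    return {" ".join(words[i:i + size]) for i in range(len(words) - size + 1)}

def looks_like_same_page(body_a: str, body_b: str, threshold: float = 0.7) -> bool:
    """Return True if the two bodies are 'similar enough' under a Jaccard score."""
    a, b = shingles(body_a), shingles(body_b)
    if not a or not b:
        return body_a == body_b
    return len(a & b) / len(a | b) >= threshold
```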
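The "page not found" problem from the second bullet is usually attacked with a similar trick: request a handful of URLs that almost certainly do not exist under a given directory, and fingerprint whatever comes back. The sketch below is only an illustration - the helper names, probe count, and similarity cutoff are my own assumptions, and it relies on the third-party requests library - but it captures the basic idea.

```python
import difflib
import uuid

import requests  # third-party HTTP client, not part of the standard library

def not_found_baseline(base_url: str, probes: int = 3) -> list[tuple[int, str]]:
    """Fetch a few deliberately bogus URLs and record (status, body) pairs."""
    baseline = []
    for _ in range(probes):
        bogus = f"{base_url.rstrip('/')}/{uuid.uuid4().hex}"  # path that should not exist
        resp = requests.get(bogus, timeout=10)
        baseline.append((resp.status_code, resp.text))
    return baseline

def is_not_found(status: int, body: str,
                 baseline: list[tuple[int, str]], cutoff: float = 0.8) -> bool:
    # Treat a response as "page not found" if it closely resembles any baseline
    # probe, regardless of whether the server bothered to send a 404.
    return any(status == base_status and
               difflib.SequenceMatcher(None, body, base_body).ratio() >= cutoff
               for base_status, base_body in baseline)
```

Even this simple baseline has to be rebuilt per directory; on the messier enterprise setups described above, where different path prefixes land on different back-end frameworks, no single signature holds across the site.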
The list goes on. In the end, automated scanners very often fare poorly when it comes to finding security bugs: they consistently fail to reach or properly recognize all the low-hanging fruit, spew out significant numbers of false positives - and, just as often, simply catch fire when attempting to crawl a site in the first place.
This does not mean that web application scanners are worthless; but their primary value lies in equipping a skilled, human pentester with an initial view of the structure of a site, and in automating certain cumbersome tasks, such as the execution of brute-force attacks. The reality is harsh: without the right person behind the wheel, the output of such a scanner, no matter how well tabulated and color-coded, tells you next to nothing about how secure your infrastructure is.
Regrettably, skilled pentesters with in-depth vulnerability hunting expertise, and excellent insight into how the web actually works, are exceptionally hard to get; heck, they are even difficult to recognize: there are no meaningful certifications, no particularly useful professional bodies... and even an impressive employment history or a record of conference presentations is a hit-and-miss indicator.
As a result, many commercial entities end up without the low-level security expertise needed to truly benefit from running a web security scanner - and in the absence of this, they unrealistically expect the tool to give them some sort of turn-key insight into the unknown. This never works as expected; and with nobody equipped to reliably evaluate the quality of the crawl, or the accuracy and ingenuity of the probes used, there is not even a proper feedback loop.
The problems the customers have here reflect negatively on the vendors, too: the company eventually accumulates a baggage of institutional customers who exert no pressure to improve the product where it counts - to, let's say, always have cutting-edge SQL injection checks - and who instead focus heavily on more easily verifiable but peripheral functionality: plug-and-play support for all sorts of internal SSO systems, elaborate reporting capabilities, compliance-related views that can be shown as proof of due diligence, and so forth. All these features have some value, of course - but ultimately, they divert resources from the one pursuit that matters the most: helping a skilled pentester, and getting better at it.
In this sense, the commercial vulnerability scanner market is, by and large, driven by pathological incentives - and will probably remain this way, at least until enough skilled professionals enter the workforce, or - less likely - until a major technological breakthrough is made.
Now, here's an interesting tidbit: have a look at the open-source vulnerability scanner world. Almost all the web application scanning tools that have cropped up in the past two years were written by active bug hunters who use them in their day-to-day work and were unhappy with what the commercial tools have to offer. Not all of these alternatives are good, and not all will succeed - but they are developed with one very important goal in mind: to get really good at spotting bugs, and nothing more. Many of them already surpass some of the more stagnant commercial offerings in this department - and the improvement will continue.
I am very reluctant to make sweeping statements about the superiority of one software development model over another - but in this case, it seems to make a real difference, at least today. And, sure, I recognize that this argument goes both ways: the moment those open source developers start deriving most of their income from selling their products, rather than from consulting services, many of the tools will (as one person put it) quickly begin to suck.