Vulnerability researchers - the people behind the data points being counted - are usually fairly skeptical of such efforts, but their criticisms revolve primarily around the need to factor in bug severity, or the potential for cherry-picking the data to support a particular claim. These flaws are avoidable in a well-designed study. Are we good, then?
Well, not necessarily so. The most important problem is that today, for quite a few software projects, the majority of vulnerabilities are discovered through in-house testing - and the attitudes of vendors when it comes to discussing these findings publicly tend to vary. This has a completely devastating impact on the value of the analyzed data: vulnerability counting severely penalizes forthcoming players, benefits the more secretive ones, and places the ones who do not do any proactive work somewhere in between.
Consider this example from the browser world: in recent years, the folks over at Microsoft started doing a lot of in-house fuzzing, and have undoubtedly uncovered hundreds of security flaws in Internet Explorer and elsewhere. It appears to be their preference not to routinely discuss these problems, however - often silently targeting fixes for service packs or other cumulative updates instead. In fact, here's an anecdote: I reported a bunch of exploitable crashes to them in September 2009, only to see them fixed without attribution in December that year. The underlying flaws were apparently discovered independently during internal cleanups. So be it: as long as bugs get fixed, we all benefit, and Microsoft is definitely working hard in this area.
Contrast this approach with Mozilla, another vendor doing a lot of successful in-house security testing (in part thanks to the amazing work of Jesse Ruderman). They are pretty forthcoming about their results, and announce internal, fuzzing-related fixes almost every month. Probably to avoid shooting themselves in the foot in vulnerability count tallies, however, they tend to report these cumulatively as crashes with evidence of memory corruption - and usually assign a single CVE number to the whole batch every month. Again, sounds good.
Lastly, have a look at Chromium; several folks are fuzzing the hell out of this browser, too - but the project opts to track these issues individually, partly because of the need to coordinate with WebKit developers - and each one of them ends up with a separate CVE entry. The result? Release notes often look like this.
All these approaches have their merits - but how do you reconcile them for the purpose of vulnerability counting? And, is it fair to compare any of the above players with vendors who do not seem to be doing any proactive security work at all?
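To make the distortion concrete, here is a toy sketch in Python - with entirely made-up numbers and hypothetical vendor labels, not real statistics for any of the products above - of how the same amount of in-house work maps to wildly different public tallies under the three disclosure styles:

```python
# Toy illustration with made-up numbers: three vendors each fix the same 120
# internally discovered memory-safety bugs in a year, but follow different
# disclosure practices. A naive "CVEs per year" tally then tells three very
# different stories about identical security work.

internal_bugs_fixed = 120  # hypothetical figure, identical for every vendor

disclosure_practice = {
    # silent fixes rolled into cumulative updates -> no public entries at all
    "vendor_silent":   lambda n: 0,
    # internal fuzzing results batched into roughly one CVE per month
    "vendor_batched":  lambda n: 12,
    # every internally found bug tracked and assigned its own CVE
    "vendor_itemized": lambda n: n,
}

for vendor, count_cves in disclosure_practice.items():
    cves = count_cves(internal_bugs_fixed)
    print(f"{vendor:16s} bugs fixed: {internal_bugs_fixed:4d}   CVEs counted: {cves:4d}")

# The most forthcoming vendor ends up looking ten times "buggier" than the
# batching one, and infinitely worse than the silent one - even though all
# three did exactly the same amount of proactive work.
```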
Well, perhaps the browser world is special; one could argue that at least some products with matching security practices must exist - and these cases should be directly comparable. Maybe, but the other problem is the quality of the databases themselves: recent changes to the vulnerability handling process, including the emergence of partial- or non-disclosure, the popularity of vulnerability trading, and the demise of centralized vulnerability discussion channels, all make it prohibitively difficult for database maintainers to reliably track issues through their lifetime. Common problems include:
- The inability to fully understand what the problem actually is, and what severity it needs to be given. Database maintainers cannot be expected to be intimately familiar with every product, and need to process thousands of entries every year - but this often leads to vulnerability notes that may at first sight appear inaccurate, hard to verify, or very likely not worth classifying as security flaws at all.
- The difficulty of discovering what the disclosure process looked like, and how long the vendor needed to develop a patch. This is perhaps the most important metric to examine when trying to understand the performance of a vendor - yet one that is not captured, or captured very selectively and inconsistently, in most of the databases I am aware of.
- The difficulty of detecting the moment when a particular flaw is addressed - all the databases contain a considerable number of entries that were not updated to reflect patch status (apologies for the Chrome-specific examples). There seems to be a correlation between the prevalence of this problem and the mode in which vendor responses are made available to the general public. Furthermore, when a problem is not fixed in a timely manner, the maintainers of the database generally do not reach out to the vendor to investigate why: is the researcher's claim contested, or is the vendor simply sloppy? This very important distinction is lost. A short sketch of how these gaps skew patch-latency figures follows this list.
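As promised, here is a minimal sketch - built on a fabricated four-entry "database", not on real CVE data - of why entries that never receive a recorded fix date wreck time-to-fix analysis: the headline median swings wildly depending on how the analyst treats them, and neither reading separates a stale entry from a contested or genuinely neglected report.

```python
# A minimal sketch using a fabricated mini-database of (disclosure date,
# fix date) pairs. Entries whose fix date was never recorded are either
# silently dropped or treated as still unpatched; both choices change the
# headline "median days to fix" figure.
from datetime import date
from statistics import median

# Hypothetical entries: (disclosure date, fix date or None if never updated)
entries = [
    (date(2010, 1, 5),  date(2010, 1, 19)),  # fixed in 14 days
    (date(2010, 2, 2),  date(2010, 2, 23)),  # fixed in 21 days
    (date(2010, 3, 1),  None),               # fixed upstream, DB never updated
    (date(2010, 4, 10), None),               # claim contested by the vendor
]

as_of = date(2010, 12, 31)  # date on which the analysis is performed

def days_to_fix(disclosed, fixed):
    # Treat missing fix dates as "still open as of the analysis date".
    return ((fixed or as_of) - disclosed).days

# Interpretation 1: ignore entries without a recorded fix.
dropped = [days_to_fix(d, f) for d, f in entries if f is not None]
# Interpretation 2: count them as unpatched up to the analysis date.
pessimistic = [days_to_fix(d, f) for d, f in entries]

print("median days to fix, dropping unknowns: ", median(dropped))
print("median days to fix, assuming unpatched:", median(pessimistic))
# The two readings differ by an order of magnitude, and neither one can tell
# a stale database entry apart from a genuinely ignored or contested report.
```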
Comparable problems apply to most other security-themed studies that draw far-fetched conclusions from simple numerical analysis of proprietary data. Pie charts don't immediately invalidate a whitepaper, but blind reliance on these figures warrants a closer investigation of the claims.