The controversy will continue, and it would be pointless to rehash it here. Having said that, I take issue with one of the common assumptions made in this debate: the belief that vulnerabilities are unlikely to be discovered by multiple parties at once, and that the original finder of a flaw is therefore in a unique position to control the information. Intuitively, this sounds pretty reasonable: security research is hard, and the necessary skills are nearly impossible to formalize or imitate. The press, in particular, likes to portray vulnerability finding as an arcane form of art. But is that so?
Over the years, I have probably found over 200 vulnerabilities in high-profile client- and server-side apps. I think that is a pretty good data set to work with - and curiously, I am strongly convinced that none of these findings should be attributed to any unique skill of mine. The vast majority of them were simply a matter of the security community reaching a certain critical body of knowledge - gaining a better understanding of what can go wrong, where to look for it, and how to automate the testing with simple fuzzers and similar validation frameworks. At that point, finding bugs is just a matter of picking a target to go after; who happens to be behind the wheel is largely immaterial.
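To illustrate just how low that bar is, here is a minimal sketch of the sort of "dumb" mutation fuzzer I have in mind - not any particular tool, just the general idea. It assumes a POSIX target that parses a file named on the command line; the sample file and target binary are placeholders:

#!/usr/bin/env python3
# A minimal mutation fuzzer: randomly corrupt bytes in a known-good
# sample file, feed the result to a target program, and watch for
# crashes. SAMPLE and TARGET are placeholders for illustration.
import random
import subprocess

SAMPLE = "sample.dat"      # a known-good, non-empty input file
TARGET = ["./target_app"]  # the program under test

def mutate(data: bytes, max_flips: int = 8) -> bytes:
    # Overwrite a handful of random bytes with random values.
    buf = bytearray(data)
    for _ in range(random.randint(1, max_flips)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def main(iterations: int = 10000) -> None:
    seed = open(SAMPLE, "rb").read()
    for i in range(iterations):
        case = mutate(seed)
        with open("fuzzed.dat", "wb") as f:
            f.write(case)
        proc = subprocess.run(TARGET + ["fuzzed.dat"],
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
        # On POSIX, a negative return code means the process died on a
        # signal (e.g., SIGSEGV) - a likely memory-safety bug.
        if proc.returncode < 0:
            crash_file = f"crash_{i}.dat"
            with open(crash_file, "wb") as f:
                f.write(case)
            print(f"iteration {i}: signal {-proc.returncode}, saved {crash_file}")

if __name__ == "__main__":
    main()

Point something this crude at a sufficiently complex parser, and it will find bugs; no wizardry required.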
What's more, I found that when you go after a sufficiently buggy and complex application, most of the problems you find turn out to be dupes of what other researchers discovered weeks or months earlier. This pattern proved particularly prevalent in the browser world, where I had multiple bug collisions with the likes of Georgi Guninski and Amit Klein.
I suspect the same can be said by the vast majority of other security researchers - though not all of them are willing to make this self-deprecating admission in public. Sadly, by enjoying our portrayal as wizards, we also make it easier for vendors to advocate the view that the discovery of a vulnerability is what creates the threat - and that researchers therefore have an obligation to wait indefinitely in order to protect users against attacks.
While giving a responsive vendor some advance notification is often a good idea, creating social pressure on researchers to wait for patches removes any incentive for vendors to respond in a timely manner. This would not be a problem if vendors were consistently awesome - but they certainly aren't today. We commonly see some of the leading proponents of responsible disclosure take anywhere from six months to two years to address even fairly simple, high-risk bugs - and seldom face any criticism for it. Researchers who behave "irresponsibly", on the other hand, are routinely called names if they are lucky - and formally or informally threatened if not.
Vulnerability disclosure, however it is done, does not make you less secure. More often than we are willing to admit, it merely brings an existing risk out of the thriving underground market and into the spotlight. Naturally, this can be disruptive in the short run, which is why the practice is controversial; it is certainly easier not to have to scramble to fix an issue on short notice. That said, timely and verbose disclosure also levels the playing field by holding vendors accountable and giving all users the information they need to limit their exposure - even if that means not using a particular service until a fix is available.