One of the most publicized efforts along these lines is the concept of browser-side, reflected XSS detectors: the two most notable implementations are David Ross' XSS filter (shipping in Internet Explorer 8) and Adam Barth's XSS Auditor (WebKit browsers - currently in Safari 5). The idea behind these tools is very simple: if query parameters seen in the request look suspiciously close to any "active" portions of the rendered page, the browser should assume foul play - and step in to protect the user.
Naturally, nothing is that simple in the browser world. The major design obstacle is that the check has to be passive - active probes are likely to cause persistent side effects on the server. Because of this, the detector can merely look for correlation, but not confirm causation; entering a markup-like search term into any site that echoes queries back, then viewing the results in Internet Explorer 8, is a canonical example of why this distinction matters.
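To make this concrete, here is a deliberately naive sketch - plain JavaScript, purely illustrative, and not the actual heuristic used by either filter - of the kind of passive correlation check involved:

// Toy illustration only; the real IE8 and WebKit checks are far more involved.
// Flag a response if a decoded query parameter both resembles active markup
// and shows up verbatim in the served document.
function looksReflected(queryParams, responseText) {
  return queryParams.some(function(p) {
    var decoded = decodeURIComponent(p);
    return decoded.length > 8 &&
           /<script|<style|javascript:|on\w+\s*=/i.test(decoded) &&
           responseText.indexOf(decoded) != -1;
  });
}

A match merely says that the parameter and the page look alike; whether the parameter actually produced that markup is anyone's guess.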
Since passive checks inevitably cause false positives, the authors of these implementations defaulted to a non-disruptive "soft fail" mode: when a suspicious pattern is detected, the browser will still attempt to render the page - just with the naughty bits selectively removed or defanged in some way.
While a fair number of issues with XSS detectors have been pointed out in the past - from hairy implementation flaws to trivial bypass scenarios - the "soft fail" design creates some more deeply rooted problems that may affect the safety of the web in the long run. Perhaps the most striking example is this snippet, taken from a real-world website:
<script>
...
// Safari puts <style> tags into new <head> elements, so
// let's account for this here.
...
</script>
...
[ sanitized attacker-controlled data goes here ]
The data displayed there is properly escaped; under normal circumstances, this page is perfectly safe. But now, consider what happens if the attacker-controlled string is } body { color: expression(alert(1)) } - and ?q=<script> is appended at the end of the URL. The filter employed in MSIE8 will neutralize the initial <script>, resulting in the subsequent <style> tag being interpreted literally - putting the browser in a special parsing mode. This, in turn, causes the somewhat perverted CSS parser to skip any leading and trailing text, and interpret the attacker-controlled string as a JavaScript expression buried in a stylesheet.
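To visualize the end result (the exact characters the filter rewrites are an implementation detail; a placeholder mangling is used below), the parser effectively ends up looking at something along these lines:

<sc#ipt>
...
// Safari puts <style> tags into new <head> elements, so
// let's account for this here.
...
</script>
...
} body { color: expression(alert(1)) }

With the script defanged, the <style> inside what used to be a harmless JavaScript comment opens a real stylesheet, and everything that follows - attacker-supplied rule included - is handed to the CSS parser.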
Eep. This particular case should be fixed by the June security update, but the principle seems dicey.
The risk of snafus like this aside, the fundamental problem with XSS detectors is, quite simply, that client-side JavaScript is increasingly depended upon to implement security-critical features. The merits of doing so may be contested by purists, but it's where the web is headed. No real alternatives exist, either: a growing number of applications use servers merely as a dumb, ACL-enforcing storage backend, with everything else implemented on the client side; mechanisms such as localStorage and widget manifests actually remove the server component altogether.
To this client-heavy architecture, XSS detectors pose an inherent threat: they make it possible for third-party attackers to selectively tamper with the execution of the client-side code, and cause the application to end up in an unexpected, inconsistent state. Disabling clickjacking defenses is the most prosaic example; but more profound attacks against critical initialization or control routines are certainly possible, and I believe they will happen in the future.
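To illustrate the clickjacking case, consider a page that relies on a classic frame-busting snippet (a simplified, generic example rather than code from any particular site):

<script>
// Naive frame-buster: break out of any hostile frame.
if (top != self) top.location = self.location;
</script>

An attacker who frames such a page can append the text of that very script to the framed URL as a bogus query parameter; a soft-fail filter then spots the apparent "reflection" and neutralizes the genuine copy, leaving the page free to be framed and clickjacked.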
The skeptic in me is not entirely convinced that XSS filters will ever be robust and reliable enough to offset the risks they create - but I might be wrong; time will tell. Until this is settled, several other people and I have pleaded for an opt-in strict filtering mode that prevents the page from being rendered at all when a suspicious pattern is detected. Now that it is available, enabling it on your site is probably a good idea.
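For reference, the opt-in takes the form of an HTTP response header; the strict variant is:

X-XSS-Protection: 1; mode=block

Conversely, X-XSS-Protection: 0 turns the filter off for a given response altogether - an option worth knowing about if you would rather it not touch your markup at all.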