Quite a few thick volumes have been written on the topic of securing corporate environments - but most of them boil down to the following advice:
- Reduce your attack surface by eliminating non-essential services and sensibly restricting access to data,
- Compartmentalize important services to lower the impact of a compromise,
- Keep track of all assets and remediate known vulnerabilities in a timely manner,
- Teach people to write secure code and behave responsibly,
- Audit these processes regularly to make sure they actually work.
We have an array of practical methodologies and robust tools to achieve these goals - but we also have a pretty good understanding of where this model falls apart. As epitomized by Charlie Miller's goofy catchphrase, "I was not in your threat model", the reason for this is twofold:
- You will likely get owned, by kids: reasonably clued people with some time on their hands are (and for the foreseeable future will be) able to put together a fuzzer and find horrible security flaws in most common server or desktop software in a matter of days. Modern, large-scale enterprises with vast IT infrastructure, complex usability needs, and a diverse internal user base are, and will remain, extremely vulnerable to this class of attackers.
As a feel-good measure, this discussion is often framed in terms of the high-profile vulnerability trade, international crime syndicates, or government-sponsored cyberwarfare - but chances are, the harbinger of doom will be a bored teenager or a geek with an outlandish agenda. They are also less predictable than foreign governments - so in some ways, we should fear them more.
- Compartmentalization will not save you: determined attackers will take their time, and will get creative if need be. Compartmentalization may buy you a couple of days, but it cannot be engineered to keep attackers out forever while still letting the business thrive: as witnessed by a number of well-publicized security incidents, design compromises and poor user judgment inevitably create escalation paths.
Past a certain point, proactive measures begin to offer diminishing returns: throwing money at the problem will probably never get you to a point where a compromise is unlikely and the business can still function. This is not a cheering prospect - but it is something we have to live with.
The key to surviving a compromise may lie in the capability to detect a successful attack very early on. The attackers you should fear the most are just humans: they have to learn about the intricacies of your networks, and the value of every asset, as they go. The precious hours they spend doing so may give you the opportunity to recover - right before an incident becomes a disaster.
This brings us to the topic of intrusion detection - a surprisingly hard and hairy challenge in the world of information security. Most of the detection techniques at our disposal today are inherently bypassable; this is particularly true for the bulk of the tricks employed by most of the commercial AV, IDS, IPS, and WAF systems I know of. And that's where the problem lies: because the internals of these tools are essentially public knowledge, off-the-shelf intrusion detection systems often amount to a fairly expensive (and often itself vulnerable!) tool that deters only the dumbest of attackers. A competent adversary - prepared in advance, or simply catching the scent of a specific IDS toolkit - is reasonably likely to work around it without breaking a sweat.
The interesting - and highly contentious - question is what happens when the design of your in-house intrusion detection system becomes a secret. Many of my peers would argue this is actually harmful: in most contexts, security-by-obscurity does nothing to correct the underlying problems, and merely sweeps them under the rug. Yet, I am inclined to argue that in this particular case, it offers a qualitative difference. Here's why:
Let's begin by proposing a single, trivial anomaly detection rule, custom-tailored to our operating environment (and therefore reasonably sensitive and unlikely to generate false positives): for example, a simple daemon that takes notice of execve() calls with stdin pointing directly to a network socket - a common sign of server-targeted shellcode. When the architecture is not shared with common commercial tools, external attackers stand a certain chance of tripping this check, and a certain chance of evading it - but the outcome is governed almost solely by dumb luck, not by their skill. The odds are not particularly reassuring, but they are a starting point.
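A minimal sketch of what such a daemon might look like - with the caveat that the polling approach and names below are illustrative, not a finished design; a hardened implementation would hook execve() through the Linux audit subsystem or eBPF rather than polling:

```python
#!/usr/bin/env python3
# Rough sketch only: polls /proc on Linux and flags processes whose stdin
# (fd 0) is a network socket. A real implementation would intercept execve()
# via the audit subsystem or eBPF, and would whitelist daemons that
# legitimately inherit a socket on stdin.

import os
import stat
import time

def stdin_is_socket(pid):
    try:
        # os.stat() follows the /proc/<pid>/fd/0 symlink to the real object;
        # inspecting other users' processes requires root.
        return stat.S_ISSOCK(os.stat("/proc/%s/fd/0" % pid).st_mode)
    except OSError:
        return False  # process exited, or permission denied

def scan_once():
    for entry in os.listdir("/proc"):
        if entry.isdigit() and stdin_is_socket(entry):
            try:
                with open("/proc/%s/comm" % entry) as f:
                    comm = f.read().strip()
            except OSError:
                comm = "?"
            print("ALERT: pid %s (%s) has a network socket on stdin" % (entry, comm))

if __name__ == "__main__":
    while True:
        scan_once()
        time.sleep(5)
```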
(Now, an insider stands a better chance of defeating the mechanism - an unavoidable if less common problem - but a rogue IT employee is an issue that, for all intents and purposes, defies all attempts to solve it with technology alone.)
Let's continue further down this road: perhaps also introduce a simple tool to identify unexpected interactive sessions within encrypted and non-encrypted network traffic; or even a tweaked version of /bin/sh that alerts us to unusual -c or stdin payloads.
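For illustration, a toy version of such a wrapper might look like this - the /bin/sh.real path and the syslog facility are assumptions, and a real deployment would be a small C program or a patched shell rather than Python:

```python
#!/usr/bin/env python3
# Illustrative sketch: a wrapper installed as /bin/sh that logs any -c
# payload to syslog, then hands control to the real shell (presumed to
# have been moved to /bin/sh.real).

import os
import sys
import syslog

syslog.openlog("sh-wrapper", syslog.LOG_PID, syslog.LOG_AUTHPRIV)

argv = sys.argv[:]
if "-c" in argv[1:]:
    i = argv.index("-c")
    if i + 1 < len(argv):
        # A smarter version would score payloads against expected usage;
        # here we simply record them for later review.
        syslog.syslog(syslog.LOG_WARNING, "sh -c payload: %r" % argv[i + 1][:900])

argv[0] = "/bin/sh.real"        # hypothetical home of the real shell
os.execv("/bin/sh.real", argv)  # never returns on success
```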
Building on top of this, we can proceed to business logic: say, checks that flag database queries matching unusual patterns, or coming from workstations belonging to users not normally engaged in customer support. Each of these checks is trivial on its own, and stands only a modest chance of detecting a clued attacker. Yet, as the chain of tools grows longer, and the number of variables that need to be guessed exactly right increases, the likelihood of evading detection - especially early in the process - becomes extremely low. Simplifying a bit, the odds of strolling past ten completely independent, 50%-reliable checks are just 0.5^10 - about 1 in 1024 - and it does not matter whether the attacker is the best hacker in the world or not (unless he also happens to be a clairvoyant).
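Here is a sketch of one such business-logic check, with the log format, host names, and table names invented purely for illustration:

```python
#!/usr/bin/env python3
# Sketch under invented assumptions: a tab-separated query log with
# timestamp, client host, username, and query text; plus a known set of
# customer-support workstations. Flag queries touching sensitive tables
# that originate anywhere else.

import sys

SUPPORT_HOSTS = {"cs-ws-01", "cs-ws-02"}                 # hypothetical names
SENSITIVE_TABLES = ("customer_records", "payment_data")  # hypothetical tables

def check_log(path):
    with open(path) as log:
        for line in log:
            try:
                ts, host, user, query = line.rstrip("\n").split("\t", 3)
            except ValueError:
                continue  # malformed line; a real tool would count these too
            if host not in SUPPORT_HOSTS and any(t in query for t in SENSITIVE_TABLES):
                print("ALERT: %s %s@%s: %s" % (ts, user, host, query[:120]))

if __name__ == "__main__":
    # Log path below is a placeholder, not a real default.
    check_log(sys.argv[1] if len(sys.argv) > 1 else "/var/log/db-queries.log")
```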
For better or worse, intrusion detection seems to be an essential survival skill - and I think we are all too often doing it wrong. A successful approach hinges on the uniqueness and diversity - and not necessarily the complexity - of the tools used; the moment you neatly package them and share the product with the world, your IDS becomes a $250,000 novelty toy.
Sadly, large organizations often lack the expertise, or just the courage, to get creative. There is a stigma of low expectations attached to intrusion detection in general, to security-by-obscurity as a defense strategy, and to maintaining in-house code that can't generate pie charts on a quarterly basis.
But when you are a high-profile target, defending only against dumb attackers in a world full of brilliant ones - some of them driven by peculiar and unpredictable incentives - strikes me as a poor strategy in the long run.