Originally published on SensePost's blog.
While doing some thinking on threat modelling, I started examining what usually drives security spend and controls in an organisation. I've spent time on multiple fronts: security management (been audited, had CIOs push for priorities), security auditing (followed workpapers and audit plans), pentesting (broke in however we could), security consulting (tried to help people fix stuff), and I even dabbled in trying to sell some security hardware. This has given me some insight (or at least an opinion) into how people have tried to justify security budgets, changes, and findings, or how I tried to. This is a write-up of what I believe those justifications to be (caveat: this is my opinion). It's certainly not universal; it's possible to find unbiased, highly experienced people, but they will still have to fight the tendencies their position puts on them. What I'd want you to take away from this is that we need to move away from using these drivers in isolation, and towards more holistic risk management techniques, of which I feel threat modelling is one (although this entry isn't about threat modelling).
The tick-box monkeys themselves: auditors provide a useful function, and are so universally legislated and embedded in best practice that everyone has a few decades of experience on the giving or receiving end of a financial audit. The priorities audit reports seem to drive are:
- Vulnerabilities in financial systems. The whole audit hierarchy was created around financial controls, and so sticks close to financial systems when venturing into IT's space. Detailed and complex collusion possibilities will be discussed when approving payments, but the fact that you can reset anyone's password at the helpdesk is sometimes missed, and more advanced attacks like token hijacking are often ignored.
- Audit house priorities. Audit houses are driven just like anyone else. While I wasn't around for Enron, the reverberations could still be felt years later when I worked at one. What's more, audit houses increasingly find their revenue coming from consulting gigs and need to keep their smart people happy. This leads to external audit selling "add-ons" like identity management audits (sometimes they're even incentivised to do so).
- Auditor skills. The auditor you get could be an amazing business process auditor but useless when it comes to infosec; next year it could be the other way around. It's equally possible with internal audit. Thus, the strengths of the auditor will determine where you get nailed the hardest.
- The rotation plan. This year it's system X, next year system Y. It doesn't mean system X has gotten better, just that the auditors moved on. If you spend your year responding to the audit on system Y and ignore X, you'll miss vital stuff.
- Known systems. External and internal auditors don't know IT's business in detail. There could be all sorts of critical systems (or pivot points) that are ignored because they weren't in the "flow of financial information" spreadsheet.
Security vendors' prioritisation of controls, on the other hand, is driven by:
- New problems. Security products that work eventually get deployed everywhere they're going to be deployed. They continue to bring in income, but the vendor needs a new bright shiny thing they can take to their existing market and sell. Thus, new problems become new scary things that can be used to push product. Think of the Gartner hype cycle. Whatever they're selling, be it DLP, NAC, DAM, APT prevention or IPS: if your firewall works more like a switch and your passwords are all "P@55w0rd", then you've got other problems to focus on first.
- Overinflated problems. Some problems really aren't as big as vendors make them out to be, but making them look big is a key part of the sell. Even vendors who don't mean to overinflate end up doing so, just because they spend all day thinking of ways to justify (even legitimate) purchases.
- Products as solutions. Installing a product designed to help with a problem isn't the same as fixing the problem, and vendors aren't great at seeing that (some are). Take patch management solutions: there are some really awesome, mature products out there, but if you can't work out where your machines are, how many there are, or get creds to them, then you've got a long way to go before that product starts solving the problem it's supposed to.
Every year around Black Hat Vegas/Pwn2Own/AddYourConfHere time, a flurry of media reports hits the public and some people go into panic mode. I remember the DNS bug, where all that was needed was for people to apply a patch, but which, due to the publicity around it, garnered significant interest from people who usually wouldn't, and probably shouldn't, have cared so much. Many pentesters trade on this publicity, and some pentesting companies use it instead of a marketing budget. That's not their only, or even primary, motivation, and in the end things get fixed, new techniques get shared, and the world is a better place. The cynical view, then, is that some of the motivations of vulnerability researchers, and what they end up prioritising, are:
- New attacks. This is somewhat similar to the vendors optimising for "new problems", but not quite the same. When Errata introduced Hamster at ToorCon '07, I heard tales of people swearing at them from the back. I wasn't there, but I imagine some of the calls were because layer 2 attacks have been around and well known for over a decade now. Many of us ignored FireSheep for the same reason, even if it motivated the biggest move to SSL yet. But vuln researchers and the scene aren't interested unless it's shiny, new and leet. This focus on the new, and the press it drives, has defenders running around trying to fix new problems when they haven't fixed the old ones.
- Complex attacks. Related to the above, a new attack can't be really basic to do well; it needs to involve considerable skill. When Mark Dowd released his highly complex Flash attack, he was rightly given much kudos. XSS attacks, on the other hand, were initially ignored by many. However, the latter led to a wide class of prevalent vulns, while the former requires you to be, well, Mark Dowd. This means some of the issues that should be obvious, that underpin core infrastructure, but that aren't sexy, don't get looked at.
- Shiny attacks. Some attacks are just really well presented and sexy. Barnaby Jack had an ATM spitting out cash and flashing "Jackpot"; that's cool, and it gets a room packed full of people to hear his talk. Hopefully it led to an improvement in the security of some of the ATMs he targeted, but the vulns he exploited were the kinds of things big banks had mostly resolved already, and how many people in the audience actually worked in ATM security? I'd be interested to see if the con budget from banks increased the year of his talk; even if it didn't, I suspect many a banker went to his talk instead of one about a more prevalent or relevant class of vulnerabilities their organisation might actually face. Something Thinkst says much better here.
Unfortunately, as human beings, our decisions are coloured by a bunch of biases that cause us to make choices influenced or defined by factors other than the reality we are faced with. A couple of these lead us to prioritise different security motives when decision making rests solely with one person:
- Past experience. Human beings develop through learning and consequences. When you were a child and put your hand on a hot stove plate, you got burned and didn't do it again. It's much the same every time you get burned by a security incident, or worse, an internal political incident. There's nothing wrong with this, and it's why we value experience: people who've been burned enough times won't let the same mistakes happen again. However, it does mean time may be spent preventing a past wrong rather than focusing on the most likely current wrong. For example, one company I worked with insisted on an overly burdensome set of controls between the servers belonging to their security team and the rest of the company network. The reason was a previous incident, years earlier, where one of those servers had been the source of a Slammer outbreak. While that network was never again the source of a virus outbreak, their network still got hit by later outbreaks from normal users, via the VPN, from business partners, etc. In this instance, past experience was favoured over a comprehensive approach that addressed the actual problem rather than just the symptom.
- New systems. Usually, the time when the most budget is available to work on a system is during its initial deployment. This is equally true of security, and the mantra is for security to be built in at the beginning. Justifying a chunk of security work on the mainframe that's been working fine for the last 10 years, on the other hand, is much harder, and usually needs to hook into an existing project. The result is that it's easier to get security built into new projects than to force an organisation to make significant "security only" changes to existing systems, so the existing systems, which present the vulnerabilities pentesters know and love, get fixed less often.
- Individual motives. We're complex beings with all sorts of drivers and motivations; maybe you want to get home early to spend some time with your kids, maybe you want to impress Bob from Payroll. All sorts of things can lead to a decision that isn't necessarily the right security one. More relevantly, however, security tends to operate in a fairly segmented manner: while some aspects are "common wisdom", others seem rarely discussed. For example, the way the CISO of Car Manufacturer A and the CISO of Car Manufacturer B set up their controls and choose their focus could be completely different, but beyond general industry chit-chat, there will be little detailed discussion of how each is securing integration to their dealership network. For that, they rely on consultants who've seen both sides. Even then, one consultant may think monitoring is the most important control at the moment, while another could think mobile security is.
The result of all of this is that different companies and people push vastly different agendas. To figure out a strategic approach to security in your organisation, you need some objective, risk-based measurement that will help you secure things in an order that mirrors the actual risk to your environment. While it's still a black art, I believe threat modelling helps a lot here: a sufficiently comprehensive methodology that takes into account all of your infrastructure (or at least admits the existence of risk contributed by systems outside a "most critical" list) and includes the valid perspectives described above can provide an objective version of reality that isn't as vulnerable to any one of the biases described here.