How Have You Done at Securing Your Network this Year?
2013 is rapidly running out, and 2014 is fast approaching. Security, like every other arena, can benefit from taking a step back to consider what we’ve done, where we’re going, and what we should try to do differently. (For myself, I’m writing this in the air, on what should be my last business trip of the year. One of my goals for next year is to lose status with another airline – we’ll see if I can manage that.)
So ask yourself – how have you done at securing your network this year? It’s not just an idle question – the better we can answer it, the more credit we can aim for in our annual review. Of course, we like to see ourselves on the side of the angels, fighting to protect the critical assets and good name of our employers. The best security professionals I know do have a sense of mission like that, but unavoidably, we’re all employees too, and we’ve chosen a field where appreciation is in short supply. Most businesses barely understand why we’re around, let alone what we’re up against or what we do about it. (If you work in one of those rare spots where the value of security is never questioned – good for you. I think my point still applies – if anything, you need to show such a savvy employer that you’re doing what’s critical, not just playing on their fears.)
It’s not easy to measure progress, is it? Measuring risk is famously tough. If we’re serious about it, we’d need some way to correlate breach damage to countermeasures used (or not used), but we can’t just measure ourselves – we don’t suffer enough dire breaches in our one organization. (If you measure just small incidents – spam, minor malware infections, etc. – and imagine you can extrapolate from that to serious assaults, then you’re falling victim to the Black Swan fallacy. Big events are certainly not just like small events scaled up!) Real measures of your likelihood of a breach in the next day, quarter, or year are just not viable – you don’t have data on breaches across enough other companies to be able to assess true hazard rates, and even if you did, you would also need to correct the data based on the actual state of defenses at each organization. In a technical sense, this is possible, but nobody is sharing that kind of data – after all, who wants to disclose to others how well we’re managing our defenses, when the answer is generally going to be very far from flattering?
So what can you do? You can measure readiness, and better yet, you can measure changes in readiness. This is easier, and when we know the bad guys are out there and coming for us, we can reasonably talk to management about our attack readiness, given the inevitability of there being another attack. And we can track changes – are we getting better or worse? The higher up in management levels you go, the more receptive you find they are to relative measures, and deltas. (Why? Ostensibly because the most important aspects of business tradeoffs are often intangible – costs are very crisp, revenue is a little fuzzier given the vagaries of accounting, but critical aspects like customer satisfaction or employee morale or innovation are only measurable in highly approximate ways.)
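To make the "deltas" idea concrete, here is a minimal sketch in Python. The metric names and scores are entirely hypothetical, invented for illustration – the point is only that reporting the direction of change is often more persuasive than reporting an absolute score:

```python
# Hypothetical quarterly readiness scores (0-100) for a few illustrative
# metrics. None of these names or numbers come from a real assessment.
readiness = {
    "patch_coverage":       {"Q3": 62, "Q4": 71},
    "hosts_with_endpoint":  {"Q3": 80, "Q4": 84},
    "stale_admin_accounts": {"Q3": 55, "Q4": 49},  # readiness slipped here
}

def readiness_deltas(scores):
    """Return {metric: Q4 - Q3}, so the report shows direction of travel,
    not just an absolute level that invites unanswerable questions."""
    return {name: q["Q4"] - q["Q3"] for name, q in scores.items()}

deltas = readiness_deltas(readiness)
for name, delta in deltas.items():
    trend = "improving" if delta > 0 else "worsening"
    print(f"{name}: {delta:+d} ({trend})")
```

A report built from these deltas answers the question executives actually ask – "are we getting better or worse?" – without pretending the underlying scores are precise.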
You can see some interesting examples of the power of focusing on differences from other areas. Up front on the aircraft I’m in right now, there’s an altimeter – a way to measure the height of the aircraft. The core technology for these hasn’t really changed much in the history of aviation – it works by sensing the decrease in air pressure as an aircraft goes up. The trouble is that the starting atmospheric pressure – when you’re sitting on the runway – isn’t constant, so all it can really do is tell if the plane is going up or down. That doesn’t sound all that useful, but you can correct it by setting it to the current local pressure before you take off. Worse, if I set my altimeter based on the weather where I took off, and you set yours based on the weather where you took off, we likely won’t read the same even when we are at the same height! (Up here at high altitude, all aircraft set their altimeters to a fixed, but almost certainly incorrect, value of 29.92 inches of mercury – another residue of the long history of this technique. That means all the planes agree with each other on whether they are at the same height or not, but strictly, the heights reported are almost certainly all wrong!)
Of course, defending an organization isn’t much like flying an aircraft. (As we know all too well, security isn’t just “built in”, and we are often crawling out on that wing to install new protections while our business is in full flight!) Still, the idea at the end of the year of measuring “how much better are we doing?” is a good one.
There’s a common mistake in security – measuring that we’re busy. It’s often the easiest thing to do – our activity leaves records of stuff we did, and when it’s time to report, we can copy/paste all that stuff into a report to get our first-line manager off our backs. The trouble is that this same first-line manager isn’t really the person we need to show that we’re being effective. This challenge isn’t unique to security either, of course – I can’t tell you how many status reports I see from departments that are really just slightly packaged diaries, with precious little interpretation of whether all this activity is amounting to anything. That, of course, is why executive teams have focused on metrics for so long – as a way to get away from the useless, but standard, diaries of work busyness.
But forget management’s problems for a second – it’s not even in your interests to lapse into the lazy diary habit, just reporting that you’re important because you’re so busy. Think of it from a year end perspective, or better yet, from next year’s, or the year after. If you’re saying that your busyness is what shows you’re making a difference in security, what do you do next year? Just produce stats showing you’re even busier, obviously! You can see where this is going – if you put yourself on a busyness treadmill, you only have yourself to blame when you fall off the end.
The answer, of course, is to measure something more tangible about your organization’s status – ideally, your readiness for the next attack, not just a “weather report” on how many times attacks rained down on you this year. Again, if you just report the weather, what do you do in the unlikely event that attacks get rarer, or the far more likely event that they continue but get a whole lot harder to detect? Either way, you’ve painted yourself into a corner.
What you need is a way of measuring that shows a) not everything is done – because it never will be; b) due diligence has been done – because management wants to see that if they are to accept the remaining risk; and c) you’re making headway. Thankfully, such measures are possible.
For one example – a “free” one – you could look at reporting patching status. Of course, we know that will be ugly – we know our organizations don’t patch as much as we want. But note that we can extract the right lessons, if we think of it the right way – we can show that patching remains to be done (because nobody I’ve run into is anywhere close to applying all the patches they want), that due diligence is being done (because you’re pushing the owners – your diary of work effort has a role here), and that you’re being effective, because we can point to the number of patches successfully applied. (Of course, in truth, this needs to be paired with the number of new ones identified during the year, but you can legitimately say to management that your organization is only responsible for one side of the equation – your response. You’re not responsible for the rate of updates from Microsoft, Adobe, or whoever your chosen software providers are.)
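The patching example above boils down to a simple piece of arithmetic worth getting right in the report. This sketch uses invented numbers purely for illustration – the split it demonstrates is the one argued for in the text: you answer for the patches applied, not for the rate at which vendors publish new ones:

```python
# Illustrative year-end patch-response summary. All figures are
# hypothetical; real numbers would come from your patch management tooling.
patches_identified = 1240   # new applicable patches published this year
patches_applied = 1060      # patches your teams successfully deployed
carried_over = patches_identified - patches_applied

# Your side of the equation: how much of the incoming work you answered.
response_rate = patches_applied / patches_identified

print(f"Applied {patches_applied} of {patches_identified} applicable "
      f"patches ({response_rate:.0%}); {carried_over} carried into next year")
```

Reporting both halves – the rate you achieved and the backlog you inherit – satisfies all three criteria above: work remains, diligence is visible, and progress is quantified.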
There are downsides to this, of course – raw vulnerability counts alone are scary, due to the scale of the numbers involved. You’re likely to be telling management “we’ve only solved 10% of the issues, 90% remain”. That’s why I recommend putting vulnerabilities into context, demonstrating that the 10% you are funded/empowered to fix are the important 10%. Automated assessment software is a great help in putting this better spin on the scary vulnerability backlog.
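One way to demonstrate that the fixed fraction is the important fraction is to rank findings by a contextual risk score and then show how much of the top of that ranking has been remediated. The sketch below is a toy model – the records, the severity-times-criticality scoring rule, and the field names are all assumptions for illustration, not any particular product’s method:

```python
# Hypothetical vulnerability records: a CVSS-like severity (0-10) and a
# business criticality for the affected asset (1-5). Illustrative only.
vulns = [
    {"id": "V-001", "severity": 9.8, "asset_criticality": 5, "fixed": True},
    {"id": "V-002", "severity": 7.5, "asset_criticality": 4, "fixed": True},
    {"id": "V-003", "severity": 5.0, "asset_criticality": 2, "fixed": False},
    {"id": "V-004", "severity": 3.1, "asset_criticality": 1, "fixed": False},
]

def risk_score(v):
    # Toy contextual weighting: raw severity scaled by how much the
    # affected asset matters to the business.
    return v["severity"] * v["asset_criticality"]

# Rank by contextual risk, then check remediation coverage of the top slice.
ranked = sorted(vulns, key=risk_score, reverse=True)
top_slice = ranked[: len(ranked) // 2]
fixed_share_of_top = sum(v["fixed"] for v in top_slice) / len(top_slice)

print(f"Remediated {fixed_share_of_top:.0%} of the highest-risk findings")
```

The message to management changes from “90% remains” to “the items we fixed were the ones that mattered most”, which is exactly the context the raw backlog number lacks.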
The main lesson I’m pushing is that you need to write this year’s summary of how well you’ve done with a view to what you’ll want to be talking about next year. If that sounds obvious, let me tell you that most – but thankfully not all – security metrics projects I’ve seen don’t think about this hard enough, falling back into reporting busyness, and setting up the treadmill of doom for next year …
Related Reading: Naughty or Nice – Continuous Monitoring for Year-Round Coal Avoidance