
What Is "Good Enough" Security?

Have you seen the spat between Wyndham Hotels and the Federal Trade Commission?  Wyndham has already suffered a series of painful and public breaches, dating back to 2008.  The FTC filed a lawsuit over a year ago, saying that Wyndham had not done enough to prevent them.  Wyndham’s position is that the FTC has no authority to police its security practices.

I don’t want to wade too far into the legal issues – I think the pragmatic security questions are much more interesting.  The FTC is saying that Wyndham did not follow appropriate practices, but Wyndham counter-claims that the FTC has never published any guidelines about what those might be (and they appear to be right – the FTC has conceded this point).  Whichever way the case evolves, this part of the dispute – about what is appropriate – certainly got my interest.

What are “appropriate security practices”?  Who gets to decide?  There were some moves to legislate – remember the failure of the Cybersecurity Act of 2012, and the subsequent Presidential Policy Directive 20?  But those seem to be going nowhere – I could say they have taken a back seat to other political crises, but it’s not clear this topic is even in the back of the bus any more.

The FTC’s press release on the case suggests that Wyndham’s “security practices were unfair and deceptive”.  Maybe I think too much like a practitioner, and not enough like a lawyer, but I don’t even know what that means.  In my line of work, I get to see a lot of security practices, and they can be a royal mess, causing serious risk through inattention to vital details (like, say, whether or not the front door is closed).  But “unfair”?  I don’t see how that concept even comes into it.  If I buy a car, and I get into a fatal accident, could my heirs sue the manufacturer for “unfair” safety practices?  I hope not.

That said, we do know that Wyndham had problems – the fact that they got breached, found out about it, and then later got breached again in a roughly similar manner suggests they didn’t do everything you’d want from a business that has your credit card on file.  But that’s not the interesting question the case has now raised.  This isn’t just about whether the breach happened, or whether it could have been prevented.  It’s a fair guess that if the hotel had spent 99% of its annual profits on security, and used the money reasonably sensibly, it could have prevented all kinds of nasty issues, including the breach that occurred, but so what?  The hotel wasn’t going to do that in advance, and even in hindsight, there’s only so much it is willing to do, as a for-profit business, to reduce risk.

This is about what constitutes “good enough” security.  Should Wyndham have done more?  We can’t just say “well, they suffered a breach, so they must have been lax”.  That would be like observing that an airplane crashed, and concluding that the aircraft manufacturer didn’t take due care.  That wouldn’t make any sense, even though our emotional selves tend to want to think that way after a memorable incident.  (One academic name for this is “base rate neglect” – the ease with which we forget the background facts when presented with a compelling, recent example of something rare.)  So long as we stop to think about it, we know that air travel is amazingly safe, that the manufacturers test the living daylights out of their designs, that mechanics and pilots are highly trained, highly diligent, and highly safety conscious. Even with all that, we know that some airplane crashes will occur – just not very many. So taking one crash and saying “the manufacturer must have made a faulty plane” is unreasonable (if tempting).  Likewise, whether an organization has or has not suffered a breach does not immediately prove whether they have or have not done enough in their security practice.

I think the airplane analogy is worth pursuing a bit further.  How much testing do we want in our aircraft?  Should aircraft manufacturers bankrupt themselves making them safer?  Nobody wins in that outcome, so we have to be ready for “good enough” airplane safety.  (Curiously, our standards for testing and safety around aircraft seem seriously out of whack with the same standards for automobiles – even though it’s perfectly clear we could save more human life if we focused more on cars.  We’re funny about some issues – often, things that give us a feeling of control or freedom translate into a false sense of safety, when in many cases handing control to a trained professional is demonstrably safer.)

But the airlines and the manufacturers know that crashes will occur.  So what do they do about that?  They plan for it.  They have crash investigation teams.  In IT security, we get that part – we have incident response and forensic teams.  But airplane makers do one other thing: they test, and test again, and they keep records.  Is that because they are all geeks sporting pocket protectors, and are no fun on a Saturday night?  Well, perhaps, but the tests and the records serve another vital purpose: self-defense, when the hindsight blamestorm rolls over the hill next.  If you think about it, it’s obvious this will happen – no matter how safe you make your plane, there will be a failure, and someone will try to blame you for it.  What you have to plan for is demonstrating that you were diligent.

That’s point one – the immense value of test records, when the inevitable problems arise.  IT security teams often get sloppy at this point, focusing instead on shiny, fun things.  Chasing the ‘sploit news of the day, or scanning the sensor feeds, is much more in keeping with our short attention span culture.  If the testing and record keeping is done manually, it’s really tedious, time-consuming, and unsexy.  The good news is that it is largely automatable – not just the testing, but nowadays the evaluation of the testing as well.

That bridges to point two – thinking again of the aircraft engineers, all those records solve one problem, but at the same time they create another massive one.  If you test a machine to destruction, you know when it fails.  Someone can come to you later and say “so you knew the tail could fall off if subjected to X psi?”, and you have to say yes – you kept records that show that you knew.  Knowing isn’t enough – piles of facts are just piles of facts.  They have to be evaluated – assessed for risk.  If you just show that you’re testing by making a mountain of records, then you may pass an annual audit (since audits most often only check that you’re going through the motions, not whether anyone has any clue what it’s all for).  But you create a massive trap for yourself if you don’t analyze the meaning of the data.  You’re making perfect grist for the blame mill – records that show you “should have known”, whether you actually did or not.  One of my customers refers to “looking for a needle in a needle-stack” – if a bad guy gets in, and the vulnerability was documented in your own records, how are you going to argue that you didn’t know?  You can’t possibly read all the phone books of data that come out of your automated security assessments.

So to bring it back to Wyndham’s excellent question, what is “good enough” security?  I suggest it’s more than running a passably sound infrastructure, and it’s more than accumulating data about the security defects in that infrastructure.  You know you’re going to find more problems than your security team is funded to fix.  Worse, you know that your efforts will include at least some defects that will come back to bite you later, precisely because you didn’t fix everything.  For the sake of your business, you need to fix the most important issues first – you need to know which defects are most likely to lead to a breach.  For the sake of your own career, you need to demonstrate that you processed phone book after phone book of the data that comes down the conveyor belt at you.  Woe betide the security professional whose mailbox contained the details of the vulnerability that was used in a breach, even if it’s buried in megabytes of surrounding noise.  The blamestormers won’t care about that – the facts were there, you should have known.

The good news is you can automate all this.  You can feed the phone books into automated risk assessment, which is good because it answers “what next?”.  But let’s be frank: it’s also great because it answers “why not?” – for each of the vulnerabilities you couldn’t fix, because there is only so much political capital, and so many hours in a day, you’ll need to show why you didn’t address it, even though you were informed.  Politicians go to spectacular lengths to engineer shields of blame avoidance.  In IT Security and Risk Management, you can build a force field using automation software, and you’re going to need it, because the hindsight goons are coming.
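To make the “what next?” and “why not?” pairing concrete, here is a minimal, hypothetical sketch of risk-based triage with a built-in audit trail. Everything in it is an illustrative assumption – the one-line risk model, the CVE numbers, and the host names are invented for the example, and have nothing to do with any real product or the Wyndham case. The idea is simply: rank findings by estimated risk, fix as many as your capacity allows, and record a dated reason for every deferral.

```python
# Hypothetical sketch: risk-ranked vulnerability triage with an audit trail.
# The risk model, hosts, and CVE IDs below are invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    host: str
    cve: str
    severity: float   # e.g. a CVSS-style base score, 0-10
    reachable: bool   # can an attacker actually reach this host?

def triage(findings, capacity):
    """Rank findings by risk; log a reason for everything deferred."""
    # Toy risk model: unreachable flaws are heavily discounted.
    def risk(f):
        return f.severity * (1.0 if f.reachable else 0.1)

    ranked = sorted(findings, key=risk, reverse=True)
    fix_now = ranked[:capacity]
    # The deferral log is the "force field": written evidence that the
    # remaining defects were seen, assessed, and consciously deprioritized.
    audit_log = [
        f"{date.today()}: deferred {f.cve} on {f.host} "
        f"(risk {risk(f):.1f}) - remediation capacity exhausted"
        for f in ranked[capacity:]
    ]
    return fix_now, audit_log

findings = [
    Finding("web01", "CVE-2013-0001", 9.8, True),
    Finding("db02",  "CVE-2013-0002", 7.5, False),
    Finding("app03", "CVE-2013-0003", 6.4, True),
]
fix_now, audit_log = triage(findings, capacity=2)
```

In this sketch the high-severity but unreachable flaw on db02 is deferred, and the log entry records when and why – exactly the kind of evidence you want on hand when the hindsight goons arrive.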

Dr. Mike Lloyd is Chief Technology Officer at RedSeal Networks. He has more than 25 years of experience in the modeling and control of fast-moving, complex systems. He has been granted 20 patents on security, network assessment, and dynamic network control. Before joining RedSeal, Dr. Lloyd was CTO at RouteScience Technologies (acquired by Avaya), where he pioneered self-optimizing networks. Lloyd was previously principal architect at Cisco on the technology used to overlay MPLS VPN services across service provider backbones. He joined Cisco through the acquisition of Netsys Technologies. He holds a degree in mathematics from Trinity College, Dublin, Ireland, and a PhD in stochastic epidemic modeling from Heriot-Watt University, Edinburgh, Scotland.