Balancing Risk and Reward in Information Security: Are you Willing to Spend X to Avoid Y?
My daughters tell me that I am too careful and I over-think decisions. I research a car before buying, and build a spreadsheet that includes things like warranties and total cost of ownership for a year. I think, however, that I am just practical.
We make decisions every day, balancing risk and reward, deciding on a particular course of action. Much of the time, we make decisions unconsciously. You decide to speed, accepting the increased risk of accidents and tickets. You decide to eat that fast food burger, accepting the health risks. You decide to smoke, accepting the risk that it causes cancer and a host of other illnesses. The forecast says that it might rain, so you throw an umbrella in the car. Me? I bought a half dozen umbrellas just so I could keep at least one in each car, and another couple in the closet, just in case. Call me risk averse.
Information security is the same way. In the end, how good your security is comes down to your risk management strategy: how well you identify, then manage, risk and potential risk in your environment. The real question about risk is “how can this hurt me?” The real question about managing risk is “how many resources (time, energy, hours, focus, funds, etc.) am I willing to spend to make the risk hurt less, and, of course, how much less pain am I willing to tolerate?” This is a risk/reward model, or maybe cost/benefit, and it often boils down to ROI. Are you willing to spend X to avoid Y?
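That last question has a classic quantitative form in security circles: annualized loss expectancy (ALE), where SLE (single loss expectancy) is asset value times exposure factor, and ALE is SLE times the annualized rate of occurrence (ARO). A minimal sketch in Python; every dollar figure below is made up for illustration:

```python
# Classic quantitative risk arithmetic:
#   SLE (single loss expectancy) = asset value * exposure factor
#   ALE (annualized loss expectancy) = SLE * ARO (annualized rate of occurrence)
def annualized_loss_expectancy(asset_value, exposure_factor, aro):
    sle = asset_value * exposure_factor
    return sle * aro

# "Spend X to avoid Y": a control is worth it when the ALE it removes
# exceeds its own annual cost.
def control_roi(ale_before, ale_after, annual_control_cost):
    return (ale_before - ale_after) - annual_control_cost

# Hypothetical numbers: a $500k asset, 40% exposure, expected once a decade.
ale = annualized_loss_expectancy(500_000, 0.40, 0.1)
print(ale)                              # 20000.0
print(control_roi(ale, 5_000, 8_000))   # 7000.0 -> the control pays for itself
```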
You can handle risk management in several ways:
1. Accept risk. Determine that the risk of something happening is acceptable, and that if “the bad thing” happens, that it is okay. I continue to accept the risk that my stupid Sunbeam toaster will burn my toast about 30% of the time. Eventually, I will refuse to accept this risk any longer, and buy a new toaster. But for now, I accept the consequences that I either remain vigilant enough to pop up my toast or that I have to throw in another slice of bread.
2. Mitigate risk. Determine that you can take some action to either reduce the chances of the risk being realized, or reduce the impact the risk can have. This is exactly why I installed a home server that includes four drives, set up as two pairs of RAID 1 mirrors. My server holds a complete image of each of my home systems, and every computer in my house is configured for nightly incremental backups. I can rebuild any system in my house. But, I only installed the home server after years of living with Option #3, and doing plenty of cursing when my main computer bit the big one with a CPU and main drive failure. I lost so much stuff, it was not even funny.
3. Ignore risk. It seems that, while on holiday, Michael Cohen made a risk-based decision that did not quite go his way (Google “Michael Cohen Fish Hoek Beach”). Nuff’ said.
4. Assign responsibility for risk. You can essentially contract out the risk to another organization, like a service provider. You sign a contract that says the provider is responsible for all security and indemnifies you against any loss. Or, you buy insurance for the same purpose – to cover any potential loss due to any risk. Insurance can, however, be expensive, and we have seen cases where the insurance company has limited payments due to negligence on the part of the insured. Uncool.
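To make that menu of options concrete, here is a toy decision heuristic in Python. The thresholds and parameter names are mine, not any standard; a real program weighs far more than three numbers:

```python
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"   # contract it out, or buy insurance
    # Note: "ignore" is deliberately absent; it is not a strategy.

def choose_treatment(expected_loss, mitigation_cost, transfer_premium, tolerance):
    """Pick the cheapest defensible option for a single risk (toy heuristic)."""
    if expected_loss <= tolerance:
        return Treatment.ACCEPT          # small enough to live with
    if mitigation_cost <= transfer_premium:
        return Treatment.MITIGATE        # cheaper to fix than to insure
    return Treatment.TRANSFER            # cheaper to insure than to fix
```

Notice that Option #3 gets no branch: ignoring a risk is just accepting it without the courtesy of a decision.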
Except for number 3, all of these options assume that you have first identified the risk. But, what is risk?
Risk is exposure to the chance of injury or loss; a hazard or dangerous chance.
So, what is at risk? Ultimately, the data is at risk. Yes, a system may be at risk, or an application may be at risk. But, in reality, if it is not for the data on the system, or accessed by the application, we would not talk about the risk at all. So, risk all comes down to the data. Follow the data.
So, have you done your Business Impact Analysis (BIA) yet? You can call it whatever you want. I have worked with consulting companies that called it a Data Asset Inventory, or other such related terms, but the premise is always the same. Identify your cool data. Identify where that data sits. Identify what systems, databases, and applications support that data. Identify how your data moves around your environment. Your BIA becomes an evaluation of the criticality of the data, and all of its associated supporting systems.
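Whatever name your consultants use, the deliverable is essentially a structured inventory. A minimal sketch, with illustrative field names and a fabricated example asset:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One row of a BIA / data asset inventory (field names are illustrative)."""
    name: str
    classification: str                 # e.g. "PHI", "cardholder", "public"
    locations: list = field(default_factory=list)          # where the data sits
    supporting_systems: list = field(default_factory=list) # what touches it
    flows_to: list = field(default_factory=list)           # how it moves around
    criticality: int = 1                # 1 (low) .. 5 (mission-critical)

inventory = [
    DataAsset("patient_records", "PHI",
              locations=["ehr-db"],
              supporting_systems=["ehr-app", "backup-server"],
              flows_to=["billing"],
              criticality=5),
    DataAsset("marketing_site_copy", "public", criticality=1),
]

# "Follow the data": work the most critical assets first.
inventory.sort(key=lambda a: a.criticality, reverse=True)
```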
Once you identify and size your risks, you should develop a plan to actually address those risks. The security controls that you select to implement should either reduce the chances that you will actually experience the risk, or control the risk in some manner to lessen its extent.
The HIPAA Security Rule states quite definitively that your security program must be based on a risk assessment:
164.308(a)(1)(ii)(A) Risk Analysis (Required). Conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity.
164.308(a)(1)(ii)(B) Risk Management (Required). Implement security measures sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level…
The good news is that, as a covered entity, you have the chance to define your own standards for actually completing your risk assessment. But, the point is that you gotta do one.
FISMA has, at its core, a Risk Management Framework. FISMA pretty much assumes that your entire security program is built around identifying and actively managing risk. FISMA starts in exactly the manner I have described here:
1. Categorize the information and supporting information systems.
2. Select your initial mitigating controls.
3. Implement your initial mitigating controls.
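Step 1, categorization, is usually done with the FIPS 199 “high water mark” rule: the system is categorized at the highest impact level (low, moderate, high) across confidentiality, integrity, and availability. A quick sketch:

```python
# FIPS 199 "high water mark": a system's overall security category is the
# highest impact level across confidentiality, integrity, and availability.
LEVELS = {"low": 0, "moderate": 1, "high": 2}

def categorize(confidentiality, integrity, availability):
    return max(confidentiality, integrity, availability, key=LEVELS.get)

print(categorize("low", "moderate", "low"))   # moderate
print(categorize("high", "low", "low"))       # high
```

That category then drives which baseline of mitigating controls you start from in Step 2.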
PCI takes a slightly different view of your risks. PCI assumes that you have the same essential baseline risks as anyone else who uses cardholder data in any manner. As such, the PCI DSS assumes you have cardholder data, such as account numbers and related information, at risk. The PCI DSS essentially goes directly to the second step: identifying the set of reasonable, standard controls designed to protect the target data. This is one thing that PCI does better than most – it understands exactly what the core issue is: the protection of the cardholder data. Follow the data.
Extend this same idea further, and consider what PCI says is important in the actual security measures that are designed and implemented. If you look through all of the controls in PCI, two of the most important elements are:
1. Data segregation. Isolate your cardholder environment. This is not an exact DSS requirement, but if you can truly segregate your data, it makes everything else easier.
2. Encryption. Encrypt sensitive cardholder data at rest and during any transmission of the data.
Obviously, there is way more to the PCI DSS than this, but I believe that these are the most important, after, of course, knowing that you actually have cardholder data, and where it is actually stored – all that unimportant BIA stuff.
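For the encryption piece, even a minimal implementation makes the point concrete. A sketch using the third-party Python `cryptography` package (the sample record is fabricated, and key management – the genuinely hard part – is not shown; in real life the key lives in a KMS or HSM, never beside the data):

```python
# Requires the third-party "cryptography" package (pip install cryptography).
# Fernet is authenticated symmetric encryption, a reasonable small-scale
# stand-in for "encrypt sensitive data at rest."
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

record = b"PAN=4111111111111111;exp=12/29"   # fabricated cardholder data
token = f.encrypt(record)                    # this is what lands on disk

assert f.decrypt(token) == record            # round-trips with the key
assert record not in token                   # leaks nothing obvious without it
```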
For that matter, if you look at the details in HIPAA and HITECH, they are especially interested in risk and security relating to protected health information. When you consider HIPAA and HITECH, there are three main elements of security controls that have the largest impact on the environment:
1. Data segregation – isolate your PHI from your non-PHI systems. I recognize that, especially in a clinical environment, this is exactly the opposite of easy, but, the more you can segregate/isolate your PHI, the better you are, for security, for compliance, and for general operations.
2. Encryption. Encrypt your PHI. HITECH even goes so far as to call unencrypted PHI “Unsecured PHI.” The result is that there are effectively two completely different sets of rules covering the use of “Unsecured” and “Secured” PHI.
3. Breach notification. Arguably one of the biggest portions of the HITECH legislation, but one saving grace is that breach notification is only required in the event of an “unauthorized loss or access of unsecured PHI.” So, that laptop that you lost, which includes 120,000 full patient records? I have two questions for you.
a. Is the laptop encrypted?
b. Do you have reasonable assurances that the encryption cannot be breached? (So, think things like, did the owner tape their password to the laptop?)
If the answers to both a and b are “yes”, this is not technically a breach, since the laptop contained secured PHI. You obviously still want to investigate, make sure you know what happened to the laptop and how it was lost, and take action to help ensure that this cannot be repeated. But, since it is not a breach, the reporting requirements are moot.
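The two-question test reduces to a very small decision function. This is a sketch of the logic above – a rough reading of HITECH, not legal advice:

```python
def must_notify(encrypted: bool, key_not_compromised: bool) -> bool:
    """Loss of *secured* PHI (properly encrypted, with reasonable assurance
    the key was not compromised) is not a reportable breach; everything
    else is. Sketch only; consult counsel for the real determination."""
    return not (encrypted and key_not_compromised)

print(must_notify(True, True))    # False: secured PHI, no notification duty
print(must_notify(True, False))   # True:  password taped to the laptop
print(must_notify(False, True))   # True:  unencrypted means "unsecured PHI"
```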
So, did you see the trend between PCI and HIPAA/HITECH? Data segregation and encryption have become two pretty standard security controls that serve well to protect the data, regardless of the exact data. We are, after all, following the data.
But, despite everything we do, there are no guarantees. Risk still exists. Check the exact words here – your controls help reduce the chances something will happen, and help minimize the impact when something does happen. Chances are that you are not really eliminating anything. So, you are living with some risk.
Last night, on my way home in the rain, I should not have been completely surprised when I could not find any umbrellas in my car, right?
Suggested Reading: Are You Gambling with Your Mission-Critical Security Assets?