
User Errors Are More than an "Oops"

Call them errors behind the keyboard, cockpit errors, or wetware errors. Sometimes we just say “oops.” But these terms do not do them justice. Whatever you call them, they are the bane of our existence. Studies vary widely in estimating how many vulnerabilities, errors, and subsequent data or system compromises occur because a user or administrator did something wrong. But do we have any sense of how negligent we, as users, really are?

According to a poll conducted in the UK, users accidentally left an estimated 17,000 USB memory sticks in their clothes when taking them to the dry cleaner. That’s one memory stick for approximately every 3,647 people in the UK. And this is just memory sticks left at dry cleaners. Consider the small fraction of people who actually use dry cleaners and we have to believe that the actual number of lost memory sticks in the real world is much larger than 17,000. If we apply the same ratio to the United States population, we’d have 85,000 lost memory sticks. If these 85,000 memory sticks average out to 2GB each, that is a total of 170TB of data. That is enough space to hold the entire printed collection from the Library of Congress, 17 times. How much corporate data, research and development, or personal private data was on those drives? We will never know for sure, but with that much storage gone, we can assume it was a lot.
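The extrapolation above can be sketched as a quick back-of-envelope script. Note that the population figures are my own assumptions (circa-2011 estimates), not from the poll itself:

```python
# Back-of-envelope estimate of lost USB sticks and the data they could hold.
UK_POPULATION = 62_000_000    # assumed ~2011 UK population
US_POPULATION = 310_000_000   # assumed ~2011 US population
STICKS_LOST_UK = 17_000       # from the dry-cleaning poll
AVG_CAPACITY_GB = 2           # assumed average stick capacity

people_per_stick = UK_POPULATION / STICKS_LOST_UK   # one stick per ~3,647 people
us_sticks = US_POPULATION / people_per_stick        # same ratio applied to the US
total_tb = us_sticks * AVG_CAPACITY_GB / 1000       # total lost capacity in TB

print(f"{people_per_stick:,.0f} people per stick, "
      f"{us_sticks:,.0f} US sticks, {total_tb:,.0f} TB lost")
```

The point of the sketch is that even with conservative inputs, the lost capacity lands in the hundreds of terabytes.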

What security protection could prevent this problem? Your trusty firewall or anti-virus would not have helped. Obviously an encrypted drive would make a difference, but really, how many of those 17,000 flash drives were actually encrypted? Maybe twelve? The most effective protection we have against a user losing something would be simply to not lose it in the first place.

Take a look at the data available for healthcare data privacy breaches reported on the www.hhs.gov Web site for the period of September 2009 through November 2010. As of March 3, 2011, the site showed 241 reported “incidents” that resulted in the breach of the personally identifiable health information of more than eight million people. If we look at the type of breach, we can see that “improper disposal,” “loss,” and “unauthorized disclosures” that can be directly attributed to user error (manual processes that absolutely rely on a user doing the right thing) account for 59 incidents covering over 1.3 million people. Reported “theft” accounted for another 118 incidents affecting about 5.3 million people. In my experience, a significant number of users would rather report something “stolen” than “lost,” as it potentially makes them seem less responsible. So if we say just 20% of the “theft” was actually “loss” (counting, say, “theft of an unencrypted laptop from my car” as partial user error), we get about 83 incidents of personal user errors that accounted for the loss and potential compromise of the healthcare information of around 2.4 million individuals. If these numbers are reasonable, that means more than 30% of the reported breaches were due to user errors. Keep in mind that the HHS website only includes breaches of 500 or more individuals, and breaches that were actually detected. So we really have no idea how many breached records are not covered by the reported information.
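The re-attribution arithmetic works out as follows. The 20% figure is the article's assumption about how much reported "theft" was really "loss":

```python
# Re-attributing a share of reported "theft" to user error.
# Incident and record counts come from the HHS figures cited above.
TOTAL_INCIDENTS = 241                                # all HHS-reported incidents
error_incidents, error_records = 59, 1_300_000       # disposal / loss / disclosure
theft_incidents, theft_records = 118, 5_300_000      # reported as theft
THEFT_AS_LOSS = 0.20                                 # assumed share of "theft" that was loss

incidents = error_incidents + THEFT_AS_LOSS * theft_incidents
records = error_records + THEFT_AS_LOSS * theft_records
share = incidents / TOTAL_INCIDENTS

print(f"~{incidents:.0f} incidents, ~{records:,.0f} records, "
      f"{share:.0%} of reported breaches")
```

Varying the 20% assumption up or down does not change the conclusion much: user error still accounts for a substantial fraction of the reported breaches.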

Among other things, www.privacyrights.org aggregates breach information from a variety of sources (including the HHS site from the previous paragraph). As of March 3, 2011, the site tracked 2,374 reported data breaches, covering a total of 515,002,269 records since 2005. Let’s continue to assign at least partial responsibility for any breach that was described as something like “employee left the unsecured medical records in an unlocked vehicle.” If we review just the breaches identified and reported in 2011, we see a total of 102 reported breaches that affected just over 3 million records. Part of what makes this information disturbing is the fact that in 37 of those 102 breaches, the number of records affected was identified as “unknown,” meaning they are not included in the 3 million “known” records.

If we look at which of those breaches is related to “user error,” we can count 30 incidents, 10 of which show an “unknown” number of affected records. Of the 20 with actual counts, we see about 2 million records compromised due to user error, between January 1 and March 3, 2011 alone.

I will now slightly abuse these numbers, based on the following assumptions:

1. The “unknown” incidents affected approximately the same ratio of records as the “known” incidents.

2. The number of incidents remains roughly constant throughout a given year.

If these assumptions are even roughly accurate, we see about 30 incidents affecting approximately 3 million records every two months, for an annual total of about 180 reported breaches covering roughly 18,000,000 records. The Ponemon Institute publishes an analysis of the cost of reported security breaches, which indicates that a data breach in the United States costs the organization about $204 per breached record. For 18 million records, that is almost $3.7 billion. Read that again – $3.7 billion, every year, from largely preventable mistakes.
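The annualization is simple multiplication, which makes it easy to check. The inputs are the two-month figures and the Ponemon per-record cost cited above:

```python
# Annualizing the Jan 1 - Mar 3 user-error figures and applying the
# Ponemon per-record breach cost. A two-month sample is scaled by 6.
RECORDS_PER_TWO_MONTHS = 3_000_000   # from the 2011 sample above
INCIDENTS_PER_TWO_MONTHS = 30        # user-error incidents in the sample
COST_PER_RECORD = 204                # Ponemon US per-record cost, USD

annual_records = RECORDS_PER_TWO_MONTHS * 6
annual_incidents = INCIDENTS_PER_TWO_MONTHS * 6
annual_cost = annual_records * COST_PER_RECORD

print(f"{annual_incidents} incidents, {annual_records:,} records, "
      f"${annual_cost:,} per year")
```

Keeping the arithmetic explicit like this also makes it easy to rerun the estimate as the per-record cost figure is updated each year.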

The worst part of this is that while we like to say that “accidents happen,” most of these really could be avoided. When you see incidents reported like “an employee emailed 2,400 records to her personal computer, and four other people”, and “nearly 50 boxes of medical records, Social Security numbers, addresses, etc., were found in a paper recycling dumpster behind a library,” it is really hard to argue that these were anything other than user errors. Unfortunately for the breach count, we cannot take people out of the equation. So what can we really do? There is no great answer here other than “do better.” There is no such thing as a human firewall that will stop people from doing these things, and no incident detection system in the world is going to see that employee emptying his trunk into the dumpster. So we will continue to have errors behind the keyboard. However, we can be more dedicated to doing what we know we should be doing, including:

1. Have a formal Security Policy, which includes guidance on data classification and handling.

2. Clearly communicate the policy requirements to all employees. Not most employees, all employees.

3. Repeat that communication not once, but on a regular basis, such as annually.

4. Give someone formal responsibility (and authority) to keep training materials up to date and to conduct both “new employee” and refresher training.

5. Include horror stories, and make them relevant. With a bit of research on some of the places I list above, you will most likely find examples that will fit your organization. If you have internal incidents, you obviously want to think about how much you want to reveal to your own staff. However, having a concrete example with which employees can identify can be invaluable.

It sometimes seems hopeless, but with a little diligence, we can be better. And, looking at these types of numbers, we have to be.

Jon-Louis Heimerl is Director of Strategic Security for Omaha-based Solutionary, Inc., a provider of managed security solutions, compliance and security measurement, and security consulting services. Mr. Heimerl has over 25 years of experience in security and security programs, and his background includes everything from writing device drivers in assembler to running a world-wide network operation center for the US Government. Mr. Heimerl has also performed commercial consulting for a variety of industries, including many Fortune 500 clients. Mr. Heimerl's consulting experience includes security assessments, security awareness training, policy development, physical intrusion tests and social engineering exercises.