Avoiding the Insider Threat: How Not to Star in Snowden Part II

Hollywood sure knows how to maximize value. That’s why we have long strings of sequels, prequels and movie plot reuse that stretch across generations. “Star Wars,” “Rocky,” “A Nightmare on Elm Street” or “Harry Potter,” take your pick. Which makes me wonder how many sequels will we have to “Snowden”?

In September, the film “Snowden” debuted to generally favorable reviews. Without a doubt, Oliver Stone’s depiction will have ample opportunity to spawn a Snowden Part II, since we’re all still waiting to learn the former NSA contractor’s ultimate fate. If the story ends with the fugitive’s return to the United States and a trial, that alone would be grounds for a sequel, no?

The chain of events surrounding Snowden has been the stuff of high drama and intrigue, but one of the most striking aspects of the saga is how frequently, despite everything we’ve learned from it, organizations still fall victim to the Insider Threat.

Evidence that the Insider Threat is alive and well abounds in the never-ending string of data breaches and systems compromises that have taken place both before and since the documents Snowden stole were first published in 2013. Most recently, the 2016 Verizon DBIR linked the Insider Threat to more than 10,000 security incidents, with the public, healthcare and finance sectors suffering the most.

Organizations continue to face this threat for any number of reasons: a lack of budget, expertise or knowledge of the most effective tools available to fight it, or simply not knowing how to use data to identify when an Insider Threat exists. Most likely it’s a combination of factors. Fortunately, most of them can be addressed.

Understand Your Data

To reduce the chances of falling victim to an insider-driven breach, security and risk professionals should start by learning what their available data can tell them. Most organizations with information and systems worth defending already have ways to gather data that can point to an Insider Threat in progress. Unfortunately, for most it amounts to a collection of event logs and anomaly alerts that provide little to no insight or context, which lets the bad guys strike and vanish before anyone realizes they were there.

But platforms now exist that can ingest the massive volumes of information available, recognize indicators and generate analytics that pinpoint threats before they develop into full-blown breaches. Technologies such as Apache Hadoop have enabled cybersecurity platforms to produce data-driven insights about exactly what’s happening in an environment: who’s accessing what, when and where; which accounts are acting within acceptable parameters and which are exhibiting suspicious behavior; and whether or not sensitive data is encrypted.
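The kind of "acceptable parameters" check described above can be sketched in a few lines. This is a minimal, illustrative example, not any particular product's logic: the event fields, the business-hours window and the flagging threshold are all assumptions, and a real Hadoop-based platform would run this sort of analysis over audit logs at far larger scale.

```python
from collections import defaultdict

# Hypothetical access events: (user, resource, hour_of_day).
# In practice these would be parsed from HDFS or application audit logs.
events = [
    ("alice", "/finance/q3.xlsx", 10),
    ("alice", "/finance/q3.xlsx", 11),
    ("alice", "/finance/q2.xlsx", 14),
    ("bob", "/hr/salaries.csv", 2),
    ("bob", "/hr/salaries.csv", 3),
    ("bob", "/eng/design.doc", 2),
]

def off_hours_ratio(user_events, start=8, end=18):
    """Fraction of a user's accesses falling outside business hours."""
    outside = [e for e in user_events if not (start <= e[2] < end)]
    return len(outside) / len(user_events)

by_user = defaultdict(list)
for e in events:
    by_user[e[0]].append(e)

# Flag accounts acting outside an "acceptable parameter" -- here, a
# simple (assumed) threshold on the share of off-hours activity.
flagged = {u for u, evs in by_user.items() if off_hours_ratio(evs) > 0.5}
print(flagged)  # {'bob'} -- bob's accesses cluster at 2-3 a.m.
```

Real deployments would baseline each account against its own history rather than a fixed window, but the principle is the same: turn raw events into per-account behavioral measures, then compare against expectations.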

Gain Visibility 

The notion that “you can’t defend what you can’t see” certainly isn’t new in the security world, though “visibility” has lately been making the rounds of media and marketing circuits as a redefined term. To some it means a clearer picture of what’s happening inside data centers at the application level; to others, a map of every endpoint. Either way, visibility amounts to getting a handle on what’s transpiring throughout your entire infrastructure: a holistic view.

How can organizations gain that holistic view? The simplest answer is through their data. More often than not, the data that would reveal an organization’s exposure to Insider Threats is locked away in reams of logs siloed within different departments and reviewed infrequently or, worse yet, never. When SIEM and other signature-based detection tools spot anomalies, a lack of context often means no one knows there’s a problem until it’s too late, or a lot of time and money is spent chasing false positives.

Rather than rely solely on legacy tools and procedures, security and risk professionals should use tools that can ingest, manage and analyze unlimited volumes of data and then provide analytics that shine a light on the darkest, riskiest activity on their networks. Data-enabled visibility yields insights and analytics that empower humans to identify insider problems while there is still time to act.
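One way to add the context that single-signal alerts lack is to combine several indicators into one risk score per account, so analysts chase the riskiest activity first instead of every anomaly. The sketch below is hypothetical: the indicator names and weights are invented for illustration, not drawn from any particular tool.

```python
# Assumed indicator weights -- in practice these would be tuned
# against historical incidents, not hard-coded.
WEIGHTS = {
    "off_hours_access": 2.0,
    "bulk_download": 3.0,
    "new_device": 1.0,
    "touched_sensitive_data": 4.0,
}

def risk_score(indicators):
    """Sum the weights of the indicators observed for one account."""
    return sum(WEIGHTS.get(name, 0.0) for name in indicators)

# Indicators observed per account (illustrative data).
accounts = {
    "alice": ["new_device"],
    "bob": ["off_hours_access", "bulk_download", "touched_sensitive_data"],
}

# Triage queue: highest combined risk first.
ranked = sorted(accounts, key=lambda u: risk_score(accounts[u]), reverse=True)
print(ranked)  # ['bob', 'alice'] -- bob scores 9.0 vs. alice's 1.0
```

The design point is that no single indicator fires an alert; it is the combination, scored in context, that surfaces the account worth a human's attention.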

Cleanse your Data

The thing about data is that it’s both critically valuable and, at times, frustratingly overwhelming. In today’s IT environment, the choice is yours. The data you already have is waiting to answer almost every security question you have, but only if that data is clean. As we all know: garbage in, garbage out.

Clean data yields information about your security and risk posture that can trigger a machine or prompt a human to take actions that defend your networks. Garbage data doesn’t point to where the fires are burning; it’s little more than a befuddling collection of zeros and ones that forces people to become digital detectives, a role no one has time to play.

To cleanse your data, use a solution that not only analyzes the data but also provides accurate, actionable knowledge of threats. Logs generated by SIEMs, firewalls and other legacy technologies can’t, on their own, produce the granular understanding needed to know where threats actually are. But when that same data flows into a system designed to spot the needle in the haystack, it becomes data intelligence an organization can act on.
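At its simplest, cleansing means normalizing raw, inconsistently formatted log lines into structured records and discarding garbage before any analysis runs. The log format below is invented for illustration; real cleansing pipelines handle many formats and far messier failure modes.

```python
import datetime

# Raw lines in an assumed "timestamp user action path" format,
# including the kind of garbage that real log streams contain.
raw_lines = [
    "2016-10-03T02:14:07 bob READ /hr/salaries.csv",
    "corrupted@@line%%",
    "2016-10-03T10:01:33 alice READ /finance/q3.xlsx",
    "2016-10-03T99:99:99 eve WRITE /tmp/x",  # malformed timestamp
]

def parse(line):
    """Return a structured record, or None for garbage input."""
    parts = line.split()
    if len(parts) != 4:
        return None
    ts, user, action, path = parts
    try:
        when = datetime.datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S")
    except ValueError:
        return None  # unparseable timestamp -> drop, don't guess
    return {"when": when, "user": user, "action": action, "path": path}

clean = [r for r in (parse(l) for l in raw_lines) if r is not None]
print(len(clean))  # 2 -- two of the four lines survive cleansing
```

Dropping records that fail validation, rather than passing them downstream half-parsed, is what keeps the "garbage in" from becoming misleading analytics out.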

Improvements Coming

I’m often asked if the work we’re all doing to thwart attackers is having any real impact. I answer “yes,” because we are, in fact, reducing risk in a measurable way. But the threat never goes away: so long as we have technological innovation, there will be those who use it for both good and evil. Still, we are moving toward a world with enough technology and knowledge to build reference architectures and risk models that help the good guys prevail. One thing is absolutely certain: we’re now at a point where any organization can greatly reduce its odds of being cast in a leading role in Snowden Part II.

Eddie Garcia is an information security architect at Cloudera, a provider of enterprise analytic data management, where he helps enterprise customers reduce security and compliance risks associated with sensitive data sets stored and accessed in Apache Hadoop environments. He was formerly the VP of InfoSec and Engineering for Gazzang prior to its acquisition by Cloudera. He was the chief architect of the Gazzang zNcrypt product and is author of four issued and provisional patents for data security. Prior to Gazzang, he was responsible for Enterprise Architecture projects that helped AMD’s distribution and OEM partners securely collaborate over secure networks with single sign-on. He holds an engineering degree in computer science from the Instituto Tecnologico y de Estudios Superiores de Monterrey.