Security Experts: When an Incident Happens, Do You Have What it Takes to Respond?

Most Organizations Don’t Have Security Solutions, Processes and Policies That Extend Across Networks, Endpoints, Mobile and Virtual.

To borrow a phrase from the military, security professionals are dealing with the ‘fog of war’: the inability to correlate accurate data in a way that enables educated decisions. We’ve all been there. We’re in a high-stress situation, the clock is ticking and we need to make decisions quickly, but we’re hampered by the quality and completeness of the information available.

According to the 2011 Enterprise Strategy Group Research Report, U.S. Advanced Persistent Threat Analysis, nearly 60 percent of enterprise organizations are certain or fairly certain they’ve been the target of an advanced persistent threat.

When an incident happens, these organizations report that their biggest challenges include performing forensics, problem detection/isolation, and data collection and analysis. Simply put, they lack the visibility they need to respond quickly and effectively. When faced with a security event, there are three scenarios that create uncertainty: lack of information, too much information and misinformation. Let’s take a look at each of these scenarios, common mistakes and best practices to help lift the ‘fog.’

1. Lack of information: Flying blind due to poor visibility and scrambling to piece together data is an untenable position to be in during a security incident. Without data you can’t respond. Consider an organization whose lack of visibility delayed identification of a potential incident until nearly 30 days after the initial attack. By that time, much of the data needed to support an investigation was no longer available, as the malware had already started to destroy its trail. This large-scale, systemic attack seized on the fact that the organization had many smaller, distributed offices with local IT staff acting on their own, with no central management or enforcement of policies and rules. Offices were able to circumvent rules to get their jobs done without gaining authorization from management. In this instance, because large files slowed down the corporate network, local IT staff had created unencrypted back channels over the Internet, exposing sensitive financial information to attackers. And because information wasn’t shared across silos and with management, identifying the root cause of the incident and establishing a comprehensive response, including defining and enforcing change request policies, took weeks.

Best practices in this scenario include:

• Freeze Systems – Large, distributed organizations must rely on centrally enforced change control. When changes are needed, they should follow the defined process rather than local workarounds.

• Updated System Documentation – Make sure you have a central repository where you can go to identify with certainty where sensitive data is stored, key stakeholders, systems in use, etc.

• Incident Response Runbook – Step-by-step Incident Response instructions for teams to follow in every location break down siloed, disjointed efforts.
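A runbook is most useful when every location works from the same ordered steps. As a hypothetical sketch (the step names, owners and `Runbook` class are illustrative, not from any specific product), a runbook can be represented as structured data so that progress is explicit and no step is skipped:

```python
# Hypothetical sketch: an incident-response runbook as data, so every
# location follows the same ordered steps. Step names and owners are
# illustrative assumptions, not from any specific framework.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str          # what to do
    owner: str         # role responsible
    done: bool = False

@dataclass
class Runbook:
    incident_type: str
    steps: list = field(default_factory=list)

    def next_step(self):
        """Return the first incomplete step, or None when finished."""
        return next((s for s in self.steps if not s.done), None)

    def complete(self, name):
        for s in self.steps:
            if s.name == name:
                s.done = True
                return
        raise ValueError(f"unknown step: {name}")

rb = Runbook("suspected-malware", [
    Step("Isolate affected hosts", "local IT"),
    Step("Preserve logs and memory images", "local IT"),
    Step("Notify central incident response team", "site lead"),
    Step("Begin forensic collection", "IR team"),
])

rb.complete("Isolate affected hosts")
print(rb.next_step().name)  # -> Preserve logs and memory images
```

Keeping the runbook as data rather than a prose document also makes it easy to store in the central repository described above, alongside system documentation.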

2. Too much information: Many organizations operate with the philosophy that it is better to collect all the data possible and filter through it later. They fall into the trap of creating and deploying rules based on broad triggering conditions that generate an unmanageable number of alerts, many of which turn out to be ‘false positives,’ while real threats slip through undetected as ‘false negatives.’ As a result, when an attack actually happens, they waste time sifting through volumes of data while malware spreads. In this example, an organization is alerted by its external security service to a potentially serious incident but can’t find the corresponding indicator when it searches its own internal systems. Bogged down by an overwhelming number of alerts that have created ‘noise,’ the team can’t identify true alerts. Furthermore, by capturing too much data, systems can become overloaded and drop packets, losing critical data that could point to real problems. With few controls to ensure the efficacy of the security rules they deploy, the organization is unable to quickly detect real threats.

Best practices in this scenario include:

• Review InfoSec Policy – You need to determine a baseline for the type of data and traffic allowed on the network and what data is critical to protect. Then you can adopt high-value rules that align with policies.

• Custom Rules Acceptance Testing – Poorly-crafted and inefficient rules negatively impact security performance and effectiveness. Rules should be created following a development lifecycle that includes gathering and reviewing intelligence, writing the rule, testing it and submitting it for further approval before putting the rule into production.

• Systematic Program of Review – In today’s dynamically changing environments there’s no such thing as a ‘set it and forget it’ approach. Quarterly reviews of your incident response policies, configurations, rules performance and data are critical so you can assess and tune as needed.
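One concrete input to a quarterly review is per-rule alert precision: how often each rule's alerts were confirmed as real threats versus dismissed as false positives. The sketch below is a hypothetical illustration, assuming analyst dispositions are available per rule; the rule names and the 20 percent precision threshold are assumptions, not a standard:

```python
# Hypothetical sketch of a quarterly rules review: given analyst
# dispositions per rule (True = confirmed threat, False = false positive),
# flag noisy rules for tuning or retirement. The 20% precision threshold
# is an illustrative assumption.
def review_rules(dispositions, min_precision=0.2):
    noisy = []
    for rule, outcomes in dispositions.items():
        if not outcomes:
            continue  # no alerts this quarter; nothing to measure
        precision = sum(outcomes) / len(outcomes)
        if precision < min_precision:
            noisy.append((rule, precision))
    return sorted(noisy, key=lambda x: x[1])  # worst offenders first

alerts = {
    "rule-broad-port-scan": [False] * 95 + [True] * 5,  # mostly noise
    "rule-known-c2-domain": [True] * 9 + [False],       # high value
}
for rule, p in review_rules(alerts):
    print(f"{rule}: {p:.0%} precision - review triggering conditions")
```

Rules that fall below the threshold become candidates for the acceptance-testing lifecycle above before redeployment.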


3. Misinformation: Inaccurate information, generated accidentally or maliciously, can lead to an inappropriate response. For example, an organization notices a significant number of connection attempts to a highly controlled and sensitive network segment. The attempts are coming from trusted, internal hosts that don’t have authorized access. While it could be an instance of misconfiguration, it could also be a malware attack. The initial reaction is to block the attempts and drop the traffic. But acting rashly can compound the problem; increasingly, malware writers include a ‘kill switch’ so that the malware instantly destroys all evidence of its activity when it senses active countermeasures. If this turns out to be a targeted attack aimed, for example, at stealing intellectual property stored on this segment of the network, simply putting a block in place could limit the organization’s ability to conduct reconnaissance and analysis. With no trail to follow, the organization loses its ability to fully investigate the “who, what, when and how” and to minimize the risk of reinfection.

Best practices in this scenario include:

• Network and Communications Visibility – You need to be able to see the file trajectory – when the file entered the network and where it went. You also need to be able to monitor all connection data – hosts, ports, protocols, traffic size, etc. – to know more about communications occurring. This data is essential for forensic analysis and investigation.

• Sources of Truth – Make sure you know up front what data is available from which technologies. You don’t have time to figure this out in the midst of an incident.

• Incident Notification and Collaboration Call Tree – Too often individuals make decisions on their own, which can lead to rash behavior. Collaboration with stakeholders brings more perspective to a situation; stepping back before reacting enables more informed decisions.
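File trajectory amounts to ordering transfer events for a given file by time. As a hypothetical sketch (the event fields `ts`, `src`, `dst` and `sha256` are illustrative assumptions about what connection records contain, not any particular tool's schema):

```python
# Hypothetical sketch of file-trajectory reconstruction from connection
# records: ordering transfers of one file by timestamp shows when it
# entered the network and where it spread. Field names are assumptions.
def trajectory(events, file_hash):
    """Ordered list of host-to-host transfers for one file."""
    hops = [e for e in events if e["sha256"] == file_hash]
    return sorted(hops, key=lambda e: e["ts"])

events = [
    {"ts": 3, "src": "mail-gw", "dst": "host-a", "sha256": "abc"},
    {"ts": 7, "src": "host-a", "dst": "file-srv", "sha256": "abc"},
    {"ts": 5, "src": "host-b", "dst": "host-c", "sha256": "def"},
]

for hop in trajectory(events, "abc"):
    print(f"t={hop['ts']}: {hop['src']} -> {hop['dst']}")
```

The first hop identifies the entry point (here, a mail gateway in the sample data); the remaining hops show the spread that a forensic investigation, rather than an immediate block, can reveal.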

In each of these scenarios poor visibility and lack of clarity impede the ability to respond. Most organizations don’t have solutions, processes and policies that extend across networks, endpoints, mobile and virtual. Attackers are taking advantage of the gaps to introduce malware.

It’s time to remove the blinders that exist. Lack of information, too much information and misinformation weaken our ability to detect and respond to an incident. To ensure we’re prepared, people, processes and technology must come together – sharing data in a systematic way to shore up defenses and speed response.

Marc Solomon, Cisco's VP of Security Marketing, has over 15 years of experience defining and managing software and software-as-a-service platforms for IT Operations and Security. He was previously responsible for the product strategy, roadmap, and leadership of Fiberlink’s MaaS360 on-demand IT Operations software and managed security services. Prior to Fiberlink, Marc was Director of Product Management at McAfee, responsible for leading a $650M product portfolio. Before McAfee, Marc held various senior roles at Everdream (acquired by Dell), Deloitte Consulting and HP. Marc has a Bachelor's degree from the University of Maryland, and an MBA from Stanford University.