
Breach Forensics: Keeping Things from Going from Bad to Worse

Dealing With the Aftermath of a Breach Is Challenging, but There Are Things You Can Do to Keep Things From Going From Bad to Worse

Breaking into an enterprise’s digital vault is half the battle; the other half is getting away clean. If the arrest rates for hacking are any indication, that second half has become something of a science in the cyber-underground.

Yet there are still things organizations can do in the aftermath of a breach to keep things from going from bad to worse, and to help track any footprints attackers leave behind.

Investigating Breaches

The recent breach of Mitsubishi Heavy Industries in Japan underscores the challenge businesses are facing. While some have laid the blame for the attack at China’s doorstep, the country has denied any involvement. Oftentimes, attackers will blast out emails from compromised computers to target users, explained Catalin Cosi, head of BitDefender’s Online Threats Lab. By using a botnet and controlling it via Tor, the attacker is already covering his or her tracks, he said.

So begins the game of catch-up for enterprises.

“The same issues are true at both the scenes of major cyber crimes and real-life crimes,” Andrew Brandt, director of threat research at Solera Networks, told SecurityWeek. “Network admins have the ability and access to accidentally (or deliberately) destroy or corrupt evidence in the process of trying rapidly to resolve a breach situation, just as a helpful bystander may complicate an investigation of a shooting by stepping in the spilled blood of a victim. It's also possible that evidence an administrator could use to point to the origin of a breach, or to determine the root cause of some sort of malware infection, isn't even being collected in the first place. The ability to collect and maintain an [incorruptible], complete chain of evidence is key to revealing the scope of any breach.”

Proper incident handling also requires strong malware analysis, opined Dov Yoran, CEO of ThreatGrid.

“To determine the threat to an organization, the malware should be deeply analyzed in sandboxed environments,” he advised. “This enables the analyst to determine specific malware exfiltration methodologies and aid in remediation efforts. Malware is dynamic and communicates with many hosts and URLs. Sandbox analysis results can be used to create dynamic block lists. These dynamic block or monitor lists should be employed at the perimeter to limit exposure and detect malware exfiltration. The solution for detecting exiting data should also include network-based DLP, endpoint DLP and engines which search the web for organizational data.”
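The workflow Yoran describes, turning sandbox output into a perimeter block list, can be sketched roughly as follows. The JSON report layout here is hypothetical; real sandboxes each have their own schema, so the field names would need to be adapted:

```python
import json


def extract_network_iocs(report_path):
    """Pull contacted hosts and URLs out of a sandbox report.

    Assumes a hypothetical report layout with a top-level "network"
    section; adapt the keys to whatever your sandbox actually emits.
    """
    with open(report_path) as f:
        report = json.load(f)
    network = report.get("network", {})
    hosts = set(network.get("hosts", []))
    urls = set(network.get("urls", []))
    return hosts, urls


def write_blocklist(hosts, path):
    # One indicator per line, a common format perimeter devices can ingest.
    with open(path, "w") as f:
        for host in sorted(hosts):
            f.write(host + "\n")
```

The resulting file would then be fed to whatever block-list mechanism the perimeter gear supports, and regenerated as new sandbox runs surface new indicators.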

Once the infected systems on the network have been identified, administrators should take those systems offline and image them in a way that forensically captures how they have been modified, Brandt said. The infected or compromised system should then be replaced with a clean image, and whatever information can be gleaned from network security analytics should be used to determine how the system was compromised in order to “close those loopholes with patches, changes to the system configuration, and close monitoring of the state of that system for some time after it comes back online.”
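The imaging step Brandt recommends boils down to a bit-for-bit copy plus a cryptographic hash, so the evidence can later be shown to be unaltered. A minimal sketch, assuming the source is a device or file opened read-only (in practice a hardware write blocker and a dedicated forensic tool would be used; the device path is a placeholder):

```python
import hashlib


def image_disk(source, dest, chunk_size=4 * 1024 * 1024):
    """Copy `source` (e.g. a block device like /dev/sdb, placeholder path)
    byte-for-byte into the image file `dest`, hashing as we go.

    Returns the SHA-256 digest of the data, which should be recorded
    alongside the image to support a chain of evidence.
    """
    sha = hashlib.sha256()
    with open(source, "rb") as src, open(dest, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            sha.update(chunk)
            dst.write(chunk)
    return sha.hexdigest()
```

Re-hashing the image later and comparing digests verifies that neither the copy process nor subsequent handling modified the evidence.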

“If an attacker has managed to breach a database or is able to retrieve information remotely, it's sensible to pull the network connection from that database or server temporarily, and image it as well,” Brandt continued. “Forensic analysis of the network traffic can clearly delineate the extent of a breach. When high availability is needed, for example in the case of a busy Web server or database server, the network security analytics can point to the domains, IP addresses, or other exfiltration points where the attackers managed to pull down information. Those destination addresses can be added to the firewall to prevent compromised internal systems from making outbound connections to hostile attackers, severely limiting further data loss, while the analyst determines the nature of the breach and how to correct it.”
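Brandt's suggestion of adding attacker destinations to the firewall might look like the following on a Linux perimeter host; the addresses are documentation-range placeholders, not real indicators, and the rules are standard iptables syntax for dropping outbound traffic:

```python
def outbound_block_rules(destinations):
    """Render iptables commands that drop outbound traffic to known
    exfiltration points identified by network security analytics.

    The destination list would come from the breach investigation;
    everything here is illustrative.
    """
    return [f"iptables -A OUTPUT -d {dst} -j DROP" for dst in destinations]


# Placeholder addresses from the documentation ranges:
for rule in outbound_block_rules(["203.0.113.7", "198.51.100.22"]):
    print(rule)
```

Blocking at the OUTPUT (or FORWARD) chain keeps already-compromised internal systems from reaching the attacker while the investigation continues, which is exactly the containment-without-downtime tradeoff Brandt describes.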

Log analysis is also a critical part of investigating a breach, but individual system log retention policies can come up short if Windows system admins use default settings for logging, Yoran told SecurityWeek. This limits log retention based on the size of the log, and can lead to logs being overwritten and evidence being lost, he said.

“The solution is to measure and set an appropriate retention policy on the server and implement a log aggregation or security event management solution,” Yoran advised.
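The "measure" part of Yoran's advice is simple arithmetic: a size-capped log holds a fixed number of bytes, so retention depends on how fast events arrive. A rough back-of-the-envelope sketch, with purely illustrative figures:

```python
def retention_days(max_log_bytes, avg_event_bytes, events_per_day):
    """Estimate how many days a size-capped log retains events before
    the oldest entries are overwritten.

    All three inputs are estimates you measure on your own systems;
    the numbers in the example below are illustrative only.
    """
    bytes_per_day = avg_event_bytes * events_per_day
    return max_log_bytes / bytes_per_day


# e.g. a 20 MB Security log, ~500-byte events, 100,000 events per day
# retains well under a day of history, hence the need for aggregation:
days = retention_days(20 * 1024 * 1024, 500, 100_000)
print(f"{days:.1f} days of retention")
```

If the computed window is shorter than the time it typically takes to detect a breach, evidence will already be gone by the time the investigation starts, which is the failure mode Yoran warns about.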

Logs however have an inherent limitation – they are only as accurate and complete as the intelligence of the devices that generate them, Brandt noted.

“System log data is only one place where an attacker may leave a mark, and it's so obvious that it would be the first place any sensible network admin would look, that log data also, not coincidentally, ends up being the first place a smart attacker would go to hide his or her tracks, presumably by selectively deleting or modifying log entries that would indicate the presence of a breach,” he said.

The most meaningful data, Brandt said, will generally come from information that leads to the source of the breach (such as a spear phishing e-mail that led someone to download malicious code), as well as any reconstructed malware executables, the data’s exfiltration point, the identification of compromised systems and the nature of the data itself.
