Can We Learn from Big Breaches?

Starting with the Target breach from the fall of 2013, it seems like the last year or so has almost been “The Year of PoS Breaches.” P.F. Chang's, Michaels Stores, United Parcel Service and more. If you are a security professional in retail, this is probably your current nightmare – having your phone ring with the message that YOU have been hacked.

Which is why I was a little speechless when a reporter contacted me about a recent breach and asked, “So is there anything new here or is this just another breach?”

“Just another breach.”

Are we at the point where the breach of the PoS system at a national retail organization and the compromise of a large number of credit cards is “just another breach”?

These breaches are a big deal to the organizations affected, and to the consumers affected. We should be taking them seriously, and they should never be “just another breach.” Ever. So, one way we can take them seriously is to think about what we know about the breaches and see what we can learn from them. Based on what we KNOW, can we do anything differently to protect ourselves?

Some of the breaches have come with a decent amount of information about how the breach occurred. For others we don't have as much information. To do a true lessons-learned exercise, we would study one or more of the cases in detail and review their controls to see what failed. But if we don't have that level of detail, or we don't have the insider knowledge, can we still look at some of the things which we know the breaches have in common? Can we draw some conclusions about a set of controls which retail organizations should evaluate in an attempt to reduce their attack profile? Can we learn anything from what we do know to reduce the chances that another organization will be successfully attacked, or to minimize the damage if it is?

1. Network Segregation

We know at least some of the breaches started in one part of the network and spread across the internal network in a series of compromises before the breach reached the PoS systems. We continue to see organizations which are internally flat, or have weak controls to protect internal networks from each other.

What if we identified our important systems – our cool data and systems – and protected those in subnets genuinely guarded with firewalls and robust ACLs? Some organizations do this, but I expect they are in the minority. I once did some work with a hospital which had two separate networks. Clinical systems were on the red network. The cables were all red, and travelled through red conduit. The white network supported the office environment, allowed Internet access and supported email. No system was connected to both the red and the white network – they were truly air gapped. That is pretty good network segregation. What were the chances that a virus downloaded from the Internet could reach the red network? Well, it was theoretically possible, but not without physical interaction – someone would have to physically move it via drive or memory stick, and “red” systems which supported memory sticks were highly controlled and strongly monitored.

It worked in that environment, and gave them confidence that their clinical systems were truly protected. Does everyone need to go that far? Probably not. That might not be a practical solution for many organizations.

But the identification and isolation/segregation of critical systems is, in general, a great practice. Have you truly identified all of your HIPAA systems or your entire cardholder environment? Yes – ALL of them. Effective segregation assumes you have identified your critical systems (Hint: Have you done a Business Impact Assessment?).
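If you want a quick sanity check on whether that segregation actually holds, probing from the “wrong” side of the boundary can be revealing. The sketch below is a minimal example, not a product or procedure recommendation: the cardholder-subnet addresses and ports are hypothetical placeholders, and a real assessment would rely on a firewall rule review and proper testing rather than a handful of connection attempts.

import socket

# Hypothetical cardholder-environment hosts and ports you would expect to be
# blocked from the general office network. Replace with your own addressing.
CARDHOLDER_HOSTS = ["10.20.30.10", "10.20.30.11"]
PORTS_TO_PROBE = [22, 445, 1433, 3389]

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in CARDHOLDER_HOSTS:
        for port in PORTS_TO_PROBE:
            if can_connect(host, port):
                print(f"REACHABLE: {host}:{port} - review the firewall/ACL rules")
            else:
                print(f"blocked:   {host}:{port}")

If any of those connections succeed from an ordinary office workstation, you have a conversation to have with whoever owns those firewall rules.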

2. Anti-Virus Controls

We know at least some of the breaches included malware, either as a root cause or as a means to extend the attack laterally within the company. I believe we also know that many included some variety of point of sale (PoS) malware which, in the end, grabbed credit card information. So, “anti-virus” becomes another strong candidate security control for review.

But anti-virus is problematic, right? Based on reports we've seen over the past few years, we figure that anti-virus catches, at best, maybe 50% of viruses in the wild. But there is more to that number than we often appreciate. A significant percentage of the intrusions I have seen include malware installing on a system which had no anti-virus running or had an outdated version. I have had clients tell me that they do not install anti-malware anywhere internally, since they scan all incoming email and they figure they are protecting their perimeter. I am able to keep my head from exploding, but – oy.

A system like that is not going to catch 50% of anything, especially when an attacker can see what you are running and provide you with a malware option specifically designed to avoid what you have. And, besides which, what is one of the first things an attacker often does when installing malware in your environment? Yeah – turn off your anti-virus. I suspect we all know that anti-virus which is out of date, or turned off, is going to catch a lot less than 50% of viruses.

So, anti-virus isn't a great answer, but it will help. But you can't just “have” anti-virus – you have to take your anti-virus solution seriously (a minimal verification sketch follows the list):

a) Install anti-malware on every end-point system.

b) Verify that the anti-malware is current (engine and signatures).

c) Verify that the anti-malware is actually running. (And, keep verifying regularly!)
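To make (b) and (c) a little more concrete, here is a minimal sketch of the kind of endpoint check a scheduled verification job could run. It assumes a Linux host, and the process name and signature directory are placeholders borrowed from ClamAV – substitute whatever your anti-malware product actually uses, and note that most commercial products expose this status far more reliably through their own management consoles.

import subprocess
import time
from pathlib import Path

# Placeholders - substitute your product's process name and signature location.
# This only checks presence and signature freshness, nothing more.
AV_PROCESS_NAME = "clamd"
SIGNATURE_DIR = Path("/var/lib/clamav")
MAX_SIGNATURE_AGE_DAYS = 3

def av_process_running(name: str) -> bool:
    """True if a process with this exact name is running (uses pgrep)."""
    result = subprocess.run(["pgrep", "-x", name], capture_output=True)
    return result.returncode == 0

def signatures_fresh(sig_dir: Path, max_age_days: int) -> bool:
    """True if any file in sig_dir was modified within max_age_days."""
    files = list(sig_dir.glob("*"))
    if not files:
        return False
    newest = max(f.stat().st_mtime for f in files)
    return (time.time() - newest) < max_age_days * 86400

if __name__ == "__main__":
    running = av_process_running(AV_PROCESS_NAME)
    fresh = signatures_fresh(SIGNATURE_DIR, MAX_SIGNATURE_AGE_DAYS)
    print(f"agent running:      {running}")
    print(f"signatures current: {fresh}")
    if not (running and fresh):
        print("FLAG: this endpoint needs attention")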

3. Patching

Patching is boring. Like stamp collecting (apologies to the philatelically inclined). But consider what we know about patching and upgrading our outdated systems. In environments studied for the 2013 Solutionary Global Threat Intelligence Report, 49% of the vulnerabilities found on systems tested were at least two years old. And some were as much as 10 years old.

Yeah, 10-year-old, unpatched vulnerabilities.

If it takes an organization more than two years to apply an available patch to a known vulnerability, doesn't that say something about how effective businesses are at closing holes? Does it take two years for attackers to identify vulnerabilities and start taking advantage of them (okay, that was a rhetorical question)? The reality is that old systems have vulnerabilities, and if you are not patching those vulnerabilities, your attackable footprint is growing, not shrinking. Here's a hint: How do you think attackers feel about finding old systems which include unpatched, six-year-old vulnerabilities for which they have had a very mature, very effective exploit for about five and a half years? Can you say “kid in a candy store”?
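One low-effort way to see how big your own version of this problem is: take an export from whatever vulnerability scanner you already run and sort the findings by age. The sketch below assumes a hypothetical CSV export with host, CVE ID and published-date columns – adjust the column names to whatever your scanner actually produces.

import csv
from datetime import datetime, timedelta

# Hypothetical scanner export with columns: host, cve_id, published_date
SCAN_EXPORT = "vuln_scan_export.csv"
AGE_THRESHOLD = timedelta(days=2 * 365)

def old_findings(path: str):
    """Yield (host, cve_id, age_in_days) for findings older than the threshold."""
    now = datetime.now()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            published = datetime.strptime(row["published_date"], "%Y-%m-%d")
            age = now - published
            if age > AGE_THRESHOLD:
                yield row["host"], row["cve_id"], age.days

if __name__ == "__main__":
    for host, cve, days in old_findings(SCAN_EXPORT):
        print(f"{host}: {cve} has been public for {days} days - why is it still here?")

Anything that falls out of that report has, by definition, been fixable for years.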

We know at least some of the recent retail breaches were assisted by the internal availability of old, unpatched systems which included well-known vulnerabilities. If (insert old application or service here) had been phased out in favor of more current systems, or at least had been patched to more current versions, would the lateral movement of the breach have even been possible? We will never know, but we can suspect that, at the very least, it would have been more difficult, and potentially noisier. And “more difficult and noisier” would most likely have increased the chances that a breach would have been detected earlier.

Maybe. At least if we are looking for it.

4. Monitoring

Some of these breaches had been in progress for quite some time, with initial system compromises sometimes occurring months before the breach became a known threat. Some of these breaches had even been flagged by anti-malware and IDS systems, but the alerts were ignored.

Again, we cannot say with certainty, but it appears that there was a significant amount of internal compromise in some of these breaches. The breaches typically included compromise of a variety of systems distributed across the organization's internal network. Chances are that the spread of those compromises was conducted through a variety of network attacks, and that those systems were involved in communications (downloading of malware, uploading of data) for some time. What do you think the chances are that those communications could have been identified by an active monitoring system which had initially baselined the victim environment? The monitoring builds a profile of what the organizational environment normally looks like – and then, suddenly, the environment looks different. Systems which had never spoken with each other are suddenly exchanging large amounts of information. Or multiple distributed systems are suddenly talking with a few centralized systems. Or previously quiet internal systems are suddenly talking to external systems. If you were watching for it, that aberrant behavior would raise some flags. But if you aren't looking, you are not going to see it.
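To illustrate the baseline-then-alert idea in its simplest possible form, the sketch below compares today's internal flow records against a previously captured baseline of source/destination pairs and reports anything new. Real environments would do this with NetFlow/IPFIX collectors, an NDR tool or a SIEM; the file names and columns here are assumptions purely for illustration.

import csv

# Hypothetical flow exports with columns: src_ip, dst_ip
BASELINE_FILE = "flows_baseline.csv"
TODAY_FILE = "flows_today.csv"

def load_pairs(path: str) -> set[tuple[str, str]]:
    """Load the set of (source, destination) pairs seen in a flow export."""
    with open(path, newline="") as f:
        return {(row["src_ip"], row["dst_ip"]) for row in csv.DictReader(f)}

if __name__ == "__main__":
    baseline = load_pairs(BASELINE_FILE)
    today = load_pairs(TODAY_FILE)
    new_pairs = today - baseline
    for src, dst in sorted(new_pairs):
        print(f"NEW CONVERSATION: {src} -> {dst} (not seen in baseline)")
    print(f"{len(new_pairs)} previously unseen conversations out of {len(today)} today")

A toy like this will be noisy on day one; the point is the principle – you cannot recognize “different” until you have recorded “normal.”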

5. Active Response

The first time I ever worked on an incident response engagement, one of the first things I found out when I got onsite was that the targeted organization effectively had no response management plan. Some of their techies had some thoughts on how to mitigate damage, but they had no organizational plan on how they were truly going to react to the attack and restore normal operations. Their first “response” activity was to have a meeting to start talking about how to deal with what they were pretty sure was a breach, but there was even some debate about that.

As a matter of fact, in about 75% of incident response engagements my company sees, the client does not have a functional plan. Just consider two options for an incident response plan:

a) Predefine an incident response plan. Define the incident response team, along with their roles and responsibilities. Document contact information for outside contacts, like tech support at your ISP, and define how they fit in the process. Define any skill sets which may be required but do not exist within your organization, and how they will be included. Define your communications process – how you are going to effectively communicate during the incident. Define criteria to declare when an incident has started, as well as when the incident has ended. Then test everything to make sure it all works. Keep in mind that this is a dramatic simplification of a real incident response plan.

Then, when an incident strikes, you execute the plan, tweaking the things which don't fit perfectly. You contact the predefined team, which sets up appropriate communications and takes action against the breach. Technical response can begin almost immediately, stop the attack, and transition into both evidence gathering and restoration of operational capability, as appropriate. Your actual response starts within minutes.

b) Or, you do not predefine an incident response protocol, and your technicians spend hours on conference calls blaming each other and trying to figure out what is happening, and management starts talking about which systems to worry about first, while you try to define how you are going to recover in the middle of a high-stress, critical situation. And that means doing all the stuff which would have been in the plan anyway, but doing it after the fan has already been hit.

Does one of those two options sound more effective? Good, because it is.
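One small way to keep option (a) from gathering dust in a binder is to hold the skeleton of the plan in a form you can review, test and nag people about. The sketch below is purely illustrative – the roles, contacts and criteria are placeholders, not a template from any standard, and a real plan covers far more (communications, evidence handling, legal, PR, recovery).

# Roles, contacts and criteria are placeholders - a real plan carries much more.
IR_PLAN = {
    "roles": {
        "incident_commander": {"name": "TBD", "phone": "TBD"},
        "technical_lead": {"name": "TBD", "phone": "TBD"},
        "communications_lead": {"name": "TBD", "phone": "TBD"},
    },
    "external_contacts": {
        "isp_support": {"name": "TBD", "phone": "TBD"},
        "forensics_retainer": {"name": "TBD", "phone": "TBD"},
    },
    "declaration_criteria": [
        "confirmed malware on a cardholder-environment system",
        "confirmed unauthorized access to a critical system",
    ],
    "closure_criteria": [
        "attacker access removed and verified",
        "operations restored and monitored for recurrence",
    ],
}

def plan_gaps(plan: dict) -> list[str]:
    """Return the contact fields still set to 'TBD' - homework to finish before an incident."""
    gaps = []
    for section in ("roles", "external_contacts"):
        for role, contact in plan[section].items():
            for field, value in contact.items():
                if value == "TBD":
                    gaps.append(f"{section}.{role}.{field}")
    return gaps

if __name__ == "__main__":
    for gap in plan_gaps(IR_PLAN):
        print(f"Fill in before you need it: {gap}")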

Can We Learn?

Yes, it is obviously more complicated than a “Top 5” list, and we don't have perfect answers. As we always say, there is no magic bullet which will make an organization impervious to cyberattacks. But, based on a high-level review of the types of breaches we have seen over the past year, it certainly seems reasonable that if we consider a short list like this, we should be able to see opportunities to make our environments more resilient to attack. And if we can make attacks harder, noisier, less invasive and easier/faster to recover from, that sounds like progress.
