Breaches: Tales and Lessons Learned

When he was a puppy, my dog watched a lady bug crawl across a window on our back porch. Eventually, the lady bug fell, and as it lay upside down on the carpet flipping its wings and wiggling its legs, my dog snapped it up, licking his chops. He immediately started gacking, and soon spit the lady bug back out, scraping the roof of his mouth with his tongue. Later that day, another lady bug dropped next to him, and he kept a very wary eye on it, eventually getting up and moving, then watching the bug from a few feet away. My dog learned from one encounter that he did not like the flavor of lady bugs. He remembers that lesson to this day.

We all like horror stories, right? We learn best from examples. Maybe we like to call them “lessons learned” or “case studies”; either way, some details about breaches help, right? Since I have been involved with incident identification and response several times over the past few decades, perhaps I have a story or two to tell.

Stories of Data Breaches

My first incident was actually a copyright infringement case brought by a very large software publisher. It seems some of their software was circulating in the public domain before it was officially released. On their own, they had traced at least some of the leak to one particular suspect. As part of the lawsuit, we reviewed the contents of the suspect’s computer, as well as his email. Finding where the software was on the hard drive was tedious but not difficult. The files were dated from several months before the release of the software. We also noted a complete lack of any hacking tools or software. It quickly became obvious that while the suspect had the software, he had not broken into anything to get it. We eventually found an email archive that showed when he had received a gzip attachment from the private email of an employee of the software publisher. Oops. This started as a multi-million-dollar lawsuit and ended with the “suspect” removing all of the software and paying some lawyer fees. I never did hear exactly what happened with the employee. I was hoping that when the software giant said they “terminated him immediately” they actually meant “fired”…

One of the simplest breaches I worked on was a medical device manufacturing firm with a rampant virus outbreak that included dozens of crashed systems. Their initial fear was that they had been hacked, and that the damage was so widespread they couldn’t trust any of their internal systems. They worried that their intellectual property had been stolen. Within minutes we identified that the virus had been delivered around the company via email, from the CEO. The CEO had received a Trojan horse in his Yahoo email account and opened it on his work computer. The Trojan horse downloaded a virus and emailed it to everyone in his address book, which was everyone in the company. Of course, if you get an email from the CEO, you read it. In less than an hour that morning, more than 90% of the company had opened and read their email from the CEO. As a result, they were effectively shut down for the better part of three days as they cleaned and rebuilt systems.

One of the most entertaining breaches I worked on was for a high-tech company that was sure that it had been hacked. Their website had been defaced and included postings of the salaries of all of the managers and executives in the company. This one turned out to be a little trickier, as we could not find any aberrant behavior. All logs looked normal. IDS systems showed no breaches. The firewall did not show any attacks. We had spent the better part of the day before we thought to check plain old external logins. We found an admin logon for the server that pretty much coincided with the time that the web page had been defaced. Then we found a VPN logon that immediately preceded that one. This was a VPN logon of a former IT systems administrator, who had been asked to leave 14 months earlier because of gross insubordination. It turns out that in the process of his “departure” no one had ever collected his VPN token, or revoked any of his system accesses. He had just stumbled across his old VPN token in a box at home and decided to see if it still worked. His default login dropped him onto the internal system that also held the list of salaries for key employees. So, while he was there… Unfortunately, the company had no formal termination process at all. It turns out that employees were constantly leaving with corporate laptops, and all of their accesses intact.
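For what it’s worth, the detective work in that case was mundane timeline correlation: pull the VPN and server login records and look for sessions that began just before the page was modified. A minimal sketch of that idea in Python, using made-up log data and field names (the article does not describe the real formats), might look like this:

from datetime import datetime, timedelta

# Hypothetical events; in practice these would be parsed from VPN and web server logs.
vpn_logins = [
    ("former_admin", datetime(2010, 6, 3, 22, 41)),
    ("night_operator", datetime(2010, 6, 3, 8, 5)),
]
defacement_time = datetime(2010, 6, 3, 22, 55)  # e.g. the defaced page's file timestamp

WINDOW = timedelta(hours=1)  # how far back from the defacement to look

for user, login_time in vpn_logins:
    # Flag any VPN session that began shortly before the page was changed.
    if timedelta(0) <= defacement_time - login_time <= WINDOW:
        print(f"VPN login by {user} at {login_time} shortly before defacement at {defacement_time}")

Nothing about this is sophisticated; the point is that the answer was sitting in plain old login logs the whole time.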

The most pervasive breach I have ever personally encountered was at a large, distributed manufacturing company. They had engineering and design divisions spread around the country, along with several manufacturing sites. One site in particular was their default site for standard production runs. On the last day of the month, a batch job would kick off that copied down a database containing specifications for all of the items to be manufactured the next month. The factory software would read the downloaded database and schedule out the manufacturing jobs for the month, reordering raw materials in a just-in-time process. All of a sudden, the database did not finish downloading from the engineering site to the manufacturing site. Their system was built to run only with the downloaded data, so for the last four days of the month the factory sat idle; they had completed their downloaded orders. Management was irate, but figured it was an aberration, since the process had always worked fine before. The next month, the same thing happened, except the factory was now idle before noon on the 22nd. After many tantrums, investigations were launched.

It turns out that the root cause was the night shift at the factory. The factory’s Internet connection was provided via a dedicated line back through headquarters. The night shift had started browsing so much porn that they took up too much of the available bandwidth for the database to download through the same pipe. The night shift employees had all received an email offering them free subscriptions to an invitation-only adult site, and they all succumbed to the temptation. The subscription was activated by the executable that had come with the email invitation. The attackers had pulled their email addresses off an internal system to which they already had access. And, since the access was provided via an executable, every one of those systems was rooted. By the time we were looking at their internal systems, we found literally hundreds of rootkits, firewalls with “special rules,” IDS systems that either had “exceptions” or were simply turned off, disabled anti-virus software, and more.

Their internal environment was in such chaos that the attackers actually had more control over internal systems than their own IT people did. We never did determine for sure how the breach had started, but an unpatched, poorly configured IIS server was very involved in the process. It appeared that the breach had really started during a distributed denial-of-service attack against the company some eight months earlier. The attack had been effectively hidden in all the noise generated by rampant firewall and IDS reporting. The company’s ultimate response was to field new firewall systems on all of their external network connections, but to leave all of the other internal systems running. They added firewall rules to strictly limit inbound and outbound services and, against our advice, started trying to clean internal systems where they sat. They felt they simply could not afford to take any systems off the air long enough to rebuild them. The few IDS systems they had reported alerts, but they prized operations so far ahead of security that the alerts took a back seat to keeping the systems running. Oddly enough, part of the reason they were struggling with operations was that so many of their systems had been compromised that they could not keep up.

The last time I spoke with them, they had been cleaning systems for nearly three years and were still fighting new infections and rootkits. In some cases, they had “cleansed” the same systems three or four times. The most interesting part of this for me is that if it had not been for the rampant porn, the company might never have caught this at all. I have a hard time deciding which is worse: operating in open denial, or being truly ignorant of their condition.

This last story, which was based on a long-term, dedicated attack, seems to me to be the exception rather than the rule. It is purely anecdotal, but my best guess is that for every company I have worked with that suffered a technical attack like this last one, I could identify at least ten companies that were breached because the CEO opened an executable from his personal email account, or because their termination process was inadequate.

So, it’s good to hear about someone else’s experiences (or horror stories). If we can look at what happened to another organization, maybe we can figure out that the lady bug tastes bad without actually popping one into our mouths.

Just to make sure we do learn from the story: they are crunchy, bitter, and taste kind of flowery…
