The Threat Landscape is Evolving Faster Than the Usual Rate of Security Review
Leveraging threat intelligence to improve an organization’s security posture should be an essential component of any security strategy. Yet as I spend time with organizations around the world discussing their security challenges, I am surprised by how few actually do this.
Throughout the year, security-focused organizations produce a growing number of threat reports – and they come annually, quarterly, monthly, weekly, and even daily. These reports often contain critical information about the latest trends, targets, and tactics being used by the cybercriminal community. In addition, active threat feeds from security researchers, vendors, and regional and vertical organizations can be leveraged by tools such as SIEMs and integrated into SOCs to ensure that systems are continuously tuned to the latest threat trends.
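As a simple illustration of how an active feed might be consumed, the sketch below pulls a plain-text indicator feed and writes a local blocklist that a SIEM, proxy, or firewall could ingest. The feed URL and the one-indicator-per-line format are hypothetical placeholders; real feeds typically expose STIX/TAXII or a vendor-specific API.

```python
# Minimal sketch: pull a plain-text indicator feed and build a local blocklist.
# FEED_URL and the one-indicator-per-line format are hypothetical; production
# feeds usually expose STIX/TAXII or a vendor-specific API instead.
import urllib.request

FEED_URL = "https://example.com/threat-feed/latest.txt"  # placeholder feed


def fetch_indicators(url: str) -> set:
    """Download the feed and return a de-duplicated set of indicators."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    indicators = set()
    for line in lines:
        entry = line.strip()
        # Skip blank lines and comments so the blocklist stays clean.
        if entry and not entry.startswith("#"):
            indicators.add(entry)
    return indicators


if __name__ == "__main__":
    indicators = fetch_indicators(FEED_URL)
    # Write a simple blocklist file that a SIEM, proxy, or firewall could ingest.
    with open("blocklist.txt", "w") as f:
        f.write("\n".join(sorted(indicators)) + "\n")
    print(f"Wrote {len(indicators)} indicators to blocklist.txt")
```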
Analyzing threat trends – especially those collected from live production environments – can provide security professionals with insights into how to better protect their organizations from the latest cyber threats.
Cybercriminals work in packs
One of the most interesting insights gained from looking at recent data collected during Q1 of 2019 (PDF) is that cybercriminals tend to work in unorganized packs. If an exploit or attack vector seems to have worked for one criminal, you can safely assume that there will soon be a swarm of attacks targeting the same thing. That is a high-level trend that anyone familiar with security can see.
Large scale pack behavior
For example, WordPress is the world’s leading CMS (Content Management System) solution, used to build hundreds of millions of websites. Because data stored in websites – such as mailing lists, media galleries, online stores, and especially the financial credentials used in shopping carts – has a high black market value, WordPress is a frequent target of attacks. So, when we recorded more than 100,000 attacks targeting that application in Q1, it wasn’t a big surprise. But we also recorded a spike in attacks targeting CMS systems from other developers, including those who develop third-party plug-ins.
That information may have been overlooked by some analysts, however, because the total number of attacks targeting each of those CMS solutions was far lower than the number targeting the biggest player in the space. Unfortunately, anyone running another CMS solution who didn’t also raise their shields may have been caught unaware.
Granular pack behavior
This pack behavior is not limited to large attacks spilling over into related areas. It also shows up in some of the more granular details of attacks.
For example, nearly 60% of threats analyzed in Q1 of 2019 shared at least one domain that was used at a particular point in an attack chain, such as where malware is distributed from or where stolen data is collected. Many attacks also tend to use the same web providers over and over again. We gain two crucial insights from this information.
The first is that cybercriminals pay close attention to each other. They interact on dark web forums, share code and strategies, and even reverse engineer each other’s tools. And when something works, whether it’s an obfuscation technique or a web domain provider, it gets reused.
Second, these patterns, which are often included in any comprehensive threat analysis source, can be used to identify attacks. If traffic seems to be moving back and forth between a device and a web domain, and that domain is hosted by a provider that is frequently used by cybercriminals, it may be worthwhile to monitor that traffic more carefully.
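As a rough sketch of that idea, the following assumes you already have outbound connection logs and a watchlist of domains tied to infrastructure that keeps reappearing in attack chains. The file names and CSV columns are hypothetical and would need to be adapted to your own proxy or DNS logs.

```python
# Sketch: flag outbound connections to domains on a watchlist of infrastructure
# seen repeatedly in attack chains. The log format (CSV with timestamp,
# src_host, dst_domain columns) and both file names are hypothetical.
import csv
from collections import Counter

WATCHLIST_FILE = "suspect_domains.txt"   # domains/providers seen in prior attacks
TRAFFIC_LOG = "outbound_traffic.csv"     # columns: timestamp,src_host,dst_domain


def load_watchlist(path: str) -> set:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}


def flag_suspect_traffic(log_path: str, watchlist: set) -> Counter:
    """Count connections per (source host, domain) pair that hit the watchlist."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["dst_domain"].strip().lower()
            if domain in watchlist:
                hits[(row["src_host"], domain)] += 1
    return hits


if __name__ == "__main__":
    hits = flag_suspect_traffic(TRAFFIC_LOG, load_watchlist(WATCHLIST_FILE))
    for (host, domain), count in hits.most_common():
        print(f"{host} -> {domain}: {count} connections (review for possible C2)")
```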
Other Interesting Trends
Phishing continues to be a critical attack vector
When comparing attack information collected from web filters, researchers discovered that the vast majority of pre-compromise activity occurs during traditional work hours. Post-compromise traffic, however, shows little differentiation by day or time. Further analysis showed that this is because most attacks still require some sort of action on the part of an end user, such as clicking on a malicious link or opening an infected attachment. Once an attack has successfully breached a system, however, malware rarely requires additional user input. While some C2 applications prefer to hide in the noise of daily work-time traffic, most don’t differentiate at all.
While there are several approaches to addressing this challenge, most organizations should consider improving their security awareness training so end users can quickly identify this common attack vector. SOC and NOC sensors should also remain on during evenings, holidays, and weekends, because C2 traffic is less likely to hide in network noise when regular traffic is significantly reduced.
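A minimal sketch of that second point, assuming outbound connection logs with ISO-formatted timestamps (the file name, columns, and business-hours window are all assumptions): it simply surfaces connections made on weekends or outside working hours, where beaconing is easier to spot.

```python
# Sketch: surface outbound connections made outside business hours, when C2
# beacons stand out against reduced background traffic. The log file name,
# its columns (ISO-8601 timestamp, src_host, dst_domain), and the 08:00-18:00
# weekday window are all assumptions to adjust for your environment.
import csv
from datetime import datetime

TRAFFIC_LOG = "outbound_traffic.csv"
WORK_START, WORK_END = 8, 18  # assumed local business hours


def is_off_hours(timestamp: str) -> bool:
    """Treat weekends and anything outside business hours as off-hours."""
    dt = datetime.fromisoformat(timestamp)
    return dt.weekday() >= 5 or not (WORK_START <= dt.hour < WORK_END)


if __name__ == "__main__":
    with open(TRAFFIC_LOG, newline="") as f:
        for row in csv.DictReader(f):
            if is_off_hours(row["timestamp"]):
                print(f"[off-hours] {row['timestamp']} {row['src_host']} -> {row['dst_domain']}")
```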
Attackers are performing thorough reconnaissance before attacking
Once the poster child for broad and indiscriminate attacks, ransomware is now being tailored to the resources and security systems of its intended targets. Recent examples include an attack that didn’t bother to include any obfuscation techniques because the criminal’s initial analysis showed that security past the perimeter would not detect the ransomware.
Anomalous behavior detection and intent-based segmentation are essential in addressing these sorts of attacks. Abnormal behaviors, such as scanning and probing a system, need to be quickly identified and traced to their source, and networks need to be intelligently and intentionally segmented so that even if a breach does occur, attackers are limited to a narrow set of resources. And as always, systems need to be backed up regularly, backups should be scanned for malware and stored off network, and organizations should run backup recovery scenarios to ensure that backups are current and relevant and that the recovery process works as planned.
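As one very rough example of spotting scanning and probing behavior, the sketch below counts how many distinct hosts and ports each source touches in a flow log and flags outliers. The log format and thresholds are illustrative assumptions; a production detector would use tuned baselines and time windows.

```python
# Sketch: a crude scan detector that flags sources contacting an unusually
# large number of distinct hosts or ports. The flow-log format (CSV with
# src_ip, dst_ip, dst_port columns) and the thresholds are illustrative only;
# a real detector would tune them per network segment and time window.
import csv
from collections import defaultdict

FLOW_LOG = "flows.csv"   # hypothetical flow export
HOST_THRESHOLD = 50      # distinct destination hosts per source
PORT_THRESHOLD = 100     # distinct destination ports per source


def detect_scanners(path: str):
    hosts = defaultdict(set)
    ports = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hosts[row["src_ip"]].add(row["dst_ip"])
            ports[row["src_ip"]].add(row["dst_port"])
    for src in hosts:
        if len(hosts[src]) > HOST_THRESHOLD or len(ports[src]) > PORT_THRESHOLD:
            yield src, len(hosts[src]), len(ports[src])


if __name__ == "__main__":
    for src, n_hosts, n_ports in detect_scanners(FLOW_LOG):
        print(f"{src}: contacted {n_hosts} hosts / {n_ports} ports -- possible scan")
```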
Attackers are increasingly living off the land
Living off the land refers to the growing tendency of cybercriminals to use legitimate tools already installed on target systems to carry out their attacks rather than loading their own malware, which is more easily discovered. Because these tactics leverage approved tools and processes, living-off-the-land activity is much harder to detect.
One of the most popular tools being exploited by attackers is PowerShell, with a high volume of attacks detected during the first three months of the year. PowerShell comes pre-installed on Windows machines, making it a convenient target. However, because attackers tend to move in packs, as discussed above, we have also seen spikes in attacks targeting other tools, especially those used for system management and analysis.
Part of this is the result of digital transformation efforts, where modernizing networks means introducing new tools and processes to manage the growing number of devices and applications on the network. As a result, administrators must inventory and track the tools deployed in their environment, and regularly cross-reference that inventory against known exploits designed to compromise those tools to collect system information, modify processes, escalate privileges, generate attacks, and evade detection.
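To make the PowerShell example above concrete, here is a minimal sketch of one common heuristic for spotting such abuse: scanning process-creation logs for encoded commands, download cradles, and hidden-window flags. The log format and pattern list are assumptions for illustration, not a complete rule set, and they are not drawn from the report itself.

```python
# Sketch: scan process-creation logs for PowerShell command lines that match
# common living-off-the-land patterns (encoded commands, download cradles,
# hidden windows). The log format (CSV with timestamp, host, command_line)
# and the pattern list are illustrative heuristics, not a complete rule set.
import csv
import re

PROCESS_LOG = "process_events.csv"   # hypothetical process-creation export
SUSPICIOUS_PATTERNS = [
    re.compile(r"-enc(odedcommand)?\s", re.IGNORECASE),         # encoded payloads
    re.compile(r"downloadstring|downloadfile", re.IGNORECASE),  # download cradles
    re.compile(r"-w(indowstyle)?\s+hidden", re.IGNORECASE),     # hidden windows
]

if __name__ == "__main__":
    with open(PROCESS_LOG, newline="") as f:
        for row in csv.DictReader(f):
            cmd = row["command_line"]
            if "powershell" in cmd.lower() and any(p.search(cmd) for p in SUSPICIOUS_PATTERNS):
                print(f"[suspicious] {row['timestamp']} {row['host']}: {cmd}")
```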
Tactics for Success
Organizations also need to understand that the threat landscape is evolving faster than the usual rate of security review. Leveraging threat intelligence resources is a critical element of any security strategy. However, to be useful, intelligence needs to be proactive, dynamic, and available throughout the distributed network. That requires having a system in place where external intelligence can be correlated against real-time internal threat intelligence, enabling the entire network to stay abreast of changes in the threat landscape.
This approach helps IT security teams see new attack methods and then zero in on those places where criminals are focusing their efforts. The development and adoption of a unified security architecture strategy further enables this visibility and control to span the distributed network, and should be augmented with segmentation, not only accelerating decision-making but also closing the gap between detection and mitigation, since individual rogue devices can be isolated and assigned for remediation.
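At its simplest, the correlation between external intelligence and internal telemetry described above can be sketched as an intersection of indicator sets, with the overlap prioritized for blocking and investigation. The file names and plain-text format below are hypothetical stand-ins for feed APIs and SIEM queries.

```python
# Sketch: correlate an external indicator feed with indicators already observed
# in internal telemetry, so the overlap can be prioritized and pushed to
# enforcement points. File names and the one-indicator-per-line format are
# hypothetical stand-ins for feed APIs and SIEM queries.
EXTERNAL_FEED = "external_indicators.txt"   # e.g., domains/hashes from a vendor feed
INTERNAL_SEEN = "internal_indicators.txt"   # e.g., domains/hashes seen on the network


def load_set(path: str) -> set:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}


if __name__ == "__main__":
    overlap = load_set(EXTERNAL_FEED) & load_set(INTERNAL_SEEN)
    print(f"{len(overlap)} externally reported indicators also seen internally:")
    for ioc in sorted(overlap):
        print(f"  {ioc}  (prioritize for blocking and investigation)")
```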