Artificial Arms Race: What Can Automation and AI Do to Advance Red Teams?

The best Red Team engagements are a balanced mix of technology, tools and human operators.

While technology platforms have proven capable of simulating certain types of attacks and testing the efficacy of certain defensive products and measures, the gold standard for a security stress test is a human team of expert operators. Red Teams employ everything from social engineering to advanced exploits in concert to achieve an objective and assess organizational security. Recent data reveals that nearly two-thirds of organizations employ Red Teams in some capacity, with investment set to climb much higher. However, while there is no replacement for human ingenuity and agility, adversaries are constantly experimenting with greater automation and tooling to attack and evade at scale. So, what is the current state of the art in automation and learning technologies in cybersecurity, and how can these technologies give human operators an advantage?

Ready Team?

Before we dive into the evolution of Red Teams, it would behoove us to take a look at the current state of Red Team programs, and at which organizations the journey, and this discussion, apply to. Most Red Team engagements are complex, and for an immature security program the results can produce security and sensory overload. Security measures will most likely fail spectacularly, and the organization will not have the capacity to discern what failed and how to approach improvement. An immature organization would be better served by focused penetration testing of prioritized applications and/or environmental elements to eliminate vulnerabilities in a measured fashion.

So who is ready? The threshold for Red Team readiness is a well-defined and continuously maintained and tested security program that comprises:

  • A full organizational commitment, and ability to act on Red Team output
  • An information security governance framework
  • Clear, enforced security policies and procedures
  • A current and tested incident response program
  • A comprehensive vulnerability management program

Today’s Red Team engagements – from social engineering, to attack simulation, to tabletop exercises – are robust and rigorous engagements that emulate the full range of activities a persistent attacker would likely undertake to achieve a goal, all driven by business-specific scenarios. At the high end, this also includes full physical security assessments and even ransomware attack emulations. Some engagements are so comprehensive that customer organizations present a scale of size and scenarios that challenges even the largest, most skilled teams. Enter technology.

“Avenging” Automation

As stated at the outset, the true strength and value in a Red Team engagement is the “Team.” The best Red Team engagements are a balanced mix of technology, tools and human operators, not only to cover the full gamut of activities – something technology can’t do alone…yet (more on that later) – but also to validate both the severity of potential compromise and the strength and integrity of organizational security controls.

When we use the word “automation,” many think simply of removing menial tasks from human operators. But done correctly, automation can provide much greater value. In fact, it’s probably better to view it not as automation, but augmentation. Think of putting a Red Team in an Iron Man suit (without Jarvis…for now) that improves speed and strength, and adds the ability to run processes and capabilities in parallel.

Now, before we get into augmenting Red Teams, one critical point in this discussion that separates true Red Teaming from Penetration Testing is the need to mirror the exact tactics, tools and procedures of real-world adversaries. For the emulation to be effective and a truly realistic representation of risks and threats, a Red Team should only be using the exact same tools and levels of automation being used by attackers. So, while there are many things that can be automated, depending on the engagement scenario, not everything will be automated.


With that caveat in mind, some of the most common and most advantageous Red Team operations to benefit from automation include:

  • Asset discovery – mapping out all the systems, networks, and applications that represent the organizational attack surface, and the interconnections that can be leveraged in achieving the goals of engagement.
  • Open Source Intelligence (OSINT) gathering – collecting external information regarding systems, business processes and partners, or even personal information on executives and employees that could provide knowledge to leverage for deception or exploitation.
  • Full Ransomware attack simulation – at the highest end of Red Team activities, safely executing and emulating the actual disruption caused by a real-world attack.
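To make the first item concrete, the discovery phase is often automated with simple tooling: expand a wordlist into candidate hostnames, then resolve them to find the live attack surface. The sketch below is a minimal, hypothetical illustration; the domain, wordlist and function names are assumptions for the example, not any specific Red Team tool:

```python
import socket

def candidate_hosts(domain, wordlist):
    """Expand a wordlist into candidate hostnames to probe."""
    return [f"{word}.{domain}" for word in wordlist]

def resolve_live(hosts):
    """Return only the hostnames that resolve in DNS --
    a rough first cut of the live attack surface."""
    live = {}
    for host in hosts:
        try:
            live[host] = socket.gethostbyname(host)
        except socket.gaierror:
            pass  # no DNS record; not part of the surface
    return live

if __name__ == "__main__":
    # Hypothetical target domain, for illustration only
    candidates = candidate_hosts("example.com", ["www", "mail", "vpn", "dev"])
    print(resolve_live(candidates))
```

In practice a Red Team would chain this with certificate-transparency logs, cloud asset inventories and port scanning, but the principle is the same: machines enumerate at scale, humans decide what to pursue.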

A “Vision” for Jarvis

So automation is more of an exoskeleton than a full Iron Man suit in augmenting Red Team capabilities, but Artificial Intelligence (AI) holds the promise of bringing Jarvis into the equation. Surprising to no one, AI isn’t just chatting when it comes to security. AI is being used to help shorten the Red Team runway to actual attack emulation.

Reflecting the qualification in the previous section about emulating the adversary’s methods and tools: while current AI use in actual Red Team deployments is limited, the criminals being emulated are already exploring its potential. They are employing AI in phishing to localize language and grammar conventions in text, voice and video to minimize suspicion, and even leveraging deepfakes in vishing campaigns. The goal, of course, with any criminal enterprise is to maximize opportunity and ROI.

So where does AI in the Red Team playbook currently reside? The most common current uses of AI in Red Team activities include:

  • Editorial assistance in generating pretexting content and phishing emails
  • Custom exploit development support
  • Creating assets for a Red Team to project a familiar or trustworthy profile — from fake images, profiles and social media accounts to a false front digital infrastructure for fake companies and vendors.
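The pretexting workflow in the first item typically starts with a merge step: OSINT-derived fields gathered earlier in the engagement are substituted into a scenario template, and an LLM (or a human editor) then polishes tone and localization. A minimal, hypothetical sketch of that merge step, assuming made-up field names and an illustrative template:

```python
from string import Template

# Hypothetical pretext template; in practice an LLM or human editor
# would refine tone and localization after the merge.
PRETEXT = Template(
    "Hi $first_name,\n\n"
    "This is $sender from the $vendor support desk. We noticed an issue "
    "with your $system account and need you to confirm your details at "
    "$link before end of day.\n"
)

def build_pretext(osint):
    """Merge OSINT-derived fields into the pretext template."""
    return PRETEXT.substitute(osint)

if __name__ == "__main__":
    # Illustrative values only -- in an engagement these come from OSINT
    print(build_pretext({
        "first_name": "Alex",
        "sender": "Jordan",
        "vendor": "Acme IT",
        "system": "VPN",
        "link": "https://example.test/verify",
    }))
```

The design point is the division of labor: automation handles the repeatable merge at scale, while the human operator (or, increasingly, an AI assistant) supplies the judgment about which scenario will be believable to which target.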

And while humans for the foreseeable future will remain a critical piece in the Red Team loop to analyze, prioritize and ultimately pull the attack levers, it’s not hard to conceive of more advanced AI involvement on the horizon, including:

  • Integrated and adaptable deepfake campaigns that can respond to conversation
  • Self-service, automated OSINT collection and correlation based on customer input
  • Full self-service, business scenario-based attack simulations
  • Full artificial “sock puppet” personalities and alias accounts with history and a digital footprint for greater “authenticity.”
  • Adaptive attacks at scale to circumvent and overcome defensive system responses, such as email filtering and EDR.
Written By

Tom Eston is the VP of Consulting and Cosmos at Bishop Fox. Tom's work over his 15 years in cybersecurity has focused on application, network, and red team penetration testing as well as security and privacy advocacy. He has led multiple projects in the cybersecurity community, improved industry standard testing methodologies and is an experienced manager and leader. He is also the founder and co-host of the podcast The Shared Security Show; and a frequent speaker at user groups and international cybersecurity conferences.
