Artificial Intelligence

Artificial Arms Race: What Can Automation and AI Do to Advance Red Teams

The best Red Team engagements are a balanced mix of technology, tools and human operators.


While technology platforms have proven capable of simulating certain types of attacks and testing the efficacy of certain defensive products and measures, the gold standard for a security stress test is a human team of expert operators. Red Teams employ everything from social engineering to advanced exploits in concert to achieve an objective and assess organizational security. Recent data reveals that nearly two-thirds of organizations employ Red Teams in some capacity, with investment set to climb much higher. However, while there is no replacement for human ingenuity and agility, adversaries are constantly experimenting with greater automation and tooling to attack and evade at scale. So, what is the current state of the art in automation and learning technologies in cybersecurity, and how can they give human operators an advantage?

Ready Team?

Before we dive into the evolution of Red Teams, it would behoove us to look at the current state of Red Team programs and at which organizations the journey, and this discussion, apply to. Most Red Team engagements are complex, and for an immature security program the results can produce security and sensory overload. Security measures will most likely fail spectacularly, and the organization will not have the capacity to discern what failed or how to approach improvement. An immature organization would be better served by focused penetration testing of prioritized applications and/or environmental elements to eliminate vulnerabilities in a measured fashion.

So who is ready? The threshold for Red Team Readiness is a well-defined and continuously maintained/tested security program that comprises:

  • A full organizational commitment, and ability to act on Red Team output
  • An information security governance framework
  • Clear, enforced security policies and procedures
  • A current and tested incident response program
  • A comprehensive vulnerability management program

Today’s Red Team engagements – from social engineering, to attack simulation, to tabletop exercises – are robust and rigorous engagements that emulate the full range of activities a persistent attacker would likely undertake to achieve a goal, all driven by business-specific scenarios. At the high end, this also includes full physical security assessments and even ransomware attack emulations. Some engagements are so comprehensive, and some customer organizations so large and complex in their scale and scenarios, that they challenge even the biggest, most skilled teams. Enter technology.

“Avenging” Automation

As stated at the outset, the true strength and value in a Red Team engagement is the “Team.” The best Red Team engagements are a balanced mix of technology, tools and human operators, not only to cover the full gamut of activities (some of which technology can’t yet do alone; more on that later), but also to validate both the severity of potential compromise and the strength and integrity of organizational security controls.

When we use the word “automation,” many think of simply removing menial tasks from human operators. But done correctly, automation can provide much greater value. In fact, it’s probably better to view it not as automation, but augmentation. Think of putting a Red Team in an Iron Man suit (without Jarvis…for now) that improves speed and strength, and adds the ability to run processes and capabilities in parallel.

Now, before we get into augmenting Red Teams, one critical point that separates true Red Teaming from Penetration Testing is the need to mirror the exact tactics, techniques and procedures (TTPs) of real-world adversaries. For the emulation to be effective and a truly realistic representation of risks and threats, a Red Team should use only the same tools and levels of automation that attackers themselves use. So, while many things can be automated, depending on the engagement scenario, not everything will be.


With that caveat in mind, some of the most common and most advantageous Red Team operations to benefit from automation include:

  • Asset discovery – mapping out all the systems, networks and applications that represent the organizational attack surface, and the interconnections that can be leveraged in achieving the goals of the engagement.
  • Open Source Intelligence (OSINT) gathering – collecting external information on systems, business processes and partners, or even personal information on executives and employees, that can be leveraged for deception or exploitation.
  • Full ransomware attack simulation – at the highest end of Red Team activities, safely emulating the actual disruption caused by a real-world attack.
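Asset discovery in particular lends itself to scripting. As a purely illustrative sketch (not any specific Red Team tool, and only for systems you are authorized to test), a parallel TCP connect scan in Python might look like:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def scan_ports(host: str, ports, timeout: float = 0.5):
    """TCP connect scan: return the subset of `ports` that accept connections on `host`."""
    def probe(port: int):
        # connect_ex returns 0 on success instead of raising an exception
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return port if s.connect_ex((host, port)) == 0 else None

    # Probe ports in parallel; a real engagement would rate-limit, randomize and log
    with ThreadPoolExecutor(max_workers=32) as pool:
        return sorted(p for p in pool.map(probe, ports) if p is not None)
```

Real attack-surface mapping layers DNS enumeration, service fingerprinting and application discovery on top of simple probes like this, which is exactly where automation pays off: the machine enumerates at scale while the operator decides what is worth pursuing.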

A “Vision” for Jarvis

So automation is more of an exoskeleton than a full Iron Man suit when it comes to augmenting Red Team capabilities, but Artificial Intelligence (AI) holds the promise of bringing Jarvis into the equation. Surprising to no one, AI isn’t just chatting when it comes to security; it is already being used to shorten the Red Team runway to actual attack emulation.

Reflecting the qualification in the previous section about emulating the adversary’s methods and tools: while current AI use in actual Red Team deployments is limited, the criminals being emulated are already exploring its potential. They are employing AI in phishing to localize language and grammar conventions in text, voice and video to minimize suspicion, and even leveraging deepfakes in vishing campaigns. The goal, of course, of any criminal enterprise is to maximize opportunity and ROI.

So where does AI in the Red Team playbook currently reside? The most common current uses of AI in Red Team activities include:

  • Editorial assistance in generating pretexting content and phishing emails
  • Custom exploit development support
  • Creating assets that let a Red Team project a familiar or trustworthy profile – from fake images, profiles and social media accounts to false-front digital infrastructure for fake companies and vendors

And while humans will, for the foreseeable future, remain a critical piece of the Red Team loop to analyze, prioritize and ultimately pull the attack levers, it’s not hard to conceive of more advanced AI involvement on the horizon, including:

  • Integrated and adaptable deepfake campaigns that can respond to conversation
  • Self-service, automated OSINT collection and correlation based on customer input
  • Full self-service, business scenario-based attack simulations
  • Full artificial “sock puppet” personalities and alias accounts with history and a digital footprint for greater “authenticity”
  • Adaptive attacks at scale to circumvent and overcome defensive system responses, such as email filtering and EDR


Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
