SecurityWeek

5 Critical Steps to Prepare for AI-Powered Malware in Your Connected Asset Ecosystem

AI-powered attacks will become progressively more common, and a well-rounded security approach involves more than simply managing incidents effectively.


OK, so let’s get one thing straight at the outset. Are cybercriminals actively using AI to target you or your business right now? Very, very rarely.

Despite the hype and the seemingly endless stream of news stories that may have led you to believe otherwise, actual hard evidence of criminal use of AI is as rare as the proverbial hobbyhorse-based fertilizer. Criminal interest undeniably abounds, but criminals do not yet have the toolset or the skills to act on it.

Over the course of 2023, multiple Large Language Model (LLM)-based services were advertised on cybercriminal forums and Telegram channels under names such as WolfGPT, Evil GPT, and DarkBARD. However, even these services, it seems, are either scams in themselves or simply short-lived wrapper services that sell access to a legitimate LLM through stolen credentials and jailbroken user prompts. The possible exception was WormGPT, a service that was permanently shuttered after a mere two months, when the creators had enough of the media exposure it garnered. Voice synthesis has already been used in a few fake kidnap extortion attempts, and possibly in one or two Business Email Compromise attacks as well, but that’s about it.

So why talk about strategies and steps to prepare for AI-powered malware? Well, to quote Joseph Heller’s Catch-22, “Just because you’re paranoid doesn’t mean they aren’t after you”.

AI-powered malware (as opposed to AI-generated) represents a new frontier in the ever-expanding portfolio of malicious cyber capability. To me, this category will encompass a wide range of sophisticated techniques in which artificial intelligence is used to enhance the effectiveness and stealth of malicious activities, including:

Fake content generation

The capabilities offered by Generative Adversarial Networks (GANs) and LLMs will allow threat actors to create entirely fake, but legitimate-looking, image and video-based content for social media. This content, used in combination with LLM-enhanced messaging, has the potential to convince the unwitting or the highly targeted to click on malicious links, and to assist in further propagation through organic sharing.

Enhanced phishing lures


AI-powered attacks go beyond traditional phishing methods. AI-powered tools make light work of the research and footprinting activities that were previously reserved for more sophisticated attacks. AI-enhanced phishing will be highly targeted and well crafted. These emails will not only be more convincing, but threat actors will also be able to automate the fine-tuning of content and tactics in near real-time. This level of sophistication increases the likelihood of successful social engineering attacks and credential harvesting.

AI-Generated or Assisted Malware

In more advanced scenarios, AI will be directly involved in the development and execution of malware. While contemporary examples are rare, such as the Black Mamba proof of concept from Hyas Labs, they do showcase the potential of AI to assist in crafting malware. In reality, the Black Mamba PoC offered little in the way of new functionality; abuse of legitimate sites for C2, polymorphism, and writing malicious code solely to memory are hardly innovative.

There is an argument to be made, though, that this approach was already hamstrung by restricting itself to human thought processes. Asking an AI to develop an idea formulated by a human fails to maximize the innovative potential of AI. Outside of this paradigm, the potential for the development of AI-assisted or AI-generated malware that is not only evasive but can adapt its behavior based on the target environment is real. This will pose a significant challenge for traditional cybersecurity measures, equally constrained by human understanding.

The potential threat isn’t limited to conventional computing systems. Internet of Things (IoT) and Operational Technology (OT) devices, integral components of many connected ecosystems, are increasingly targeted for sophisticated cyberattacks. AI-powered malware could just as easily exploit vulnerabilities in IoT devices, gaining unauthorized access to networks. Malicious actors can leverage AI to craft attacks tailored to the specific vulnerabilities of IoT devices, potentially causing disruptions or unauthorized access.

OT involved in industrial processes and critical infrastructure is equally at risk. AI-powered malware may target these systems, leading to disruptions in manufacturing, energy production, or even transportation. The ability of AI to analyze and adapt to intricate OT environments poses a unique challenge, overcoming the knowledge gap that has for so long been a barrier to the widespread dissemination of such attacks.

A comprehensive strategy that recognizes the distinct challenges posed by AI-powered malware in these environments is crucial to ensure the resilience and security of connected ecosystems in the future. Here are five critical steps to optimize defenses and prepare for the challenge.

Complete visibility

The ability to see and understand every connected asset in your environment is crucial. It enables the detection of anomalous behavior, identification of risk, and a swift response to potential threats. Effective security relies on the solid foundation of visibility. Know good; detect bad.
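The "know good; detect bad" principle can be sketched as a simple inventory comparison. This is a minimal illustration, not the output of any real discovery tool; the asset records and MAC addresses are invented for the example:

```python
# Minimal sketch of "know good; detect bad": compare the devices observed
# on the network against a known-good asset inventory and flag anything new.
# Inventory entries and MAC addresses below are purely illustrative.

KNOWN_GOOD = {
    "aa:bb:cc:00:00:01": {"type": "workstation", "owner": "finance"},
    "aa:bb:cc:00:00:02": {"type": "plc", "owner": "plant-2"},
}

def triage(observed_macs):
    """Split observed devices into known assets and unknowns to investigate."""
    known, unknown = [], []
    for mac in observed_macs:
        (known if mac in KNOWN_GOOD else unknown).append(mac)
    return known, unknown

known, unknown = triage(["aa:bb:cc:00:00:01", "de:ad:be:ef:00:99"])
print(unknown)  # -> ['de:ad:be:ef:00:99']
```

In practice the inventory would be populated by continuous passive and active discovery rather than a static dictionary, but the triage logic is the same: anything not in the known-good baseline is an investigation candidate.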

Continuous Risk Assessment

Traditional risk assessments are point-in-time evaluations, but as AI algorithms learn and adapt, the risks to a system will change dynamically. Continuous risk assessment means evaluating security posture in real time, identifying changes, anomalies, and emerging risk, and adapting defenses accordingly.
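As a toy illustration of the idea, a continuous assessment loop might recompute a score like this every time telemetry changes, rather than once a quarter. The risk factors and weights here are assumptions for the sketch, not an established scoring model:

```python
def risk_score(asset):
    """Toy continuous risk score, recomputed whenever telemetry changes.
    Factors and weights are illustrative assumptions, not a standard model."""
    score = 0
    score += 40 if asset["unpatched_critical_cves"] else 0
    score += 25 if asset["internet_exposed"] else 0
    score += 20 if asset["anomalous_traffic"] else 0
    score += 15 if not asset["edr_present"] else 0
    return score

# A device whose traffic suddenly turns anomalous gets re-scored immediately.
camera = {"unpatched_critical_cves": True, "internet_exposed": True,
          "anomalous_traffic": False, "edr_present": False}
print(risk_score(camera))  # -> 80
camera["anomalous_traffic"] = True
print(risk_score(camera))  # -> 100
```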

Minimize Attack Surface

AI-powered attacks will often exploit vulnerabilities in systems and processes. By minimizing the attack surface, organizations can significantly reduce the potential vectors for attack and make it more challenging for malicious actors to find and exploit weaknesses. This means not only securing unnecessary services, closing unused ports, and limiting user privileges, but also evaluating business processes that socially engineered attacks may seek to exploit.
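At its simplest, surface reduction is a diff between what is exposed and what the business actually needs. The port allowlist below is a hypothetical example:

```python
# Hypothetical allowlist: the only services this host should expose.
ALLOWED_PORTS = {22, 443}

def excess_ports(listening_ports):
    """Return listening ports not on the approved list --
    candidates for closure to shrink the attack surface."""
    return sorted(set(listening_ports) - ALLOWED_PORTS)

# e.g. RDP and a forgotten dev server are flagged for closure
print(excess_ports([22, 443, 3389, 8080]))  # -> [3389, 8080]
```

The same diff-against-policy approach extends beyond ports to running services, user privileges, and even exploitable business processes, as noted above.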

Build a Defensible Environment

A defensible environment is one that is designed with security in mind from the ground up, or one that has been holistically re-evaluated from the same perspective. Strong authentication mechanisms, encryption of sensitive data, and properly segmented networks all help to mitigate and contain potential breaches. In the face of AI-powered attacks, a defensible environment means that even if one part of the system is compromised, the overall integrity of the network remains resilient, making it more challenging for attackers to move laterally and escalate privileges.
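Segmentation ultimately reduces to a default-deny policy between zones. A minimal sketch, with hypothetical zone names; real enforcement would of course live in firewalls and network fabric, not application code:

```python
# Hypothetical zone-to-zone policy for a segmented network:
# deny by default, allow only the flows the business needs.
POLICY = {
    ("corp", "dmz"): True,   # users may reach published services
    ("dmz", "corp"): False,  # a compromised DMZ host cannot pivot inward
    ("corp", "ot"): False,   # no direct corp-to-OT access
    ("ot", "corp"): False,
}

def flow_allowed(src_zone, dst_zone):
    """Default deny: any flow not explicitly permitted is blocked,
    which is what limits lateral movement after a breach."""
    return POLICY.get((src_zone, dst_zone), False)

print(flow_allowed("dmz", "corp"))  # -> False
```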

Manage, Automate, Monitor, Respond, Archive

AI-powered attacks will become progressively more common, and a well-rounded security approach involves more than simply managing incidents effectively. Automation helps in responding to threats at machine speed, and archiving data is crucial for post-incident analysis and further response. The knowledge that a specific asset has been compromised facilitates immediate proactive steps in securing all other devices that fit that same risk profile.
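That last point, using one confirmed compromise to proactively secure everything with the same risk profile, can be sketched as a simple lookup. The asset records are illustrative:

```python
# Illustrative asset records: device type plus firmware version
# stand in for a fuller risk profile.
ASSETS = [
    {"id": "cam-01", "profile": ("ip-camera", "fw-1.2")},
    {"id": "cam-02", "profile": ("ip-camera", "fw-1.2")},
    {"id": "hmi-01", "profile": ("hmi", "fw-3.0")},
]

def same_risk_profile(compromised_id):
    """After a compromise, return every other asset with an identical
    risk profile, so it can be proactively isolated or patched."""
    profile = next(a["profile"] for a in ASSETS if a["id"] == compromised_id)
    return [a["id"] for a in ASSETS
            if a["profile"] == profile and a["id"] != compromised_id]

print(same_risk_profile("cam-01"))  # -> ['cam-02']
```

An automated response pipeline would feed this list straight into quarantine or patching workflows, and archive the incident data for later analysis.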

These five points represent an outline for effective preparation to defend against future AI-powered attacks. They emphasize proactive measures, adaptability, and a holistic approach to cybersecurity. As AI continues to evolve, a defensive strategy that is equally dynamic and adaptable is essential for organizations to stay ahead of emerging threats.

Written By

Rik Ferguson is the Vice President of Security Intelligence at Forescout. He is also a Special Advisor to Europol’s European Cyber Crime Centre (EC3), a multi-award-winning producer and writer, and a Fellow of the Royal Society of Arts. Prior to joining Forescout in 2022, Rik served as Vice President Security Research at Trend Micro for 15 years. He holds a Bachelor of Arts degree from the University of Wales and has qualified as a Certified Ethical Hacker (C|EH), Certified Information Systems Security Professional (CISSP) and an Information Systems Security Architecture Professional (ISSAP).
