Artificial Intelligence

5 Critical Steps to Prepare for AI-Powered Malware in Your Connected Asset Ecosystem

AI-powered attacks will become progressively more common, and a well-rounded security approach involves more than simply managing incidents effectively.

OK, so let’s get one thing straight at the outset. Are cybercriminals actively using AI to target you or your business right now? Very, very rarely.

Despite the hype and the seemingly endless stream of news stories that may have led you to believe the opposite is already the case, actual hard evidence of criminal use of AI is as rare as the proverbial rocking-horse manure. Though criminal interest undeniably abounds, attackers so far lack the toolset or skills to act on it.

Over the course of 2023, there have been multiple Large Language Model (LLM)-based services advertised on cybercriminal forums and Telegram channels, pushing services with names such as WolfGPT, Evil GPT or DarkBARD. However, even these services, it seems, are either scams in themselves or simply short-lived wrapper services that sell access to a legitimate LLM through stolen credentials and jailbroken user prompts. The possible exception to all this was WormGPT, a service that was permanently shuttered after a mere two months when the creators had enough of the media exposure they garnered. Voice synthesis has already been used in a few fake kidnap extortion attempts and possibly in one or two Business Email Compromise attacks as well, but that’s about it.

So why talk about strategies and steps to prepare for AI-powered malware? Well, to quote Joseph Heller’s Catch-22, “Just because you’re paranoid doesn’t mean they aren’t after you”.

AI-powered malware (as opposed to AI-generated) represents a new frontier in the ever-expanding portfolio of malicious cyber capability. To me, this category will encompass a wide range of sophisticated techniques where artificial intelligence is utilized to enhance the effectiveness and stealth of malicious activities including:

Fake content generation

The capabilities offered by Generative Adversarial Networks (GANs) and LLMs will allow threat actors to create entirely fake, but legitimate-looking, image and video content for social media. This content, used in combination with LLM-enhanced messaging, has the potential to convince the unwitting or the highly targeted to click on malicious links and to assist in further propagation through organic sharing.

Enhanced phishing lures


AI-powered attacks go beyond traditional phishing methods. AI-powered tools make light work of the research and footprinting activities that were previously reserved for more sophisticated attacks. AI-enhanced phishing will be highly targeted and well-crafted. These emails will not only be more convincing, but threat actors will also be able to automate the fine-tuning of content and tactics in near real-time. This level of sophistication increases the likelihood of successful social engineering attacks and credential harvesting.

AI-Generated or Assisted Malware

In more advanced scenarios, AI will be directly involved in the development and execution of malware. While contemporary examples are rare, such as the Black Mamba proof of concept from HYAS Labs, they do showcase the potential of AI to assist in crafting malware. In reality, the Black Mamba PoC offered little in the way of new functionality; abuse of legitimate sites for C2, polymorphism, and writing malicious code solely to memory are hardly innovative.

There is an argument to be made, though, that this approach was already hamstrung by restricting itself to human thought processes. Asking an AI to develop an idea formulated by a human fails to maximize the innovative potential of AI. Outside of this paradigm, the potential for the development of AI-assisted or AI-generated malware that is not only evasive but can adapt its behavior based on the target environment is real. This will pose a significant challenge for traditional cybersecurity measures, equally constrained by human understanding.

The potential threat isn’t limited to conventional computing systems. Internet of Things (IoT) and Operational Technology (OT) devices, integral components of many connected ecosystems, are increasingly targeted for sophisticated cyberattacks. AI-powered malware could just as easily exploit vulnerabilities in IoT devices, gaining unauthorized access to networks. Malicious actors can leverage AI to craft attacks tailored to the specific vulnerabilities of IoT devices, potentially causing disruptions or unauthorized access.

OT involved in industrial processes and critical infrastructure is equally at risk. AI-powered malware may target these systems, leading to disruptions in manufacturing, energy production, or even transportation. The ability of AI to analyze and adapt to intricate OT environments poses a unique challenge, overcoming the knowledge-gap that has for so long been a barrier to the widespread dissemination of attacks.

A comprehensive strategy that recognizes the distinct challenges posed by AI-powered malware in these environments is crucial to ensure the resilience and security of connected ecosystems in the future. Here are five critical steps to optimize defenses and prepare for the challenge.

Complete visibility

The ability to see and understand every connected asset in your environment is crucial. It enables the detection of anomalous behavior, identification of risk, and a swift response to potential threats. Effective security relies on the solid foundation of visibility. Know good; detect bad.
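"Know good; detect bad" can be reduced to a simple idea: keep a known-good baseline of every connected asset and flag anything that deviates from it. A minimal sketch, in which the asset names, port sets, and firmware fields are purely illustrative:

```python
# Known-good baseline for each connected asset (illustrative values).
baseline = {
    "plc-01": {"ports": {502}, "firmware": "2.1"},
    "cam-07": {"ports": {554}, "firmware": "1.4"},
}

def find_anomalies(observed):
    """Compare the live inventory against the baseline and list deviations."""
    anomalies = []
    for asset, state in observed.items():
        known = baseline.get(asset)
        if known is None:
            anomalies.append((asset, "unknown asset on the network"))
            continue
        new_ports = state["ports"] - known["ports"]
        if new_ports:
            anomalies.append((asset, f"unexpected open ports: {sorted(new_ports)}"))
        if state["firmware"] != known["firmware"]:
            anomalies.append((asset, "firmware drift"))
    return anomalies

observed = {
    "plc-01": {"ports": {502, 4444}, "firmware": "2.1"},
    "rogue-ap": {"ports": {80}, "firmware": "?"},
}
print(find_anomalies(observed))
```

The point is not the code itself but the prerequisite it exposes: without a complete, trusted inventory to diff against, anomalous behavior is indistinguishable from normal noise.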

Continuous Risk Assessment

Traditional risk assessments are point-in-time evaluations, but as AI algorithms learn and adapt, the risks to a system will change dynamically. Continuous risk assessment means evaluating security posture in real time, identifying changes, anomalies, and emerging risk, and adapting defenses accordingly.
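The shift from point-in-time to continuous assessment can be sketched as re-scoring an asset the moment a new risk signal arrives, rather than waiting for the next scheduled audit. The signal names and weights below are assumptions for illustration, not a standard scoring scheme:

```python
# Illustrative risk weights; unrecognized signals score conservatively high.
WEIGHTS = {"unpatched_cve": 5, "anomalous_traffic": 3, "new_service": 2}

def risk_score(signals):
    """Sum weighted risk signals for an asset."""
    return sum(WEIGHTS.get(s, 4) for s in signals)

class AssetRisk:
    """Tracks an asset's risk signals and re-scores on every change."""
    def __init__(self, name):
        self.name = name
        self.signals = set()

    def observe(self, signal):
        self.signals.add(signal)
        return risk_score(self.signals)  # evaluated continuously, not yearly

asset = AssetRisk("hmi-03")
print(asset.observe("new_service"))    # score after first signal
print(asset.observe("unpatched_cve"))  # score updates immediately
```

In practice the re-score would trigger downstream actions, such as tightening monitoring or adjusting firewall policy, as the posture changes.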

Minimize Attack Surface

AI-powered attacks will often exploit vulnerabilities in systems and processes. By minimizing the attack surface, organizations can significantly reduce the potential vectors for attack and make it more challenging for malicious actors to find and exploit weaknesses. This means not only securing unnecessary services, closing unused ports, and limiting user privileges, but also evaluating business processes that socially engineered attacks may seek to exploit.
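One concrete form of attack-surface minimization is comparing what is actually listening on a device against an explicit allowlist, then disabling everything else. A small sketch, where the service/port pairs are hypothetical:

```python
# Explicit allowlist of services that are permitted to listen (hypothetical).
ALLOWED = {("sshd", 22), ("https", 443)}

def excess_services(listening):
    """Return services that widen the attack surface beyond the allowlist."""
    return sorted(set(listening) - ALLOWED)

# Snapshot of what a host is actually exposing (hypothetical).
listening = [("sshd", 22), ("telnetd", 23), ("https", 443), ("debug-api", 8081)]
print(excess_services(listening))
```

The same allowlist discipline applies beyond ports: every user privilege and every business process that accepts external input should be justified or removed.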

Build a Defensible Environment

A defensible environment is one that is designed with security in mind from the ground up, or one that has been holistically re-evaluated from the same perspective. Strong authentication mechanisms, encryption of sensitive data, and properly segmented networks all help to mitigate and contain potential breaches. In the face of AI-powered attacks, a defensible environment means that even if one part of the system is compromised, the overall integrity of the network remains resilient, making it more challenging for attackers to move laterally and escalate privileges.
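Segmentation, the piece of a defensible environment that most directly frustrates lateral movement, amounts to a default-deny policy between zones: a flow is permitted only if it is explicitly allowed. A toy policy check, with invented zone names:

```python
# Explicitly allowed zone-to-zone flows; everything else is denied by default.
# Zone names are illustrative.
ALLOWED_FLOWS = {
    ("corp-it", "dmz"),
    ("dmz", "internet"),
    ("eng-ws", "ot-scada"),
}

def flow_permitted(src_zone, dst_zone):
    """Default-deny: only explicitly allowed zone pairs may communicate."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

print(flow_permitted("corp-it", "dmz"))       # an allowed business flow
print(flow_permitted("corp-it", "ot-scada"))  # lateral move into OT: denied
```

Even if an attacker compromises a corp-IT host, the policy gives them no direct path into the OT zone, which is exactly the containment the paragraph above describes.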

Manage, Automate, Monitor, Respond, Archive

AI-powered attacks will become progressively more common, and a well-rounded security approach involves more than simply managing incidents effectively. Automation helps in responding to threats at machine speed, and archiving data is crucial for post-incident analysis and further response. The knowledge that a specific asset has been compromised facilitates immediate proactive steps in securing all other devices that fit that same risk profile.
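The "one compromise informs many" idea in the paragraph above can be sketched as an automated response step: when an asset is flagged, every asset sharing its risk profile is proactively quarantined and the event is archived for post-incident analysis. The inventory contents and profile keys are hypothetical:

```python
# Asset inventory keyed by a shared risk profile (hypothetical values).
inventory = {
    "cam-01": "ip-camera-fw1.4",
    "cam-02": "ip-camera-fw1.4",
    "plc-01": "plc-fw2.1",
}
archive = []  # retained for post-incident analysis

def respond(compromised_asset):
    """Quarantine all assets matching the compromised asset's risk profile."""
    profile = inventory[compromised_asset]
    peers = [a for a, p in inventory.items() if p == profile]
    archive.append({
        "event": "compromise",
        "asset": compromised_asset,
        "quarantined": peers,
    })
    return peers

print(respond("cam-01"))  # cam-02 is quarantined alongside cam-01
```

A real pipeline would feed this from detection tooling and act at machine speed; the sketch only shows the decision logic and the archival step.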

These five points represent an outline for effective preparation to defend against future AI-powered attacks. They emphasize proactive measures, adaptability, and a holistic approach to cybersecurity. As AI continues to evolve, a defensive strategy that is equally dynamic and adaptable is essential for organizations to stay ahead of emerging threats.


Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
