SecurityWeek

Artificial Intelligence

Malware Now Uses AI During Execution to Mutate and Collect Data, Google Warns

Google has released a report describing the novel ways in which malware has been using AI to adapt and evade detection.


Google’s Threat Intelligence Group (GTIG) has seen several new and interesting ways in which malware has been leveraging artificial intelligence, going beyond its use for productivity gains.

For some time now, cybercriminals and state-sponsored threat actors have leveraged AI to develop and enhance malware, plan attacks, and craft social engineering lures.

The cybersecurity industry has also observed and demonstrated the potential for malware to utilize AI during execution.

For instance, the PromptLock ransomware, which made headlines a few months ago over its use of AI to generate scripts on the fly and perform various actions on compromised systems, is an experimental proof-of-concept developed by researchers. 

Google researchers, however, have come across several other pieces of malware that use AI during an attack. While some of them, like PromptLock, have been described as “experimental threats”, others have been used in the wild.

Another experimental AI-powered malware seen by Google is PromptFlux, a dropper that can “regenerate” itself by rewriting its code and saving the new version in the Startup folder for persistence.  


“PromptFlux is written in VBScript and interacts with Gemini’s API to request specific VBScript obfuscation and evasion techniques to facilitate ‘just-in-time’ self-modification, likely to evade static signature-based detection,” GTIG researchers explained. 

One of the pieces of malware seen in the wild is FruitShell, a reverse shell written in PowerShell that enables arbitrary command execution on compromised systems. The malware includes hardcoded AI prompts designed to bypass detection and analysis by AI-powered security solutions. 

Another malware family highlighted by GTIG is PromptSteal, a Python-based data miner that leverages the Hugging Face API to query the Qwen2.5-Coder-32B-Instruct LLM in order to generate one-line Windows commands for collecting system data and documents from specific folders.
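The report does not include PromptSteal’s code, but the pattern GTIG describes — asking a hosted model for a single command at runtime instead of hardcoding it — can be sketched in Python. Only the model name below comes from the report; the endpoint URL, prompt wording, and response parsing are assumptions for illustration, and the sketch deliberately returns the generated text rather than executing anything.

```python
import json
import urllib.request

# Model named in the GTIG report; the inference endpoint is an assumption.
HF_MODEL = "Qwen/Qwen2.5-Coder-32B-Instruct"
API_URL = f"https://api-inference.huggingface.co/models/{HF_MODEL}"


def build_prompt(task: str) -> str:
    """Construct a narrow instruction asking for a single Windows command."""
    return (
        "Reply with exactly one Windows cmd.exe one-liner, no explanation, "
        f"that performs the following task: {task}"
    )


def extract_command(generated_text: str) -> str:
    """Treat the first non-empty line of the model output as the command."""
    for line in generated_text.splitlines():
        line = line.strip()
        if line:
            return line
    return ""


def query_model(task: str, token: str) -> str:
    """Send the prompt to the Hugging Face Inference API (requires network
    access and an API token); return the parsed one-liner without running it."""
    payload = json.dumps({"inputs": build_prompt(task)}).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return extract_command(body[0]["generated_text"])
```

The significant point for defenders is that the command never appears in the binary: only the prompt does, which is why GTIG notes that hardcoded prompts themselves are becoming a detectable artifact.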

The last example highlighted by Google is QuietVault, a JavaScript credential stealer designed to collect NPM and GitHub tokens. The malware uses an AI prompt and AI command-line interface tools installed on the compromised host to look for other secrets on the system.

“While still nascent, this represents a significant step toward more autonomous and adaptive malware,” GTIG researchers said, later adding, “We are only now starting to see this type of activity, but expect it to increase in the future.”

Google’s report also describes other aspects of threat actors’ use of AI. The tech giant has seen threat actors use prompts that amount to “social engineering” of the model itself to bypass AI guardrails.

The company also warns that the underground marketplace for AI tools is maturing. Its researchers have seen multifunctional tools designed for malware development, phishing, and vulnerability research.

“While adversaries are certainly trying to use mainstream AI platforms, guardrails have driven many to models available in the criminal underground,” explained Billy Leonard, tech lead at Google Threat Intelligence Group. “Those tools are unrestricted, and can offer a significant advantage to the less advanced. There are several of these available now, and we expect they will lower the barrier to entry for many criminals.”

In addition, nation-state actors linked to China, Iran and North Korea have continued to use Google’s Gemini to enhance reconnaissance, data exfiltration, command and control systems, and other components of their operations. 

Related: How Software Development Teams Can Securely and Ethically Deploy AI Tools

Related: Claude AI APIs Can Be Abused for Data Exfiltration

Related: AI Sidebar Spoofing Puts ChatGPT Atlas, Perplexity Comet and Other Browsers at Risk

Written By

Eduard Kovacs (@EduardKovacs) is senior managing editor at SecurityWeek. He worked as a high school IT teacher before starting a career in journalism in 2011. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.
