
Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting

Microsoft threat hunters say foreign APTs are interacting with OpenAI’s ChatGPT to automate malicious vulnerability research, target reconnaissance and malware creation tasks.


Microsoft’s threat intelligence team has captured evidence of foreign government-backed hacking teams interacting with OpenAI’s ChatGPT to automate malicious vulnerability research, target reconnaissance and malware creation tasks.

In a research report published Wednesday, Microsoft said it partnered with OpenAI to study the use of LLMs by malicious actors and found multiple known APTs experimenting with the popular ChatGPT tool to learn about potential victims, improve malware scripting tasks and pick apart public security advisories.

While the research did not identify significant attacks employing the monitored LLMs, Microsoft said it caught hacking teams from Russia, China, North Korea and Iran using LLMs in active APT operations.

In one case, Redmond’s threat hunters observed the Russian APT known as Forest Blizzard (APT28/Fancy Bear) using LLMs to research satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as to conduct generic research in support of its cyber operations.

In another case, Microsoft said it caught the notorious North Korean APT Emerald Sleet (aka Kimsuky) using LLMs to generate content likely to be used in spear-phishing campaigns. The Pyongyang hackers were also caught using LLMs to understand publicly known vulnerabilities, troubleshoot technical issues, and get assistance with various web technologies.

“Interactions have involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine,” Microsoft said.

Redmond also found evidence of APT groups using the generative-AI technology to “better understand publicly reported vulnerabilities, such as the CVE-2022-30190 Microsoft Support Diagnostic Tool (MSDT) vulnerability (known as ‘Follina’).”

In all observed cases, Microsoft said it worked with OpenAI to disable all accounts and assets associated with the advanced threat actors.


Related: OpenAI Turns to Security to Sell ChatGPT Enterprise

Related: Microsoft Confirms Windows Exploits Bypassing Security Features

Related: Microsoft Puts ChatGPT to Work on Automating Cybersecurity

Related: Aim Security Raises $10M to Tackle Shadow AI

