Artificial Intelligence

Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting

Microsoft threat hunters say foreign APTs are interacting with OpenAI’s ChatGPT to automate malicious vulnerability research, target reconnaissance and malware creation tasks.

Microsoft’s threat intelligence team has captured evidence of foreign government-backed hacking teams interacting with OpenAI’s ChatGPT to automate malicious vulnerability research, target reconnaissance and malware creation tasks.

In a research report published Wednesday, Microsoft said it partnered with OpenAI to study the use of LLMs by malicious actors and found multiple known APTs experimenting with the popular ChatGPT tool to learn about potential victims, improve malware scripting tasks and pick apart public security advisories.

While the research did not identify significant attacks employing the monitored LLMs, Microsoft said it caught hacking teams from Russia, China, North Korea and Iran using LLMs in active APT operations.

In one case, Redmond’s threat hunters saw the Russian APT known as Forest Blizzard (also tracked as APT28 or Fancy Bear) using LLMs to conduct research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as generic research aimed at supporting the group’s cyber operations.

In another case, Microsoft said it caught the notorious North Korean APT Emerald Sleet (aka Kimsuky) using LLMs to generate content likely to be used in spear-phishing campaigns. In addition, the Pyongyang hackers were caught using LLMs to understand publicly known vulnerabilities, to troubleshoot technical issues, and for assistance with using various web technologies.

“Interactions have involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine,” Microsoft said.

Redmond also found evidence of APT groups using the generative-AI technology to “better understand publicly reported vulnerabilities, such as the CVE-2022-30190 Microsoft Support Diagnostic Tool (MSDT) vulnerability (known as ‘Follina’).”

In all observed cases, Microsoft said it worked with OpenAI to disable all accounts and assets associated with the advanced threat actors.

Related: OpenAI Turns to Security to Sell ChatGPT Enterprise

Related: Microsoft Confirms Windows Exploits Bypassing Security Features

Related: Microsoft Puts ChatGPT to Work on Automating Cybersecurity

Related: Aim Security Raises $10M to Tackle Shadow AI

Written By

Ryan Naraine is Editor-at-Large at SecurityWeek and host of the popular Security Conversations podcast series. He is a security community engagement expert who has built programs at major global brands, including Intel Corp., Bishop Fox and GReAT. Ryan is a founding-director of the Security Tinkerers non-profit, an advisor to early-stage entrepreneurs, and a regular speaker at security conferences around the world.

