Threat actors can exploit artificial intelligence chatbots such as ChatGPT to help distribute malicious code packages to software developers, according to vulnerability and risk management company Vulcan Cyber.
The issue stems from hallucinations, which occur when an AI model, specifically a large language model (LLM) such as ChatGPT, generates factually incorrect or nonsensical information that may nevertheless look plausible.
In Vulcan’s analysis, the company’s researchers noticed that ChatGPT, possibly because it was trained on older data, recommended code libraries that do not exist.
The researchers warned that threat actors could collect the names of such non-existent packages and create malicious versions that developers could download based on ChatGPT’s recommendations.
Specifically, Vulcan researchers analyzed popular questions on the Stack Overflow coding platform and asked ChatGPT those questions in the context of Python and Node.js.
ChatGPT was asked more than 400 questions, and roughly 100 of its responses referenced at least one Python or Node.js package that does not actually exist. In total, ChatGPT’s responses mentioned more than 150 non-existent packages.
An attacker can collect the names of the packages recommended by ChatGPT and publish malicious versions under those names to popular repositories. Since the AI is likely to recommend the same packages to others asking similar questions, unsuspecting developers may then find and install the attacker’s malicious version.
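To see how easily such hallucinated names can be identified, consider the short Python sketch below, which checks whether a recommended package name is actually registered on PyPI via the registry’s public JSON API. The package names in the example are hypothetical stand-ins, not ones from Vulcan’s research; any name that comes back unregistered is exactly the kind an attacker could claim.

```python
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if the package name is registered on PyPI.

    Queries PyPI's public JSON API; a 404 response means the name
    is unclaimed and could be squatted by an attacker.
    """
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# Hypothetical names taken from a chatbot's answer; any that print
# "NOT on PyPI" are hallucinated and free for anyone to register.
for name in ["requests", "totally-made-up-http-helper"]:
    print(name, "exists" if exists_on_pypi(name) else "NOT on PyPI")
```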
Vulcan Cyber demonstrated how this method could work in the wild by creating a proof-of-concept package capable of stealing system information from a device and uploading it to the npm registry.
“It can be difficult to tell if a package is malicious if the threat actor effectively obfuscates their work, or uses additional techniques such as making a trojan package that is actually functional,” the company explained.
“Given how these actors pull off supply chain attacks by deploying malicious libraries to known repositories, it’s important for developers to vet the libraries they use to make sure they are legitimate. This is even more important with suggestions from tools like ChatGPT which may recommend packages that don’t actually exist, or didn’t before a threat actor created them,” it added.
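As a practical illustration of that vetting advice, here is a minimal sketch, assuming PyPI’s public JSON API and using deliberately arbitrary thresholds, that flags packages that are very new or have almost no release history. These are rough heuristics for spotting a freshly squatted name, not proof of malice; plenty of legitimate new packages would trip them too.

```python
import json
import urllib.request
from datetime import datetime, timezone

def pypi_release_info(package: str) -> dict:
    """Fetch release metadata for a package from PyPI's JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def looks_suspicious(package: str, min_age_days: int = 90) -> bool:
    """Flag packages that are very new or have minimal release history.

    A brand-new package with a single release matches the profile of a
    squatted hallucinated name; the thresholds here are illustrative.
    """
    data = pypi_release_info(package)
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        return True  # registered name with no files uploaded at all
    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    return age_days < min_age_days or len(data["releases"]) < 2

print(looks_suspicious("requests"))  # False: years old, many releases
```

The same idea applies to Node.js dependencies, since the npm registry exposes comparable package metadata, including upload timestamps, through its public API.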
Related: Malicious Prompt Engineering With ChatGPT
Related: ChatGPT’s Chief Testifies Before Congress, Calls for New Agency to Regulate Artificial Intelligence
Related: Vulnerability Could Have Been Exploited for ‘Unlimited’ Free Credit on OpenAI Accounts

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.