

ChatGPT Hallucinations Can Be Exploited to Distribute Malicious Code Packages

Researchers show how ChatGPT/AI hallucinations can be exploited to distribute malicious code packages to unsuspecting software developers.

It’s possible for threat actors to manipulate artificial intelligence chatbots such as ChatGPT to help them distribute malicious code packages to software developers, according to vulnerability and risk management company Vulcan Cyber. 

The issue is related to hallucinations, which occur when AI, specifically a large language model (LLM) such as ChatGPT, generates factually incorrect or nonsensical information that may look plausible. 

In Vulcan’s analysis, the company’s researchers noticed that ChatGPT — possibly due to its use of older data for training — recommended code libraries that currently do not exist. 

The researchers warned that threat actors could collect the names of such non-existent packages and create malicious versions that developers could download based on ChatGPT’s recommendations.

Specifically, Vulcan researchers analyzed popular questions on the Stack Overflow coding platform and asked ChatGPT those questions in the context of Python and Node.js. 

ChatGPT was asked more than 400 questions and roughly 100 of its responses included references to at least one Python or Node.js package that does not actually exist. In total, ChatGPT’s responses mentioned more than 150 non-existent packages.

An attacker can collect the names of the packages recommended by ChatGPT and create malicious versions. Since the AI is likely to recommend the same packages to others asking similar questions, unsuspecting developers may look for and install the malicious version uploaded by the attacker to popular repositories. 
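One practical defense is to confirm that a recommended package name actually resolves on the official registry before installing it. The following is a minimal sketch, assuming the third-party requests library and the public PyPI JSON API and npm registry endpoints; the package names used are hypothetical examples.

```python
# Sketch: check whether a recommended package name actually exists on PyPI
# or the npm registry before trusting it. Both registries return HTTP 404
# for names that have never been published.
import requests

def exists_on_pypi(name: str) -> bool:
    # PyPI's public JSON API: 200 if the project exists, 404 otherwise.
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

def exists_on_npm(name: str) -> bool:
    # The npm registry behaves the same way for unknown package names.
    resp = requests.get(f"https://registry.npmjs.org/{name}", timeout=10)
    return resp.status_code == 200

# "some-hallucinated-package" is a hypothetical name for illustration.
for pkg in ["requests", "some-hallucinated-package"]:
    print(pkg, "on PyPI:", exists_on_pypi(pkg))
```

A name that does not resolve today is exactly the kind an attacker could later register, so absence is itself a warning sign. Existence alone is not proof of legitimacy either, since the attacker may already have uploaded a package under the hallucinated name.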


Vulcan Cyber demonstrated how this method would work in the wild by creating a package that can steal system information from a device and uploading it to the npm registry.

“It can be difficult to tell if a package is malicious if the threat actor effectively obfuscates their work, or uses additional techniques such as making a trojan package that is actually functional,” the company explained. 

“Given how these actors pull off supply chain attacks by deploying malicious libraries to known repositories, it’s important for developers to vet the libraries they use to make sure they are legitimate. This is even more important with suggestions from tools like ChatGPT which may recommend packages that don’t actually exist, or didn’t before a threat actor created them,” it added.
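To make that vetting concrete, one hedged heuristic is to check how long a package has been on the registry: a name whose first release appeared only days ago, after an AI assistant began recommending it, deserves extra scrutiny. Below is a minimal sketch, again assuming the requests library and the public PyPI JSON API; the 90-day threshold is an arbitrary illustration, not an established cutoff.

```python
# Sketch: flag PyPI packages whose first release is very recent, since a
# name that only appeared days ago may have been registered by an attacker
# after an AI tool started recommending it.
from datetime import datetime, timedelta, timezone
from typing import Optional

import requests

def first_release_date(name: str) -> Optional[datetime]:
    # PyPI's JSON API lists every release file with its upload timestamp.
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        return None  # never published at all
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in resp.json()["releases"].values()
        for f in files
    ]
    return min(uploads) if uploads else None

def looks_suspicious(name: str, min_age_days: int = 90) -> bool:
    first = first_release_date(name)
    if first is None:
        return True  # nonexistent names are candidates for squatting
    return datetime.now(timezone.utc) - first < timedelta(days=min_age_days)

print(looks_suspicious("requests"))  # long-established package, expect False
```

Recency is only one signal; maintainer history, download counts, and a read of the package source remain part of any serious vetting process.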

Related: Malicious Prompt Engineering With ChatGPT

Related: ChatGPT’s Chief Testifies Before Congress, Calls for New Agency to Regulate Artificial Intelligence

Related: Vulnerability Could Have Been Exploited for ‘Unlimited’ Free Credit on OpenAI Accounts

Written By

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.
