SecurityWeek

Artificial Intelligence

AI Hallucinated Packages Fool Unsuspecting Developers

Software developers relying on AI chatbots for building applications may end up using hallucinated software packages.

Software developers relying on AI chatbots for building applications may end up using hallucinated packages, according to a new report from generative AI security startup Lasso Security.

Continuing research from last year, Lasso’s Bar Lanyado demonstrated once again how large language model (LLM) tools can be tricked into recommending software packages that do not exist.

Threat actors, he warned last year, could learn the names of these hallucinated packages and create malicious ones with the same names that could be downloaded based on recommendations made by AI chatbots.

Scaling up the research, Lanyado asked four different models, namely GPT-3.5-Turbo, GPT-4, Gemini Pro (formerly Bard), and Coral (Cohere), more than 40,000 “how to” questions, using the LangChain framework for interaction.

To check how repeatable the hallucinations were, the researcher re-asked 20 questions that had produced zero-shot hallucinations (questions for which the model recommended a hallucinated package in its first answer).

All four chatbots returned hallucinated packages in more than 20% of their answers, with Gemini peaking at 64.5%. Repetitiveness was around 15%, with Cohere peaking at 24.2%.
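The core of such an experiment can be sketched in a short script: extract package names from a chatbot’s answer and check whether each name is actually registered. The regex, function names, and sample answer below are illustrative assumptions rather than Lanyado’s actual tooling, but the JSON endpoint (`https://pypi.org/pypi/<name>/json`) is PyPI’s real API, which returns 404 for unregistered names.

```python
import re
import urllib.error
import urllib.request

# Simplified pattern: captures only the first package name after "pip install".
PIP_INSTALL = re.compile(r"pip\s+install\s+([A-Za-z0-9._-]+)")

def extract_pip_packages(answer: str) -> list[str]:
    """Pull package names out of 'pip install ...' commands in an LLM answer."""
    return PIP_INSTALL.findall(answer)

def exists_on_pypi(name: str) -> bool:
    """Query PyPI's JSON API; a 404 means the name is unregistered
    (and therefore free for an attacker to claim)."""
    try:
        with urllib.request.urlopen(
            f"https://pypi.org/pypi/{name}/json", timeout=10
        ) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# Example usage (requires network access):
#   exists_on_pypi("requests")  returns True, since the package is registered;
#   a hallucinated name would return False until someone claims it.
```

A hallucinated name that repeatedly shows up in answers and returns `False` here is exactly the kind of name an attacker could register with malicious content.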

The most worrying finding, however, is that an empty package the researcher uploaded under a hallucinated name was downloaded over 30,000 times on the strength of AI recommendations. Furthermore, the same package was found to be used or recommended by several large companies.

“For instance, instructions for installing this package can be found in the README of a repository dedicated to research conducted by Alibaba,” the researcher explains.


This research, Lanyado notes, underlines once again the need for cross-verification when receiving uncertain answers from an LLM, especially regarding software packages.

Lanyado also advises developers to be cautious when relying on open source software, especially when encountering unfamiliar packages, urging them to verify the package’s repository and evaluate its community and engagement before using it.

“Also, consider the date it was published and be on the lookout for anything that appears suspicious. Before integrating the package into a production environment, it’s prudent to perform a comprehensive security scan,” Lanyado notes.
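The checks Lanyado describes, a linked repository and a plausible publication date, can be scripted against the same PyPI metadata. The function name and the 90-day threshold below are illustrative assumptions; the metadata shape matches what PyPI’s JSON API serves.

```python
from datetime import datetime, timezone

def vet_pypi_metadata(meta: dict, min_age_days: int = 90) -> list[str]:
    """Return red flags found in a package's PyPI JSON metadata
    (the dict served at https://pypi.org/pypi/<name>/json)."""
    flags = []
    urls = meta.get("info", {}).get("project_urls") or {}
    if not any(
        host in (u or "").lower()
        for u in urls.values()
        for host in ("github.", "gitlab.", "bitbucket.")
    ):
        flags.append("no linked source repository")
    # The earliest upload across all releases approximates the publish date.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in meta.get("releases", {}).values()
        for f in files
    ]
    if not uploads:
        flags.append("no uploaded release files")
    elif (datetime.now(timezone.utc) - min(uploads)).days < min_age_days:
        flags.append("first published very recently")
    return flags
```

Such heuristics only flag candidates for closer inspection; as the researcher notes, a full security scan is still prudent before anything reaches a production environment.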

Related: ChatGPT Hallucinations Can Be Exploited to Distribute Malicious Code Packages

Related: Suspicious NuGet Package Harvesting Information From Industrial Systems

Related: Thousands of Code Packages Vulnerable to Repojacking Attacks

Written By

Ionut Arghire is an international correspondent for SecurityWeek.

