
The Security Risks of Generative AI Package Hallucinations

Threat actors can use non-existent software packages that models return in responses to drop their malicious code.
Apr 4th, 2024 8:21am

The skyrocketing innovation and adoption of generative AI that took off almost immediately after OpenAI released its ChatGPT chatbot was closely followed by concerns about the security of the emerging technology.

The worries stretch from what large language models (LLMs) can do to how they can be manipulated and abused. The risks include sensitive corporate or personal data leaking from generative AI systems, threat groups using the technology to craft more convincing phishing emails, jailbroken models that work around established rules, and prompt injection attacks, in which a bad actor uses malicious inputs to manipulate an LLM into executing their attack.

Seven months ago, Bar Lanyado, at the time a security researcher at Vulcan Cyber, wrote of another vulnerability that could pose a threat to developers if exploited: AI package hallucinations.

ChatGPT, Google’s Gemini, and similar chatbots can respond to prompts with references, URLs, and other information that aren’t accurate or true — also called hallucinations. It’s the same story with code libraries — sometimes the package returned in the response isn’t real.

And that’s where the danger lies, Lanyado wrote in his June 2023 report. He found that an attacker could pose questions developers were likely to ask to ChatGPT, running atop OpenAI’s GPT-3.5 Turbo LLM, and that the chatbot would return a list of software packages, some of which didn’t exist on repositories such as GitHub. For almost 30% of his questions, the model recommended at least one hallucinated package that threat actors could use in attacks.

“When the attacker finds a recommendation for an unpublished package, they can publish their own malicious package in its place,” he wrote at the time. “The next time a user asks a similar question they may receive a recommendation from ChatGPT to use the now-existing malicious package.”

A New Way to Spread Malicious Packages

This technique would give bad actors another method for spreading malicious packages, alongside established techniques like typosquatting, dependency confusion, and trojan packages. Those methods are known and detectable, but as more developers shift from searching online to asking ChatGPT when looking for code libraries, the threat of AI package hallucinations rises.

“If this attack is successful, a developer might follow the instructions of the model, and download and run a package on their computer,” Lanyado, now a security researcher at Lasso Security, told The New Stack. “At this point, the attacker could have a foothold on the developer’s station. It is very hard to detect at this stage, as it is pre-deployment and [ahead of] CI/CD security tools. In the worst-case scenario, the malicious package will find its way to production servers, and the attacker will have control of them, too.”

Last week, Lanyado released a follow-up report on AI package hallucinations to see whether vendors had responded to what he found last year, whether the same package hallucinations appear across different LLMs, and whether he could simulate an attack in the wild.

More Models, More Languages

In his latest study, he scaled up his work. In the first report, he used the technique on GPT-3.5 Turbo and asked 457 “how to” questions about 40 subjects in two programming languages, Node.js and Python. This time he asked many times more questions about 100 subjects in five languages (Python, Node.js, Go, .NET, and Ruby) and queried four models through their APIs: OpenAI’s GPT-3.5 Turbo and GPT-4, Google’s Gemini Pro (formerly Bard), and Cohere’s Coral. Lanyado also used the LangChain framework for his interactions with the models, keeping the default prompt that tells the model to say so if it doesn’t know the answer, which he expected would make it more difficult for the models to offer hallucinations.
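To make the approach concrete, here is a minimal sketch of how such an automated query might look using LangChain and the OpenAI API. The exact prompts, chains, and parsing from Lanyado's study are not public, so the question text and the package-extraction regex below are assumptions made purely for illustration.

```python
# Illustrative sketch only: ask a chat model a "how to" question, then check
# whether each pip package it recommends actually exists on PyPI.
# Requires OPENAI_API_KEY in the environment; pip install langchain-openai requests
import re
import requests
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Hypothetical developer-style question, not one from the report.
question = (
    "How do I upload a dataset to Hugging Face from Python? "
    "Please list the pip packages I need to install."
)
answer = llm.invoke(question).content

# Naive extraction of candidate package names from "pip install <name>" lines.
candidates = set(re.findall(r"pip install ([A-Za-z0-9_.\-]+)", answer))

for name in candidates:
    # PyPI's JSON API returns 404 for packages that do not exist.
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    status = "exists" if resp.status_code == 200 else "NOT FOUND (possible hallucination)"
    print(f"{name}: {status}")
```

Names that come back as not found are the candidate hallucinations; according to the report, the ones that recur across many questions and models are the ones most likely to be exploited.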

GPT-3.5 Turbo and GPT-4 had the best results, with 22.2% and 24.2%, respectively, of their results being hallucinations. Cohere's Coral came in at 29.1%, and Gemini Pro at 64.5%.

Regarding the languages, there were many hallucinations with Go and .NET, though not all of them could be exploited by cybercriminals.

“Not all risks are born equal,” Lanyado said. “The fact that some of the languages are less prone to this kind of attack — or at least the option to exploit it — like Go is meaningful. The risk is different for developers using different models and code assistants, but also different programming languages.”

There were also thousands of AI package hallucinations that appeared in multiple models: 2,253 shared between GPT-3.5 Turbo and Gemini Pro, 1,449 between GPT-4 and Gemini Pro, and 1,400 between Cohere and Gemini Pro. He noted that no model was immune to this crossover, adding that “the fact that the same hallucinated packages return in several models, increases the probability that a developer will use it. We can assume that some packages are at higher risk than others.”

Trying It in the Wild

The testing also turned up a hallucinated package called “huggingface-cli,” so Lanyado uploaded an empty package under that name. He also uploaded a dummy package called “blabladsa123” to gauge how many of the downloads came from automated scanners. Over three months, the fake, empty huggingface-cli package was downloaded more than 15,000 times, “a very, very strong indication that this is a real problem that can and probably already affected numerous organizations,” Lanyado said.
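For context on what such a placeholder looks like, below is a minimal sketch of an empty Python package definition of the kind that could be registered under a hallucinated name to count downloads. The metadata is hypothetical and is not taken from Lanyado's actual upload.

```python
# Hypothetical minimal setup.py for an empty placeholder package.
# It ships no code; its only purpose would be to claim a package name so the
# registry's download statistics show how often that name is requested.
from setuptools import setup

setup(
    name="example-hallucinated-name",  # placeholder; not the name used in the study
    version="0.0.1",
    description="Empty placeholder registered to measure downloads of a hallucinated package name",
    packages=[],  # deliberately empty: no importable code is distributed
)
```

Built with `python -m build` and pushed with `twine upload`, such a package installs harmlessly, which is also why the empty "blabladsa123" baseline was useful for separating automated scanner traffic from real installs.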

Even more, a search on GitHub found that the package turned up in other companies’ repositories, with several large organizations either using or recommending it in their own repos.

“For instance, instructions for installing the package can be found in the README of a repository dedicated to research conducted by Alibaba,” he wrote in the report.

Threat Actors Are Eyeing AI Models

Adding to the concern, threat actors are showing a strong interest in targeting generative AI models and are using various tactics to attack through open source models, data source poisoning, and chatbots. For an experienced hacker, AI package hallucinations are an easy attack to run.

“Trying these models manually via their interface — even in their free version — can give strong intuition about hallucinated packages that occur often,” Lanyado said. “Automating this process and using the APIs is both cheap and easy, so advanced attackers can get a wide picture of the attack surface. After detecting the specific use-cases and hallucinated packages the attacker wishes to exploit, uploading them to the repositories is a matter of minutes.”

He advised developers to be careful when relying on LLMs and to verify the accuracy of an answer if it doesn’t seem right, particularly when it concerns software packages. They should also be cautious when using open source software: if developers see an unfamiliar package, they should gather information about it from its repository, evaluate the size of its community, and check its maintenance record, known vulnerabilities, and the engagement it gets.

They should also check the date it was published, watch for anything suspicious, and run a comprehensive security scan before integrating the package into a production environment.
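As a rough illustration of that kind of check, the sketch below pulls a package's public metadata from the PyPI JSON API. The signals it inspects (whether the package exists, its latest release date, release count, and project URLs) are examples in the spirit of Lanyado's advice, not an exhaustive vetting tool.

```python
# Illustrative pre-install check against PyPI's public JSON API.
# Surfaces basic signals about a package before a developer decides to install it.
import sys
from datetime import datetime, timezone

import requests


def inspect_package(name: str) -> None:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        print(f"'{name}' is not on PyPI; treat any recommendation of it with suspicion.")
        return

    data = resp.json()
    info = data["info"]
    releases = data["releases"]

    # Find the most recent upload time across all releases.
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    latest = max(upload_times) if upload_times else None

    print(f"name:          {info['name']}")
    print(f"summary:       {info.get('summary') or '(none)'}")
    print(f"home page:     {info.get('home_page') or '(none)'}")
    print(f"project urls:  {info.get('project_urls') or '(none)'}")
    print(f"release count: {len(releases)}")
    if latest:
        age_days = (datetime.now(timezone.utc) - latest).days
        print(f"last release:  {latest.date()} ({age_days} days ago)")


if __name__ == "__main__":
    inspect_package(sys.argv[1] if len(sys.argv) > 1 else "requests")
```

A package with no linked repository, no releases until very recently, and no community engagement is exactly the profile a hallucination-squatting attacker would produce, and is worth investigating before it ever reaches a build.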
