And make it sound alarming

Open source project curl is sick of users submitting “AI slop” vulnerabilities

"One way you can tell is it's always such a nice report," founder tells Ars.

Kevin Purdy
A sysop knight defends his server kingdom from the onslaught of the AI hordes
Credit: Aurich Lawson | Getty Images

"A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time," wrote Daniel Stenberg, original author and lead of the curl project, on LinkedIn this week.

Curl (cURL in some realms), which turned 25 years old in 2023, is an essential command-line tool and library for interacting with Internet resources. The open source project receives bug reports and security issues through many channels, including HackerOne, a reporting service that helps companies manage vulnerability reporting and bug bounties. HackerOne has fervently taken to AI tools in recent years. "One platform, dual force: Human minds + AI power," the firm's home page reads.
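
As a quick illustration of the library side of the project (a minimal sketch, not taken from curl's documentation or any report discussed here), a basic libcurl transfer in C looks roughly like this; the URL is a placeholder:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    /* Create an "easy" handle, the basic unit of a libcurl transfer */
    CURL *curl = curl_easy_init();
    if (curl) {
        /* Placeholder URL; any HTTP(S) resource works */
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

        /* Perform the transfer; the response body goes to stdout by default */
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "curl_easy_perform() failed: %s\n",
                    curl_easy_strerror(res));

        curl_easy_cleanup(curl);
    }
    return 0;
}
```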

Stenberg, saying that he's "had it" and is "putting my foot down on this craziness," suggested that every suspected AI-generated HackerOne report will have its reporter asked to verify if they used AI to find the problem or generate the submission. If a report is deemed "AI slop," the reporter will be banned. "We still have not seen a single valid security report done with AI help," Stenberg wrote.

Answering unasked questions

One report from May 4, which Stenberg wrote had "pushed me over the limit," suggested a "novel exploit leveraging stream dependency cycles in the HTTP/3 protocol stack." Stream dependency mishandling, in which one part of a program waits on the output of another, can lead to malicious data injection, race conditions, crashes, and other issues. The report in question suggests this could leave curl, which is HTTP/3-capable, vulnerable to exploits up to and including remote code execution.

But as curl staff pointed out, the "malicious server setup" patch file submitted did not apply to the latest versions of the Python tool in question. Asked about this, the original submitter responded in a strangely prompt-like fashion, answering questions curl staff had never asked ("What is a Cyclic Dependency?") and including what seem like basic instructions on how to use the git tool to apply a new patch. The submitter also failed to provide the requested new patch file, cited functions that do not exist in the underlying libraries, and suggested hardening tactics for utilities other than curl. Curl coders eventually closed the report but made it public to serve as an example.

Alex Rice, co-founder, CTO, and CISO of HackerOne, said in a statement to Ars that reports containing "hallucinated vulnerabilities, vague or incorrect technical content, or other forms of low-effort noise" are treated as spam and subject to enforcement.

"We believe AI, when used responsibly, can be a powerful tool for researchers, enhancing productivity, scale, and impact," Rice said. "Innovation in this space is accelerating, and we support researchers who use AI to improve the quality and efficiency of their work. Overall, we're seeing an aggregate increase in report quality as AI helps researchers bring clarity to their work, especially where English is a second language."

"The key is ensuring that AI enhances the report rather than introducing noise," Rice said. "Our goal is to encourage innovation that drives better security outcomes, while holding all submissions to the same high standards."

“More tools to strike down this behavior”

In an interview with Ars, Stenberg said he was glad his post—which generated 200 comments and nearly 400 reposts as of Wednesday morning—was getting around. "I'm super happy that the issue [is getting] attention so that possibly we can do something about it [and] educate the audience that this is the state of things," Stenberg said. "LLMs cannot find security problems, at least not like they are being used here."

This week has seen four such misguided, obviously AI-generated vulnerability reports seemingly seeking either reputation or bug bounty funds, Stenberg said. "One way you can tell is it's always such a nice report. Friendly phrased, perfect English, polite, with nice bullet-points … an ordinary human never does it like that in their first writing," he said.

Some AI reports are easier to spot than others. One reporter accidentally pasted their prompt into the report, Stenberg said, "and he ended it with, 'and make it sound alarming.'"

Stenberg said he had "talked to [HackerOne] before about this" and has reached out to the service this week. "I would like them to do something, something stronger, to act on this. I would like help from them to make the infrastructure around [AI tools] better and give us more tools to strike down this behavior," he said.

In the comments of his post, Stenberg, trading comments with Tobias Heldt of open source security firm XOR, suggested that bug bounty programs could potentially use "existing networks and infrastructure." Security reporters paying a bond to have a report reviewed "could be one way to filter signals and reduce noise," Heldt said. Elsewhere, Stenberg said that while AI reports are "not drowning us, [the] trend is not looking good."

Stenberg has previously blogged on his own site about AI-generated vulnerability reports, with more details on what they look like and what they get wrong. Seth Larson, security developer-in-residence at the Python Software Foundation, added to Stenberg's findings with his own examples and suggested actions, as noted by The Register.

"If this is happening to a handful of projects that I have visibility for, then I suspect that this is happening on a large scale to open source projects," Larson wrote in December. "This is a very concerning trend."

This post was updated at 3:45 p.m. to include comment from HackerOne.

Kevin Purdy Senior Technology Reporter
Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.