Curl Fights a Flood of AI-Generated Bug Reports From HackerOne

Earlier this month, Curl maintainer Daniel Stenberg complained on LinkedIn about a flood of “AI slop” bug reports that had been coming in. “That’s it. I’ve had it,” he wrote. “I’m putting my foot down on this craziness.”
The project was “effectively being DDoSed,” he wrote. And the culprits were volunteers from the bug bounty site HackerOne.
Stenberg’s LinkedIn post drew over 250 comments — and over 600 reposts. The incident kicked off a larger discussion around the web about the AI-powered era we’ve stumbled into — and how exactly we should be responding to AI-assisted humans.
So while Stenberg still appreciates the crowd-sourced security reports from HackerOne, he’s made it clear that he hopes to see some changes going forward — both small changes and big ones.
Outcomes, Not Origins
Two things happened last week. On Friday, May 16, HackerOne Co-founder/CTO/CISO Alex Rice reiterated the company’s stance that HackerOne’s Code of Conduct “does not prohibit the use of AI to assist in writing reports,” though it does prohibit spam. “Reports that contain hallucinated vulnerabilities, vague or incorrect technical content, or other forms of low-effort noise will be strictly treated as spam and result in enforcement actions under our Code of Conduct.”
In response to questions from The New Stack, Rice stressed that reports need to be “clear, accurate, and actionable” — but that HackerOne remains officially focused on outcomes, not origins. “We believe AI, when used responsibly, can be a powerful tool for researchers, enhancing productivity, scale, and impact. Innovation in this space is accelerating, and we support researchers who use AI to improve the quality and efficiency of their work.
“On the question of quantity, we’re actually seeing an aggregate increase in report quality as AI helps researchers bring clarity to their work,” Rice added. “While we have not surfaced widespread evidence of AI-generated hallucinations, these reports can be frustratingly difficult to validate and therefore is a concern we are actively addressing.”
Stenberg isn’t opposed to AI that identifies vulnerabilities for Curl. “I am convinced there will pop up tools using AI for this purpose that actually work (better) in the future, at least part of the time,” he wrote in early 2024, “so I cannot and will not say that AI for finding security problems is necessarily always a bad idea.
“I do, however, suspect that if you just add an ever-so-tiny (intelligent) human check to the mix, the use and outcome of any such tools will become so much better. I suspect that will be true for a long time into the future as well.”
This month, Stenberg said on LinkedIn that “We still have not seen a single valid security report done with AI help.” But it’s possible AI-generated reports have helped other projects. On LinkedIn, Stenberg had identified the “AI slop” report that “really pushed me over the limit.” But software engineer Randy Clinton pointed out that the same reporter seemed to have already earned over $1,900 in bug bounties from other HackerOne participants, including Adobe and Starbucks.
In his comments to The New Stack, Rice added his perspective. “Overall, we’re seeing an aggregate increase in report quality as AI helps researchers bring clarity to their work, especially where English is a second language… The key is ensuring that AI enhances the report rather than introducing noise.”
In short, he said, the goal of HackerOne “is to encourage innovation that drives better security outcomes, while holding all submissions to the same high standards.”
Stenberg Takes Action
Meanwhile, on Thursday, May 15, Stenberg announced Curl’s new guidelines for AI usage by contributors, making clear exactly what he’d like to see in the future. “If you asked an AI tool to find problems in curl, you must make sure to reveal this fact in your report.” (And answering “yes” guarantees a “stream” of follow-up questions to prove there’s some actual human intelligence on the other side of the report, Stenberg warned on Mastodon.)
By January 2024, Curl had already paid out $70,000 in bug bounties (for 64 confirmed security problems), Stenberg wrote in a blog post. But a “crap” security report means “we missed out time on fixing bugs or developing a new feature. Not to mention how it drains you on energy having to deal with rubbish.”
So this month’s new AI usage guidelines warn contributors that “You must also double-check the findings carefully before reporting them to us to validate that the issues are indeed existing and working exactly as the AI says. AI-based tools frequently generate inaccurate or fabricated results.” Cautioning that AI-detected bug reports can be too wordy (even before their all-too-common “fabricated details”), the guidelines tell users to first verify that the issue is real, and then “write the report yourself and explain the problem as you have learned it.
“This makes sure the AI-generated inaccuracies and invented issues are filtered out early before they waste more people’s time.”
It’s a sincere attempt to explain how “AI slop” interferes with moving the project forward, since Curl must take security reports seriously and investigate them promptly. “This work is both time and energy consuming and pulls us away from doing other meaningful work.
“Fake and otherwise made-up security problems effectively prevent us from doing real project work and make us waste time and resources.
“We ban users immediately who submit made-up fake reports to the project.”
Stenberg also seems to wish there were a financial penalty; his LinkedIn post adds, “If we could, we would charge them for this waste of our time.”
“Make It Sound Alarming”
Earlier this month, Stenberg told Ars Technica he was “super happy” the issue was getting attention “so that possibly we can do something about it…” He sees it as a chance to teach the larger community that “LLMs [large language models] cannot find security problems, at least not like they are being used here.” Stenberg also told the site that in one week, he’d received four obviously AI-generated vulnerability reports — and that they’re easy to spot because they’re friendly and polite, with perfect English and nice bullet points. “An ordinary human never does it like that in their first writing…”
One user actually left their prompt in the bug report, Stenberg remembered — “and he ended it with, ‘and make it sound alarming.'”
And early in May, one report claimed it found evidence of memory corruption, with an analysis showing stack recursion from the function:
ngtcp2_http3_handle_priority_frame.
There was just one problem.
“There is no function named like this…” Stenberg found himself posting. He later agreed with embedded systems consultant Jean-Luc Aufranc, who had summarized the situation on LinkedIn: “[T]he AI created a random function… that did not exist in the code, and a security bug to go along.”
In a later comment, Stenberg said it’s a growing phenomenon. His 2024 blog post had highlighted a suspected AI-generated report that “mixes and matches facts and details from old security issues, creating and making up something new that has no connection with reality…” And on LinkedIn this month, he added that “These kinds of reports did not exist at all a few years ago, and the rate seems to be increasing.”
So while the AI-generated reports are “still not drowning us… the trend is not looking good.”
“Something Stronger”
Stenberg also told Ars Technica that he’d like to see HackerOne do “something, something stronger, to act on this.” He’s even willing to help them build the necessary infrastructure to create “more tools to strike down this behavior.”
But when a security tester on LinkedIn suggested it was “time to throw out the bug bounty crowd-sourcing model, and hire full-time dedicated staff,” Stenberg disagreed.
“Our bug bounty has paid $86,000 for 78 confirmed vulnerabilities. No professional would come even close to that cost/performance ratio.” He also noted that Curl has managed to fund exactly one full-time employee. “I would love to hire more people but we struggle to get companies to support us.”
Tobias Heldt, co-founder of cybersecurity company XOR, wondered if the bug-reporting process needed some friction, suggesting researchers might have to “stake a small deposit on their submission” that would be refunded only if the report cleared a basic threshold for informativeness. Later, Heldt referred to the idea as “Security Report Bonds,” arguing that “Without it, soon the majority of reports will be from bots brute-forcing bounty programs.”
Stenberg agreed “that could be a workable model,” suggesting maybe companies like HackerOne should be the ones to take the lead.

Python Software Foundation’s Seth Larson (PyCon 2025).
Jim Clover, director of IT/service company Varadius Ltd, came up with a novel solution. He’d vetted the “AI slop” by asking ChatGPT o3, which correctly responded it was “technically unsound… cites functions that don’t exist.” Clover’s conclusion? “Could you stick these through AI (oh the irony) as a BS checker to save you guys time?”
But Python’s security developer-in-residence Seth Larson had already reached the opposite conclusion, sharing a blog post he’d written in March titled “Don’t bring slop to a slop fight.”
“[U]sing AI to detect and filter AI content just means that there’ll be even more generative AI in use, not less. This isn’t the signal we want to send to the venture capitalists who are deciding whether to offer these companies more investment money.”
Responses and Reactions
It’s not clear what happens next. “People who are looking to abuse the system will continue to do so regardless of a checkbox being present,” senior full-stack developer Damian Mulligan posted on LinkedIn.
Databricks software engineer Hasnain Lakhani wondered what would happen when people simply lied about whether they’d used AI. (“Seems like an arms race,” he suggested, with projects needing tools to screen for AI.)
There was an ominous warning from Heldt. “AI slop is overwhelming maintainers *today* and it won’t stop at Curl but only starts there.”
But maybe an effective effort to deal with the problem has already begun. When someone on Mastodon asked Stenberg if it’s okay to repurpose Curl’s new guidelines about AI contributions for another project, Stenberg had a ready answer.
“Absolutely!”