So-called AI slop, meaning the low-quality images, videos, and text generated by large language models, has taken over the internet in the last couple of years, polluting websites, social media platforms, at least one newspaper, and even real-world events.
The world of cybersecurity is not immune to this problem. Over the last year, people across the industry have raised concerns about AI slop bug bounty reports, meaning reports that claim to have found vulnerabilities that do not actually exist, because they were created by a large language model that simply hallucinated the vulnerability.
“People are receiving reports that sound reasonable, they look technically correct. And then you end up digging into them, trying to figure out, ‘oh no, where is this vulnerability?’,” Vlad Ionescu, the co-founder and CTO of RunSybil, a startup that develops AI-powered bug hunters, told TechCrunch.
“It turns out it was just a hallucination all along,” he said.
Ionescu, who previously worked on Meta’s red team hacking the company from the inside, explained that one of the issues is that LLMs are designed to be helpful and give positive responses. “If you ask it for a report, it’s going to give you a report. And then people copy and paste these into the bug bounty platforms, overwhelming the platforms themselves and overwhelming the customers, and you get into this frustrating situation,” Ionescu said.
“That’s the problem people are running into: we’re getting a lot of stuff that looks like gold, but it’s actually just crap,” Ionescu said.
Just in the last year, there have been real-world examples of this. Harry Sintonen, a security researcher, revealed that the open source project Curl had received a fake report. “The attacker miscalculated badly,” Sintonen wrote in a post on Mastodon. “Curl can smell AI slop from miles away.”
In response to Sintonen’s post, Benjamin Piouffle of Open Collective, a tech platform for nonprofits, said that they have the same problem: their inbox is “flooded with AI garbage.”
An open source developer who maintains the CycloneDX project on GitHub pulled their bug bounty down entirely earlier this year after receiving “almost entirely AI slop reports.”
The leading bug bounty platforms, which essentially act as intermediaries between bug bounty hackers and the companies willing to pay and reward them for finding flaws in their products and software, are also seeing a spike in AI-generated reports.
Contact us
Do you have more information about how AI is affecting the cybersecurity industry? We’d love to hear from you. From a non-work device and network, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram and Keybase @lorenzofb, or by email.
Michiel Prins, co-founder and senior director of product management at HackerOne, told TechCrunch that the company has encountered some AI slop.
“We’ve also seen a rise in false positives, vulnerabilities that look real but are generated by LLMs and lack real-world impact,” Prins said. “These low-signal submissions can create noise that undermines the efficiency of security programs.”
Prins added that reports containing “hallucinated vulnerabilities, vague technical content, or other forms of low-effort noise” are treated as spam.
Casey Ellis, the founder of Bugcrowd, said there are definitely researchers using AI to find bugs and write the reports they then submit to the company. Ellis said overall submissions have increased by 500 a week.
“AI is widely used in most submissions, but it hasn’t yet caused a significant spike in low-quality ‘slop’ reports,” Ellis told TechCrunch. “This will probably escalate in the future, but it’s not here yet.”
Ellis said the Bugcrowd team that analyzes submissions reviews reports by hand using established playbooks and workflows, as well as machine learning and “AI assistance.”
To find out whether other companies, including those that run their own bug bounty programs, are also seeing an increase in invalid reports or reports containing nonexistent vulnerabilities hallucinated by LLMs, TechCrunch contacted Google, Meta, Microsoft, and Mozilla.
Damiano DeMonte, a spokesperson for Mozilla, which develops the Firefox browser, said the company “has not seen a substantial increase in invalid or low-quality bug reports that would appear to be AI-generated.”
Mozilla employees who review bug reports for Firefox do not use AI to filter reports, as “it would likely be difficult to do so without the risk of rejecting a legitimate bug report,” DeMonte said in an email.
Microsoft and Meta, companies that are betting heavily on AI, declined to comment. Google did not respond to a request for comment.
Ionescu predicts that one of the solutions to the problem of rising AI slop will be to keep investing in AI-powered systems that can at least perform a preliminary review and filter submissions for accuracy.
In fact, on Tuesday, HackerOne launched Hai Triage, a new triaging system that combines humans and AI. According to HackerOne, the new system leverages “AI security agents to cut through noise, flag duplicates, and prioritize real threats.” Human analysts then step in to validate the bug reports and escalate as needed.
As hackers increasingly use LLMs and companies rely on AI to triage those reports, it remains to be seen which of the two AIs will prevail.
