Linux Kernel Czar Greg Kroah-Hartman: AI Bug Reports Went from 'Junk' to Legit Overnight
Linux kernel maintainer Greg Kroah-Hartman says AI-generated security reports for the Linux kernel hit a dramatic quality inflection about a month ago, going from "AI slop" to genuinely useful reports that are "real and good."
The Shift
Before
"Months ago, we were getting what we called 'AI slop' — AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us."
After
"Something happened a month ago, and the world switched. Now we have real reports... All open source projects have real reports that are made with AI, but they're good, and they're real."
What Nobody Can Explain
The most striking aspect: nobody knows what changed.
- Was it better local tools? Unknown.
- Did people figure something out? Unknown.
- Different companies, different tools — all improved simultaneously.
- "There must have been some inflection point somewhere with the tools."
Experimental Results
Kroah-Hartman tested AI-generated patches directly:
- Prompt: "Give me this" → the AI produced 60 bug reports with proposed fixes
- ~1/3 of the reports were wrong, but still pointed at real problems
- ~2/3 of the patches were correct, though they needed cleanup, better changelogs, and integration work
"The tools are good. We can't ignore this stuff."
Impact on Open Source
Linux Kernel
"For the kernel, we can handle it" — large distributed team, capacity to review
Smaller Projects
"We need help on this for all the open source projects" — smaller teams may be overwhelmed by the sudden flood of reports
Infrastructure
- Sashiko: Google-developed code review tool donated to Linux Foundation
- AI increasingly used in review, not just generation
- "co-develop" tag emerging for AI-assisted patches
Source: The Register (KubeCon Europe interview)