Anthropic announced last week that Claude found over 500 vulnerabilities in production open-source codebases. Their framing was optimistic: AI as a force multiplier for security teams.
The reality is more complicated.
Of those 500+ vulnerabilities, only two or three have actually been fixed. That’s according to Guy Azari, a former security researcher at Microsoft and Palo Alto Networks. The National Vulnerability Database already had a backlog of 30,000 CVE entries awaiting analysis in 2025. Nearly two-thirds of reported open-source vulnerabilities lacked a severity score.
And now AI is making the discovery rate 100 to 200 times faster.
The curl project — one of the most widely used pieces of software on Earth — recently shut down its bug bounty programme entirely. Not because they’d fixed everything, but because they were drowning in low-quality AI-generated reports and couldn’t separate the signal from the noise.
This is the pattern I keep seeing with AI: it creates the problem and the solution simultaneously, but the problem arrives first.
AI can find vulnerabilities at inhuman speed. It can probably fix them at inhuman speed too. But right now, the discovery capability has outpaced the remediation infrastructure. We’re uncovering holes faster than anyone can understand them, let alone patch them.
For businesses — particularly PE-backed ones running on legacy tech stacks — this should be front of mind. Every software system just became more exposed, not because the vulnerabilities are new, but because the tools to find them got dramatically better. The attackers have access to the same AI models as the defenders.
The companies that deploy defensive AI early will have an advantage. The ones that wait will find themselves dealing with vulnerabilities they didn’t know existed, discovered by tools they didn’t know were being pointed at them.
The security conversation has changed. “Are we patched?” used to be the question. Now it’s “can we process what we’re finding fast enough to matter?”