Developers don’t trust AI-generated code

AI is predicted to generate two-thirds of code by 2027, but 96 percent of developers do not trust that it’s functionally correct, according to a new study from Sonar.

This lack of confidence means developers are forced to spend more time reviewing code. Review work remains fixed at 10 hours per week regardless of AI use, but there has been a significant shift toward reviewing AI-generated code to ensure it is secure, reliable, and maintainable.

Less-experienced developers report 40 percent productivity gains from AI, but they are also the ones struggling the most: 66 percent say AI code often ‘looks correct but isn't reliable’, and 40 percent say reviewing AI code takes more effort than reviewing human-written code. They are getting speed gains, but at the cost of harder, more time-consuming review work.

Shadow AI is a problem too: 35 percent of developers use AI via personal accounts, creating fragmented toolsets that make standardized review difficult. Without knowing whether sensitive data was exposed through public models, security teams face an impossible review task on top of the existing review bottleneck.

While smaller businesses report 39 percent productivity gains, 65 percent are spending more time correcting AI code compared to enterprises -- demonstrating that without verification guardrails, speed gains are ultimately swallowed up by the need for rework.

Despite all of this, there are some areas where AI proves particularly effective: writing documentation (74 percent effective), explaining or understanding existing code (66 percent), vibe coding/green-field prototyping (62 percent), and generating tests (59 percent).

The report’s authors note, “Developers are pragmatic: they’ve fully embraced AI as a daily assistant, using it to write documentation and generate tests. But they also know its limits, showing less confidence in its ability to handle complex, existing code. This gap between high usage and selective effectiveness isn't just about features; it's about confidence. When the stakes are high, how much do developers really trust the code AI generates? This brings us to the core of the issue: developer trust.”

You can get the full report on the Sonar site.

Have you tried using AI to generate code? How did you find it? Let us know in the comments.

Image credit: Syda Productions/Dreamstime.com

