
AI Tools Slow Experienced Coders Despite Perceived Benefits

A rigorous study by METR found that experienced open-source developers using AI tools like Cursor Pro with Claude 3.5/3.7 Sonnet took 19% longer to complete coding tasks compared to working without AI assistance. The randomized controlled trial involved 16 veteran developers working on 246 real-world tasks from their own repositories. Surprisingly, developers believed AI made them 20% faster, revealing a significant disconnect between perception and reality.

A groundbreaking study has challenged the prevailing narrative about AI coding assistants boosting developer productivity across the board.

Model Evaluation and Threat Research (METR) conducted a randomized controlled trial to measure how early-2025 AI tools affect the productivity of experienced open-source developers working on their own repositories. Surprisingly, they found that when developers used AI tools, they took 19% longer than without—AI actually made them slower.

The research tracked 16 seasoned open-source developers as they completed 246 real-world coding tasks on mature repositories averaging over one million lines of code and 22,000+ GitHub stars. Tasks were randomly assigned to either allow or prohibit AI tool usage, with developers primarily using Cursor Pro with Claude 3.5 and 3.7 Sonnet during the February-June 2025 study period.

The results surprised everyone, including the study participants themselves. Even after completing their tasks, developers estimated that AI had increased their productivity by 20%, when the data clearly showed a 19% slowdown. This highlights a critical insight: self-reports of AI-driven speedups can be not just imprecise but directionally wrong.
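To make the headline figure concrete, a slowdown percentage like METR's comes from comparing mean completion times between the two randomized conditions. The sketch below uses entirely hypothetical task times (not METR's data) chosen so the arithmetic lands near the reported figure:

```python
# Hypothetical per-task completion times in hours, for illustration only.
# These are NOT METR's measurements; they are made-up numbers whose means
# happen to produce roughly the slowdown the study reported.
ai_allowed = [2.4, 3.1, 1.9, 2.8]      # tasks where AI tools were allowed
ai_prohibited = [2.0, 2.6, 1.6, 2.35]  # tasks where AI tools were prohibited

mean_ai = sum(ai_allowed) / len(ai_allowed)          # 2.55 hours
mean_no_ai = sum(ai_prohibited) / len(ai_prohibited)  # ~2.14 hours

# Relative change in completion time: positive means AI-allowed
# tasks took longer on average.
observed_change = (mean_ai - mean_no_ai) / mean_no_ai
print(f"Observed slowdown: {observed_change:+.0%}")  # → +19%
```

The perception gap is then simply this measured value set against the developers' own estimate of a 20% speedup: the two numbers have opposite signs.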

METR researchers identified several potential reasons for the slowdown. Developers spent a substantial share of their time writing prompts, waiting on AI responses, and reviewing generated output rather than writing code themselves. The study raises important questions about the supposed universal productivity gains promised by AI coding tools in 2025.

However, this doesn't mean AI tools are broadly ineffective. METR notes that in unfamiliar codebases, early-stage projects, or for less experienced programmers, AI could still accelerate progress. The researchers are planning future studies to explore those cases. They also stress that this was a snapshot of early 2025 tooling, and faster models, better integration, or improved prompting practices could shift the equation.

For teams deploying AI assistants, the message is clear: AI coding tools continue to evolve, but in their current form, they don't guarantee speed gains—especially for seasoned engineers working on code they already understand. Organizations should test before they trust, measure impact in their actual environment, and not rely on perceived speed alone.
