The "AI makes engineers 10x faster" pitch is breaking.
A peer-reviewed RCT just showed experienced devs were 19% slower with AI tools.
They thought they were 20% faster.
I've written a book on AI-assisted coding and I train engineering teams on this almost every week. I'm pro-AI in the workflow. I'm not pro-lying about it.
Here's what the data actually says when you stack the studies:
→ Juniors get a real boost. AI is a great tutor.
→ Seniors slow down on their own codebases.
→ Core maintainers review more code after AI adoption, while their own output drops.
→ Team-level delivery stability drops even when individuals feel faster.
The mechanism is simple: AI-generated code is fast to write and slow to review. The writer feels the speed; the reviewer eats the cost. Inside a team, those are usually different people.
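The asymmetry is easy to put into numbers. A toy model with made-up figures (the 30 and 45 minutes are illustrative assumptions, not data from the studies above):

```python
# Toy model of the write/review asymmetry. All numbers are hypothetical.
write_time_saved_min = 30   # the author drafts a change faster with AI
extra_review_min = 45       # the reviewer spends longer on unfamiliar AI output

# What the writer perceives: a pure win.
author_net_min = write_time_saved_min

# What the team actually gets: the writer's savings minus the reviewer's cost.
team_net_min = write_time_saved_min - extra_review_min

print(author_net_min)  # 30  -> "I'm faster!"
print(team_net_min)    # -15 -> the team as a whole lost time
```

The individual signal and the team signal have opposite signs, which is exactly why self-reported speedups can be positive while measured delivery gets worse.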
This is why "we shipped 90% more PRs this quarter" means nothing on its own. PRs are not throughput. Merged-and-stable code in production is throughput.
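One way to make that distinction concrete: count only merged work that survives a bake period without a revert or an incident. A minimal sketch, assuming hypothetical `PR` fields and a 14-day bake window (none of this is a standard; adapt the fields to whatever your tracker records):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class PR:
    merged_at: Optional[datetime]   # None = never merged
    reverted: bool = False          # hypothetical field: was it rolled back?
    caused_incident: bool = False   # hypothetical field: linked to an incident?

def raw_pr_count(prs):
    # The vanity metric: every merged PR counts, stable or not.
    return sum(1 for pr in prs if pr.merged_at is not None)

def stable_throughput(prs, now, bake_days=14):
    # Count only PRs that merged, are old enough to judge,
    # and were neither reverted nor tied to an incident.
    cutoff = now - timedelta(days=bake_days)
    return sum(
        1 for pr in prs
        if pr.merged_at is not None
        and pr.merged_at <= cutoff
        and not pr.reverted
        and not pr.caused_incident
    )
```

With this lens, a quarter of "90% more PRs" can still show flat or falling stable throughput, because reverts and too-fresh merges drop out of the count.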
The asymmetry compounds:
→ The junior ships fast.
→ The senior gets slower reviewing AI-generated work they didn't write.
→ The senior's own original output drops because review is now their full-time job.
→ Six months later, the team thinks AI is working. The senior is burned out, and the codebase is full of debt they didn't choose.
So no, the answer is not "stop using AI". The answer is to measure team throughput, not individual velocity; pay seniors for review quality, not just lines shipped; and treat AI-generated code as a draft that costs review time, not a finished asset.
The teams that figure this out in 2026 own 2027. The rest will spend 2027 paying down debt nobody can explain.
Have a great week!
Aymen