Yesterday I ruminated on why – if the claims about “AI” productivity gains in programming are to be believed – we see no evidence of significant numbers of “AI”-generated or “AI”-assisted software making it out into the real world (e.g., on app stores).
I get a sense that there may be some kind of “Great Filter” that prevents projects from evolving to that advanced stage. And I have a feeling I might know what it is.
I’m imagining software development as an iterative, goal-seeking algorithm.
What would its time complexity look like? I reckon the factors would be:
- Batch size – how much changes in each iteration
- Feedback “tightness” – how much uncertainty is reduced in each iteration
- Cost of change – how able are we to act on that feedback?
I suspect “coding”, as a factor, would shrink to nothing at scale.
Basically, batch size, feedback loops and cost of change are doing the heavy lifting.
I could go even further. Maybe the cost of change, in the limit, becomes simply a function of how long it takes to understand the code and how long it takes to test it (and I’d include things like code review in testing).
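To make the shape of that visible, here’s a toy model – entirely my own sketch, with invented numbers and a made-up `lead_time` function, not anybody’s real metric:

```python
# A toy model of development as an iterative, goal-seeking loop.
# Every number here is invented for illustration; the point is the
# shape of the result, not the values.

def lead_time(iterations, batch_size, coding_rate, understand_cost, test_cost):
    """Total time to reach the goal, summed over iterations.

    Per iteration: time spent coding the change, plus the cost of
    change, i.e. understanding the code and testing it (code review
    included), both of which scale with the size of the batch.
    """
    per_iteration = (
        batch_size / coding_rate        # coding the change
        + batch_size * understand_cost  # understanding it
        + batch_size * test_cost        # testing/reviewing it
    )
    return iterations * per_iteration

# Hand-coded baseline vs. a 10x faster code generator, same discipline:
baseline = lead_time(iterations=50, batch_size=1,
                     coding_rate=1, understand_cost=2, test_cost=2)
assisted = lead_time(iterations=50, batch_size=1,
                     coding_rate=10, understand_cost=2, test_cost=2)
print(baseline, assisted, baseline / assisted)
# 250.0 vs 205.0: a 10x coding speed-up buys roughly 1.2x overall
```

Make the coding ten times faster and the lead time barely moves, because understanding and testing were doing most of the work all along.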
Far from helping, attaching a code-generating firehose to development has already proven to work against us in these respects if we loosen our grip on batch sizes to gain the initial benefits.
And if we don’t loosen our grip – if we keep the “AI” on a tight leash – coding, as a factor, still shrinks to nothing in the Big O. Even the most high-performing teams see modest improvements at best in lead times. Most teams slow down.
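That’s essentially Amdahl’s law: if coding is, say, 20% of each iteration’s cost (an invented figure – substitute your own), then even an infinitely fast code generator caps the overall speed-up at 1/(1 − 0.2) = 1.25×, and a merely very fast one does worse.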
All this might explain why the productivity gains of “AI” coding assistants vanish at scale, and why we see no evidence of significant numbers of “AI”-assisted projects making it out of the proverbial shed.
When user experience, reliability, security and maintainability matter, we’re forced to drink from the firehose one small mouthful at a time, taking deep breaths in between so as not to let it overwhelm us. When you’re drinking from a firehose, the limit isn’t the firehose.
For sure, teams are using this technology on code bases where those things matter, but we’re already seeing, from tech companies who’ve boasted publicly about how much of their code is “AI”-generated, what the downstream consequences can be.
So, for real productivity gains, that constrains “AI” coding assistants to projects where those things don’t matter anywhere near as much. Personal projects, prototypes, internal tools, one-offs, etc. I don’t think anybody disputes that this technology is great for those kinds of things. But they don’t often make it out of the shed.
At least, I very much hope they don’t.
I’ve done a lot of research and experimentation to try to establish how to get better results using LLMs, but I can’t hand-on-heart promise that those techniques will do much more than mitigate harms. They’re very much focused on batch sizes, feedback loops and cost of change – the stuff we already *know* works, “AI” or not.
I have reasons to suspect that teams who are showing modest gains using “AI” have actually tightened up their feedback loops to adapt to the firehose, which could be thought of as a kind of stress test for development processes. It’s entirely possible that this is what’s giving them those small gains, and not the “AI” at all.