Many have compared the advancements in LLMs for software development to the improvements in abstraction that came with better programming languages.
But there’s a key difference. While the end product of software development, a binary, has never drastically changed its form (though its distribution has radically changed), the intermediate product did with programming language transitions.
The intermediate product is the source code itself. The intermediate goal of a software development project is to produce robust, maintainable source code; the end product is the binary. New programming languages changed the intermediate product. When a team moved from assembly to C to Java, it drastically changed its intermediate product. That came with new tools built around different language ecosystems, and with different programming paradigms and philosophies, which in turn brought new ways of refactoring, thinking about software architecture, and working together.
LLMs don’t do that in the same way. The intermediate product of LLMs is still the Java or C or Rust or Python that came before them. English is not the intermediate product, as much as some may say it is. You don’t go prompt -> binary. You still go prompt -> source code -> changes to source code (from hand editing or further prompts) -> binary. It’s a distinction that matters.
Until LLMs are fully autonomous, with virtually no human guidance or oversight, source code in existing languages will continue to be the intermediate product. And that means many of the ways we work together will continue to be the same (how we architect source code, store and review it, collaborate on it, refactor it, etc.) in a way that wasn’t true of prior transitions. These processes are just supercharged and easier, because the LLM is supporting us or doing much of the work for us.
As an aside, I think there may be an increased reason to use dynamic, interpreted languages for the intermediate product. I suspect it will become mainstream in future LLM programming systems to make live changes to a running interpreted program based on prompts. The edit-run-refresh cycle will be reduced to zero. That will be vibe coding to the max. Perhaps this is already more mainstream than I’m aware.
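To make that concrete, here’s a minimal sketch of what I mean by prompt-driven live patching, in Python. The llm_generate() function is a hypothetical stand-in for a real model call (hard-coded here so the sketch runs on its own); the point is that exec() swaps in the new definition while the program keeps running, with no restart.

```python
import threading
import time

def greet(name):
    return f"Hello, {name}"

def llm_generate(prompt):
    # Hypothetical stand-in for a model call: returns new source for the
    # function described by the prompt. Hard-coded so the sketch is runnable.
    return 'def greet(name):\n    return f"Howdy, {name}!"\n'

def patch_loop():
    # The prompt-driven side: ask the "model" for new source and exec it
    # into module globals. The running loop below picks up the new
    # definition on its next call, with no restart.
    time.sleep(3)
    new_source = llm_generate("make the greeting friendlier")
    exec(new_source, globals())

if __name__ == "__main__":
    threading.Thread(target=patch_loop, daemon=True).start()
    # The "running program": calls greet() once a second and prints
    # whatever the current definition of greet happens to be.
    for _ in range(6):
        print(greet("world"))
        time.sleep(1)
```

This is just what a REPL or hot-reload setup already does; an LLM-based system would be generating and applying those patches from prompts instead of from a person typing at the console.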