Nov 3, 2025
2025 is the year I started using AI to write code in earnest. Still only on side projects: my main project has — for better or worse — banned the use of it. In early 2025, I started by accepting edits with the Tab key, went through the intricate prompting phase, and now I feel like I’m just conversing with my code by way of an agent.
But there is a catch, and it goes to the heart of what humans should be focusing on: the conceptual bottleneck.
Prompt engineering is fading into the background
In early 2025, it felt like getting the AI to do what you wanted within a codebase took real skill: engineering intricate prompts and providing the right context. Getting this wrong would result in nonsensical edits. For example, it really helped to explicitly tell the AI to double-check that delimiters in the code were closed.
Now, working with agents feels effortless: you express what you have in mind without worrying about context or precise instructions. Agents seem good at figuring out their own context by searching and reading snippets of code; their edits are also small, and their work iterative. The conversation you are having with the agent adds another iterative dimension (the agent iterates on each task, and you iterate with the agent toward completion of the current project).
I am well aware that coding agents themselves are the result of intricate engineering. But even in that space, as LLMs get better — thanks to increasing training data from artifacts generated through intricate prompt and agent engineering — and people learn to tame their instinct for overengineering, the result is likely to be agents getting stuff done by planning and iterating from within a standard — albeit sandboxed — environment.
I once thought: if humans are not going to write code anymore, at least there might be room left for expert prompt engineers. But like previous engineering paradigms in AI, it looks like prompt engineering is fading away as well. I now think that only one thing will be left for humans: knowing what one wants. That will take skill and knowledge, but of a kind unrelated to AI wrangling.
As an example: here is a prompt I used to get the AI to translate a TLA+ spec into Rust. This project won third prize at the TLAi+ Challenge. A prompt like that one is also unnecessary by now, as shown by my experience on a more recent project involving a spec and Rust code (but no intricate prompting).
This situation also points to the main advantage of AI: that it can do work without humans having to engineer the structure of the work. For example, you can paste some data in any format — even with formatting errors — into a chat context, add to that a somewhat vague description of the task full of typos, and the AI will just try to make sense of it. This is an advantage over traditional software solutions, which require the engineering of precise data formats and algorithms.
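To make the contrast concrete, here is a minimal sketch of the traditional side: a strict parser that rejects anything outside the format it was engineered for. The Measurement type and the messy input are my own illustrative assumptions, and the sketch assumes the serde and serde_json crates.

```rust
use serde::Deserialize;

// A precisely engineered data format: every field name and type is fixed up front.
#[derive(Debug, Deserialize)]
struct Measurement {
    sensor: String,
    value: f64,
}

fn main() {
    // A human (or an LLM) reads this without trouble; a strict parser does not,
    // because of the trailing comma.
    let messy = r#"{ "sensor": "temp", "value": 21.5, }"#;

    match serde_json::from_str::<Measurement>(messy) {
        Ok(m) => println!("parsed: {:?}", m),
        Err(e) => println!("rejected: {}", e),
    }
}
```

The chat context shrugs off the trailing comma; the parser cannot, because someone had to decide in advance exactly what the input looks like.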
You do have to know what you want, and review the agent’s output, or else you get entropy: spaghetti code is the default outcome of unsupervised agents.
Conceptual bottlenecks
On my last little side project, I ran into something I qualify as a conceptual bottleneck: a problem the AI couldn’t resolve successfully but which was obvious to me at a glance.
I’ll keep the description simple: the project does some computing and offscreen rendering on a background thread, and up to that point, the main thread would only be used to present the offscreen rendered content — there was no user input handling.
So the first user input I wanted to handle was pressing Escape, which should reset the state of the program. I started by giving the AI a high-level description of the feature, and let it write the code without further guidelines. This resulted in a huge mess around the coordination between the main thread, which handles UI events, and the rendering thread — the AI just kept digging itself into a deeper hole.
When I looked at the state of the code, I knew almost instantly what the solution should be: instead of trying to reset the rendering the moment the Escape key press event comes in — which is how the AI had translated my high-level description of the feature — one only had to note the pending reset in a flag local to the main thread, and then propagate it to the rendering thread at the next redraw. The AI was stuck trying to do things according to the architecture that had been set up so far, and was unable to recognize that we had reached a point where adding this feature required evolving the architecture — we had reached a conceptual bottleneck.
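For concreteness, here is a minimal sketch of the shape of that solution, using plain std threads and channels. The RenderCommand type, the render_thread function, and the simulated event stream are illustrative stand-ins for the project’s actual event loop and renderer, which I am not showing here.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Everything the main thread ever sends to the renderer.
enum RenderCommand {
    Redraw { reset: bool },
}

fn render_thread(rx: mpsc::Receiver<RenderCommand>) {
    let mut frame = 0u64;
    for cmd in rx {
        match cmd {
            RenderCommand::Redraw { reset } => {
                if reset {
                    frame = 0; // rendering state is reset here, at a frame boundary
                }
                frame += 1;
                println!("rendered frame {frame}");
            }
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let renderer = thread::spawn(move || render_thread(rx));

    // Main-thread state: the reset is only *noted* when Escape arrives...
    let mut pending_reset = false;

    // Simulated event stream standing in for the real UI event loop.
    let events = ["redraw", "escape", "redraw", "redraw"];
    for event in events {
        match event {
            "escape" => pending_reset = true,
            "redraw" => {
                // ...and only *propagated* at the next redraw.
                tx.send(RenderCommand::Redraw { reset: pending_reset }).unwrap();
                pending_reset = false;
            }
            _ => {}
        }
        thread::sleep(Duration::from_millis(10));
    }

    drop(tx); // close the channel so the render thread exits
    renderer.join().unwrap();
}
```

The point is the shape: the Escape handler touches nothing but a flag local to the main thread, and the hand-off to the rendering thread happens at a single, well-defined point, the redraw.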
Here is how the AI describes the episode in its own words.
What I find interesting is that, once this bottleneck was resolved, the AI was able to immediately bang out another feature building on the evolved architecture — allowing the user to scroll the screen — without any problem.
This is a clear indication that while the AI can write the code, it cannot design software.
What about learning to code?
Various people have expressed to me, in person, the idea that learning to code is a waste of time in light of the progress of AI. I disagree for a simple reason: there is no better way to instruct the AI to code than knowing how to code yourself.
But that doesn’t mean that you need a traditional computer science education. What matters is not so much the coding as the conceptualization of digital systems. And by the way, that was true already ten years ago.
I’m not sure whether people have given up on asking algorithmic job interview questions, but I remember those as a complete waste of time: testing on the trivia of standard algorithms — ‘Implement a hash map’ or ‘Invert a binary tree’ — and treating that as a yardstick for one’s ability to write software.
That culture often produced what I call “optimized spaghetti”: code that resembled standard algorithms, but which conceptually made no sense. It was the proverbial email client architected like a linked list (but with minimal memory allocations!).
People made massive efforts to fit into such a culture, and the irony is that AI made it obsolete: the agent is the world champion of optimized spaghetti.
Instead, good software requires coming up with an architecture that makes sense at each level of abstraction in the light of the peculiarities of the project at hand — resolving the conceptual bottlenecks.
So how should you learn to code in 2025? To answer that question, I’ll tell you how I learned to code back in 2012: by making slight changes and looking at the results. Back then, what I started with wasn’t even coding; it was just HTML with Bootstrap CSS applied to it. But I learned that when I made a slight change, the output shown on the screen after a reload would change, and I took it from there.
Today I would probably not be making manual changes to HTML, but rather doing it by way of an AI-driven conversational interface. But the principle remains: different instructions lead to different results, and yes, you should learn to understand the actual output. Not in order to be able to write it by hand, but to understand how a system works, which is the only way to be able to ask an AI to implement what you want.