Last year, around May, I wrote a draft about coding with AI that I never published. The core idea stuck with me. As the tools evolved, that idea kept proving itself in practice. Eventually, I decided to come back to it, update it, and finally put it out.
The lesson is still counterintuitive, and it still feels true.
The best way to write good code with AI is to not try writing good code at first.
That sounds wrong if you were trained the traditional way. We are taught to keep things clean, keep things modular, optimize early, avoid waste, and get it right on the first pass.
But the tools have changed, and the workflow has changed with them.
Now in 2026, we are no longer just dealing with autocomplete or single-shot code generation. We are working with agents. These models can read an entire repo, plan multi-step changes, jump across files, run commands, write tests, fix their own mistakes, and keep iterating.
Once you accept that shift, a different principle becomes obvious.
Your first pass is not production code. It is more of a sketch.
That is why I sometimes follow this rule when collaborating with AI.
Build broad. Refine later.
As code generation has become cheaper and faster, a new bottleneck has appeared.
The question is no longer “can I implement this?” It is now “can I shape this into something with taste?”
Agents will happily overdeliver. They will add edge cases you did not ask for, invent abstractions you did not earn, and build entire systems when you only wanted a prototype.
If you fight that instinct immediately, you reduce the agent to a mechanical tool. You get correctness, but you lose momentum.
If you let it happen, you get something more valuable than clean code.
You get material.
This was one of the main realizations behind the original May draft, and it has only become more relevant since.
Prompts are not just technical instructions. They are creative direction.
When I prompt an agent like a strict project manager, I get safe, minimal output. It works, but it rarely surprises me.
When I prompt like a designer, the output changes.
Instead of writing something like:
“Build a settings screen.”
I write something closer to:
“Build a settings screen that feels calm and intentional. It should feel like a tool, not a toy. Prefer clarity over cleverness. Keep the microcopy human. Add one small moment of delight that does not get in the way.”
This is not fluff. It is steering.
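If you find yourself steering the same way repeatedly, the direction can be templated. A minimal sketch of the idea; the helper and its field names are my own invention, not any tool's API:

```python
def creative_prompt(task: str, feel: str, non_goals: list[str]) -> str:
    # Combine the outcome, the mood, and explicit non-goals into one
    # prompt, so the creative direction travels with every request.
    lines = [
        f"Build {task}.",
        f"It should feel {feel}.",
        "Avoid: " + "; ".join(non_goals) + ".",
        "Prototype first. Do not optimize. Do not aim for perfect.",
    ]
    return "\n".join(lines)

print(creative_prompt(
    "a settings screen",
    "calm and intentional, like a tool rather than a toy",
    ["clever abstractions", "features I did not ask for"],
))
```

The exact wording matters less than the habit: every request carries the outcome, the mood, and the boundaries, instead of just the task.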
One important update since early 2025 is that newer models require far less prompting to reach strong results. Models like Google’s Gemini 3 Pro, along with recent updates across other foundation models, are much better at inferring intent, structure, and reasonable defaults with lighter guidance.
The trend is clear. Prompting is getting cheaper, not more complex.
But even as prompts get shorter, intent still matters. Mood still matters. You can say less, but what you say still sets the direction.
When I first wrote about this idea, “build broad” mostly meant letting a single agent overbuild and then trimming it down.
Now the workflow is wider.
Modern IDEs and agent tools can spin up multiple approaches in parallel. You can ask for different implementations, architectural takes, or UI interpretations without committing to any of them yet.
This changes the role you play.
Instead of hoping your first attempt is the right one, you create a small field of options. One approach might be fast and messy. Another might be clean but boring. A third might introduce an interaction or structure you would not have considered.
Your job is not to accept everything. Your job is to curate.
That is the real leverage. Not that the agent writes code, but that it generates options fast enough that taste becomes the primary constraint.
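The fan-out itself can be mechanical. A minimal sketch, where `generate` is a stand-in stub for whatever agent or model call you actually use:

```python
from concurrent.futures import ThreadPoolExecutor

def generate(prompt: str) -> str:
    # Stand-in for a real agent or API call; here it just echoes the
    # framing so the shape of the fan-out is visible.
    return f"[implementation guided by: {prompt}]"

framings = [
    "fast and messy, shortest path to something running",
    "clean and conventional, no surprises",
    "one unusual interaction or structure, even if rough",
]

# Fan the same task out under different creative directions,
# then curate: keep what resonates, delete the rest.
with ThreadPoolExecutor() as pool:
    options = list(pool.map(generate, framings))

for option in options:
    print(option)
```

The point is not the threading; it is that each framing is cheap, so you can afford a small field of options before committing to any of them.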
This is the loop that emerged for me by the end of 2025.
First, describe the outcome, the feel, and the non-goals. The outcome is what it should do. The feel is how it should behave or read. Non-goals prevent accidental complexity.
Then tell it explicitly to prototype first. Not to optimize. Not to be perfect.
This is where agentic tools shine. They can touch multiple files, wire things together end to end, and get something alive quickly.
The curation step is the most important. Go through the output and keep what resonates: a component pattern that matches how you think, a micro-interaction that adds personality, a folder structure that actually makes sense to you.
Everything else is disposable. There is no guilt in deleting generated code.
Finally, only after the vibe exists do you clean.
This is when you delete dead code, rename everything that feels slightly off, collapse premature abstractions, extract only the utilities that earn their place, add tests that lock in behavior, and tighten boundaries between modules.
This phase looks a lot like traditional engineering. The difference is that the personality of the system is already there.
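The "tests that lock in behavior" part deserves a concrete shape. Characterization tests assert what the code does today, so later cleanup that changes behavior fails loudly. A small sketch, using a made-up `slugify` as the kind of generated utility you might be about to refactor:

```python
import re

def slugify(title: str) -> str:
    # Imagine this came out of the broad-build phase: it works,
    # but you want to rename and restructure it during refinement.
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")

# Characterization tests: pin the current behavior before cleaning,
# so the personality of the system survives the refactor.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Spaces  ") == "spaces"
assert slugify("Already-Fine") == "already-fine"
```

Write these before touching anything, not after; they are the contract the refinement phase must honor.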
One thing that has changed since early 2025 is the level of responsibility required.
Agents can now run commands, follow plans, and act across your environment. Speed is cheap, but mistakes propagate faster.
I still treat every diff seriously. I review changes line by line. I avoid blindly running commands I do not understand. I keep tasks scoped and incremental.
Agents make iteration faster. They do not replace judgment.
Building broad works because it matches how these tools actually behave.
Modern models are increasingly capable, increasingly contextual, and increasingly autonomous. They respond not just to instructions, but to framing. If you constrain them too early, you get timid output. If you let them explore early, you get momentum.
Momentum matters.
Most projects die before they feel alive. Broad building helps you cross that dead zone faster. Refinement then turns that raw energy into something stable and real.
That was true when I first wrote about this idea in May. It is even more true now.
Do not try to get it right. Try to get it interesting. Do not start clean. End clean.