Resonant Coding: or How I Learned to Stop Worrying and Love the Chaos
There’s a phenomenon that occurs when you stare too long at a screen full of code you don’t understand1. The brain starts doing something similar to what it does when you look at clouds: it searches for patterns, faces, familiar shapes. Except instead of seeing a rabbit or your Aunt Martha’s profile, you start seeing intentions. Purpose. A logic that surely must be there, that has to be there, because someone—or something—wrote it with some goal in mind. The problem is when that something is a language model and the goal was simply to generate the most probable sequence of characters given an input, which, if you stop to think about it with the seriousness it deserves, is a rather disturbing way to create the instructions that will move money, control systems, decide things.
This is not paranoid exaggeration2. Or maybe it is. But during the months I spent leading a development team—a new team, built from scratch to support a department that handled numbers with many zeros—that paranoia became something resembling a methodology, or at least something that worked better than having nothing, which was exactly what we had before.
⸻ 1/8 ⸻
The context, because context always matters even when you’d prefer it didn’t: there was pressure from above to use Artificial Intelligence and accelerate everything3. The team had few experienced developers, which meant I had to review almost everything. And when I say “almost everything” I mean that particular situation where every pull request4 that arrived was like opening a Pandora’s box manufactured by a statistical oracle that had read the entire internet but hadn’t necessarily understood any of what it read5.
The first months were a slow-motion disaster. Code that at first glance seemed reasonable but, upon examination, revealed structures that no human programmer would have chosen. Solutions that worked but for the wrong reasons. Patterns that smelled like something, but you didn’t know what until they exploded in production at the worst possible moment, at which point you discovered that the model had interpreted “validate user input” as “do nothing and hope it doesn’t explode.”
There was a moment—and this is important for understanding why I eventually stopped trusting magical solutions—when the quality of the code the team delivered improved dramatically. For approximately forty-eight glorious hours I thought we had achieved something. That the method we’d been refining was working. That we were, finally, good at this.
Then I discovered that Cursor6 had updated.
It wasn’t us. It was the model. And if the model could improve without our intervention, it could also get worse. Or change in ways we didn’t understand. Or be replaced by another model with other idiosyncrasies we’d have to learn from scratch. We were building on quicksand and celebrating every time the sand, for a moment, stopped shifting7.
⸻ 2/8 ⸻
I have to make a detour here because without this detour nothing that follows will make sense, and besides it’s the kind of detour I enjoy because it involves waves, and waves are beautiful in a way that code rarely manages to be8.
A few years ago, one night, in one of those YouTube rabbit holes where you end up without knowing how9, I found a video of a physics demonstration10. Someone grabs a rope—or a string, or something like that—and starts shaking it. At first it’s pure chaos. Formless movement, waves crashing into each other, destructive interference everywhere. But if you find the right frequency, if you shake at exactly the necessary rhythm, something extraordinary happens: chaos organizes itself. Points that don’t move appear—the nodes—and points that oscillate at maximum—the antinodes. Standing waves. Patterns that sustain themselves because the system has entered resonance.
I probably watched that video more times than I’d admit in public. The idea that chaos contains latent structures waiting for someone to find the right frequency to reveal them. That disorder is not the absence of order but potential order seeking to manifest11.
And years later, while looking at another incomprehensible pull request and wondering how the hell we were going to get out of this swamp, I remembered the standing waves. And I thought: maybe the problem isn’t the AI. Maybe the problem is that we’re shaking the rope at any frequency and expecting patterns to appear. To find the right frequency, I first had to understand the rope.
⸻ 3/8 ⸻
Language models, for those who haven’t had the pleasure of interacting with them beyond the occasional chat, are essentially text prediction machines. This sounds simple and in a sense it is: you give them a sequence of words and they return the most probable word that follows. Like your phone’s autocomplete but trained on an amount of data that’s difficult to conceptualize without resorting to astronomical metaphors12.
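If that sounds too abstract, here is a toy version of the loop in Python. It is nothing like what a real model does internally (no neural network, no tokens, just word counts), but the mechanics have the same shape: look at what came before, emit the most probable continuation, repeat.

```python
from collections import defaultdict

# A toy "language model": count which word tends to follow which, then always
# emit the most frequent follower. Real models do this over subword tokens with
# billions of parameters, but the generation loop has the same shape.
corpus = "validate the input then validate the output then log the result".split()

follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    candidates = follows.get(word)
    if not candidates:
        return "<end>"
    # greedy choice: the single most probable continuation
    return max(candidates, key=candidates.get)

text = ["validate"]
for _ in range(6):
    nxt = predict_next(text[-1])
    if nxt == "<end>":
        break
    text.append(nxt)

print(" ".join(text))  # -> "validate the input then validate the input"
```

Notice that it never decides anything; it only continues.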
The problem is that this fundamental simplicity generates emergent behaviors that seem like intelligence, that sometimes function as intelligence, but that aren’t intelligence in any sense that a philosopher of mind would find satisfactory13. They’re simulacra of reasoning built from statistical patterns. And this has very concrete practical consequences:
They have no memory. Every time you talk to them it’s like the first time. The model that helped you solve a bug five minutes ago has no idea you exist, or that there was a bug, or that you solved it together. What some tools call “memory” is actually a trick: they save part of the previous conversation and silently paste it at the beginning of each new message. It’s prosthetic, artificial memory, and it has limits14.
Their attention is limited. They have a context window—the amount of text they can process at once—and the longer the input, the worse they perform. The best way to think about this is to imagine you have a bucket of water and you need to wash dishes15. For one dish, the bucket is more than enough. For ten, the water starts getting cloudy. For a hundred, the water is so dirty that the last dishes come out worse than they went in. And if at any point you throw something greasy into the bucket—irrelevant information, context that doesn’t apply—the water becomes unusable for everything that follows. Moreover, the degradation isn’t uniform: the beginning and end of the context receive more attention than the middle, which explains why sometimes the model “forgets” instructions you gave it three paragraphs earlier16.
They’re probabilistic. For the same input they can give different outputs. This sounds minor but has profound implications: you can’t trust that a result that worked once will work again. Each interaction is a die being rolled, and sometimes you get seven and sometimes you get a snake that eats your prompt17.
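To ground those three limitations, here is a minimal sketch of the kind of wrapper most chat tools put around a model. None of it is any particular vendor’s API: call_model is a stand-in for whatever you actually use, the budget number is invented, and the token estimate is a crude heuristic. The shape is what matters: the “memory” is just a list we keep re-sending, the bucket has a hard size, and with temperature above zero the same input can come back different every time.

```python
from typing import Dict, List

CONTEXT_BUDGET_TOKENS = 8_000   # the size of the bucket; real limits vary by model

def estimate_tokens(text: str) -> int:
    # crude heuristic: roughly four characters per token for English text
    return max(1, len(text) // 4)

def call_model(messages: List[Dict[str, str]], temperature: float = 0.7) -> str:
    # Placeholder, not a real client. With temperature > 0 the same messages
    # can produce a different answer on every call: the die being rolled.
    raise NotImplementedError("plug in your model API here")

class ChatSession:
    """The model remembers nothing between calls; the 'memory' is just this
    list of messages, silently re-sent at the start of every turn."""

    def __init__(self, system_prompt: str) -> None:
        self.messages: List[Dict[str, str]] = [
            {"role": "system", "content": system_prompt}
        ]

    def _trim_to_budget(self) -> None:
        # Dirty water: once the history outgrows the bucket, drop the oldest
        # exchanges (keeping the system prompt) rather than letting everything degrade.
        def total() -> int:
            return sum(estimate_tokens(m["content"]) for m in self.messages)

        while total() > CONTEXT_BUDGET_TOKENS and len(self.messages) > 2:
            del self.messages[1]   # the oldest non-system message goes first

    def ask(self, user_text: str) -> str:
        self.messages.append({"role": "user", "content": user_text})
        self._trim_to_budget()
        answer = call_model(self.messages)
        self.messages.append({"role": "assistant", "content": answer})
        return answer
```

That deletion loop in _trim_to_budget is the dirty-water problem made explicit: something is always lost, and it is rarely what you would have chosen to lose.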
⸻ 4/8 ⸻
At some point that summer, during a vacation I spent reading about AI instead of resting—because apparently I have a dysfunctional relationship with free time—I found two lines of thought that eventually converged into what I now call Resonant Coding. One came from Steve Yegge, a programmer who’s been writing about software for decades with a mix of technical brilliance and opinions that oscillate between visionary and deliberately provocative18. The other came from Dex Horthy and his concept of Context Engineering, which is basically the idea that the context you give a model matters more than anything else19.
From Yegge I took something he calls the Rule of 5, which isn’t so much a rule as an iterative refinement process. The simplified version is: when you generate something, you pass it through five successive filters. First a draft, where what matters is that everything is there, even if it’s messy. Then an accuracy review, where you fix factual errors. Then clarity, where you simplify and eliminate ambiguities. Then edge cases, where you think about everything that could go wrong. And finally excellence, where you polish and optimize20.
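Read as code, my version of those five passes looks something like this. The prompt wording is my own paraphrase, not Yegge’s, and call_model is once again a placeholder for whatever API you actually use.

```python
# The Rule of 5 as an explicit loop. The pass names follow the paragraph above;
# the prompts are illustrative, not canonical.
REVIEW_PASSES = [
    ("draft",      "Produce a first version. Completeness matters more than polish."),
    ("accuracy",   "Check every factual and technical claim; fix whatever is wrong."),
    ("clarity",    "Simplify, remove ambiguity, cut anything that adds nothing."),
    ("edge cases", "List everything that could go wrong or be misread, and address it."),
    ("excellence", "Final polish: tighten, optimize, make it something you would sign."),
]

def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")   # placeholder

def rule_of_five(task: str) -> str:
    _, draft_instruction = REVIEW_PASSES[0]
    artifact = call_model(f"{draft_instruction}\n\nTask:\n{task}")
    for name, instruction in REVIEW_PASSES[1:]:
        artifact = call_model(
            f"Review pass: {name}.\n{instruction}\n\nCurrent version:\n{artifact}"
        )
    return artifact  # and then it still goes to a human reviewer
```

Nothing stops you from running some of the passes yourself instead of delegating them to the model; the point is that each pass has exactly one job.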
From Horthy I took the obsession with context. The formalization of what I had already intuited with the bucket metaphor: that the quality of what you give the model determines the quality of what it gives you back, and that if the problem is too big for a single bucket, you have to divide it into smaller parts and use clean water for each one.
These two sets of ideas, combined with the professional desperation I already mentioned and a moderate amount of caffeine, crystallized into something that seemed to work. We didn’t invent anything new—we simply glued two existing ideas together and gave them a pretentious name—but sometimes innovation is exactly that: seeing that two pieces fit when nobody had put them together before.
⸻ 5/8 ⸻
I’m not going to describe the method as a numbered series of steps because that would betray the spirit of how it actually works, which is more chaotic, more iterative, more like a spiral than a staircase. But there are three general movements that repeat:
First there’s what we might call research—though “reconnaissance” better captures the feeling. Before doing anything, you need to understand the problem. And this is where the model can help: you ask it to investigate the existing code, map dependencies, find relevant documentation, tell you what the hell is going on in this system you inherited from someone who no longer works here21. The model does this work pretty well because it’s essentially reading and synthesis, which is exactly what it was trained for.
But—and this but is crucial—the document the model generates has to be reviewed. Not accepted, reviewed. With the Rule of 5 or something similar, it doesn’t matter, but with the firm conviction that the model may have misunderstood, may have invented things, may have mixed information from different projects because in its training it read similar code and got its wires crossed22. Human review here is not optional; it’s the entire point of the exercise.
Then comes something like planning, which is using the research document (already reviewed) to generate an action plan. The trick here is that each task in the plan has to be small enough to fit in a single bucket. If a task is “implement the authentication system,” it’s too big. If it’s “add JWT token validation to the login endpoint,” we’re better off23. And each of these small tasks goes, again, through the Rule of 5, because a poorly defined plan can generate thousands of lines of incorrect code and by that point it’s too late24.
And finally there’s implementation, which by this point should be almost mechanical. Each task is so well defined that the model has no room to invent. And this is where models truly shine: they can edit twenty files in seconds, create test batteries, refactor entire structures. What would take a human hours. But only because the hard work—the thinking—was already done before25.
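Compressed into one illustrative pipeline, the three movements look roughly like this. Everything in it is an assumption made for the sake of the sketch: call_model and parse_tasks are placeholders, the budget number is a guess, and the JWT task is just the example from the planning paragraph.

```python
from dataclasses import dataclass
from typing import List

TASK_BUDGET_TOKENS = 4_000   # one clean bucket per task; the number is a guess

@dataclass
class Task:
    description: str        # e.g. "add JWT token validation to the login endpoint"
    relevant_context: str   # only the files and docs this task actually needs

def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")   # placeholder

def parse_tasks(raw_plan: str) -> List[Task]:
    raise NotImplementedError("have the model emit structured output and parse it")

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)   # rough chars-per-token heuristic

# 1. Research: reading and synthesis, which is what the model was trained for.
def research(question: str, code_excerpts: str) -> str:
    return call_model(
        "Investigate the following code. Map its dependencies, summarize what it "
        f"does, and flag anything unclear.\n\nQuestion: {question}\n\n{code_excerpts}"
    )   # the resulting document gets a human review (Rule of 5) before anything else

# 2. Planning: turn the *reviewed* research into tasks that each fit one bucket.
def plan(reviewed_research: str) -> List[Task]:
    raw_plan = call_model(
        "From this research, produce a list of small, independent tasks. Each task "
        "must be implementable without reading anything beyond the context attached "
        f"to it.\n\n{reviewed_research}"
    )
    tasks = parse_tasks(raw_plan)
    for task in tasks:
        size = estimate_tokens(task.description + task.relevant_context)
        assert size <= TASK_BUDGET_TOKENS, f"split this further: {task.description}"
    return tasks   # the plan is reviewed by a human as well

# 3. Implementation: by this point there is no room left to invent.
def implement(task: Task) -> str:
    # a fresh context per task: clean water for every dish
    return call_model(
        f"Implement exactly this task, nothing more:\n{task.description}\n\n"
        f"Relevant context:\n{task.relevant_context}"
    )
```

The assert in plan is the whole method in one line: if a task does not fit in a clean bucket, it is not yet a task.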
⸻ 6/8 ⸻
I know what it looks like: yet another method promising efficiency. But the dominant narrative around AI in programming deeply bothers me: the idea that these tools let you “go faster.” It’s not exactly false, but it’s also not true in the way it’s usually presented. It’s like saying a car lets you get to your destination faster: true, but only if you know how to drive, if you know the way, if the car is in good condition, if the roads are clear. If those conditions aren’t met, the car can take you very quickly anywhere, including off a cliff26.
The same happens with AI. Yes, it can generate code faster than any human could write it. But generating code is not the same as solving problems. And if you generate code without understanding the problem, what you get is a quick solution to the wrong question, which is worse than having no solution because now you have to undo what you did before you can move forward.
The method I describe is not a shortcut. It’s a process that takes more time than throwing a prompt at the model and hoping something good comes out. But that time comes back multiplied, because errors are caught early, because work doesn’t have to be redone, because when something is implemented you already know it’s correct27.
⸻ 7/8 ⸻
The method already had a shape. It needed a name.
For a while I had doubts about the name. “Resonant Coding” sounds pretentious, I know. There were alternative versions: “Structured Prompting,” which was too generic; “Context-First Development,” which sounded like a management consultancy; “The Bucket Method,” which was too literal, and besides, nobody was going to take seriously something with that name28.
But I kept coming back to the image of standing waves. To the idea that there’s a right frequency waiting to be found. To the difference between shaking the rope chaotically and shaking it with precision.
The “Vibe Coding” that became popular29 proposes something like going with the flow, trusting the model, iterating until something works. I’m not saying it’s useless—there are situations where that approach is perfectly valid—but for serious work, for systems that have to function, for code that’s going to be maintained by other humans, you need something more rigorous. You need to find the resonance30.
⸻ 8/8 ⸻
I should probably close with some elegant conclusion but the truth is I don’t have one. What I have is a process that works better than having no process, that can be refined, that generates reusable artifacts31, that forces you to think before acting. It’s not perfect. There are days when everything fails anyway. There are models that resist cooperating. There are problems that are genuinely difficult and no method makes them easy.
But when it works—when the context is well constructed, when the plan is well defined, when each task fits in its bucket and the water is clean—there’s a moment where everything clicks. Where the model does exactly what you need it to do. Where the code it generates is the code you would have written yourself, only faster and probably with fewer errors. In those moments you understand, viscerally, what it means to find the right frequency. And then, invariably, the next incomprehensible pull request arrives, and you have to start again.
There’s a question that haunts me lately, one that has no easy answer and that I’d rather leave resonating than close with a false conclusion: what happens to those who come after? I’m not just talking about programmers—though them too—but about an entire generation that will enter the job market when these tools are ubiquitous32. Call centers are already disappearing. The other day my bank called, and the voice on the other end sounded human but was a robot. The conversation was smoother than most I’ve had with humans in that context33. Entry-level jobs, the ones where you learned the trade by making mistakes on things that didn’t matter too much, are evaporating. And meanwhile we keep training people as if tomorrow’s world were the same as yesterday’s. But it’s not just young people who suffer this transition.
There’s nothing closer to my personal image of hell than going to a bank and waiting hours watching retirees ask for help understanding how to access their own money. Someone from the bank—with the face of someone who’s lost all hope—guides them toward a screen that might as well be a sign drawn with crayons for how well it works. Or they tell them to use their phone, knowing their fingers can’t hit the tiny letters or navigate the labyrinthine apps where, to see your balance, you have to be the Indiana Jones of technology34. And here’s something that gives me a sliver of optimism: maybe, just maybe, AI can be used to design interfaces that accommodate everyone. That adapt. That guide. That are truly simple, not simple-for-the-person-who-designed-it.
But all this has a cost, and I’m not just talking about the human cost35. Every prompt, every iteration, every bucket of clean water we use consumes tokens, and tokens cost money36. It’s easy to forget when the tool works well, but there’s a new economy emerging here, one where the scarce resource isn’t machine time but context capacity, and where being strategic about how you use your tokens can be the difference between a viable project and one that eats through the budget before delivering anything.
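To make that concrete with deliberately made-up numbers (placeholders, not any provider’s real prices), the arithmetic looks something like this:

```python
# Back-of-the-envelope token economics. The prices are invented placeholders;
# substitute your provider's real rates before believing any of it.
PRICE_PER_1K_INPUT_TOKENS = 0.003    # hypothetical, in dollars
PRICE_PER_1K_OUTPUT_TOKENS = 0.015   # hypothetical, in dollars

def cost(input_tokens: int, output_tokens: int) -> float:
    return (
        (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS
        + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    )

# A well-scoped task: one clean 4k-token bucket in, 1k tokens of code out,
# repeated across the five passes of the Rule of 5.
per_task = 5 * cost(4_000, 1_000)
print(f"per task: ${per_task:.2f}")                        # about $0.14 at these rates

# The sloppy alternative: dump a 100k-token repository into the context and
# iterate ten times until something sticks.
per_sloppy_attempt = 10 * cost(100_000, 2_000)
print(f"per sloppy attempt: ${per_sloppy_attempt:.2f}")    # about $3.30 at these rates
```

Multiply that difference by a team and a quarter and it starts to matter.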
What happens when the tokens run out? When the model you were using goes up in price or disappears? What skills are we failing to develop because it’s easier to ask the model to simulate them?37 Do we become more efficient or simply more dependent?38
I don’t have answers. I suspect nobody does yet. But it seems to me these are the questions we should be asking ourselves now, while there’s still time to influence how they’re answered.
For a practical guide to the method, see Resonant Coding: Practical Guide.