TLDR: Code is becoming abundant, generative, and disposable rather than scarce and precious. As AI collapses the cost of implementation, readiness becomes the real constraint—whether users, organizations, and ecosystems can absorb change at the speed we can now generate it. The developer’s role transforms from writing code to governing its evolution: defining intent, maintaining coherence, protecting trust.
On the name: Something like “genware” might seem more accurate, but it misses the essence. Fluidware points to the transformation itself—code becoming liquid, disposable, abundant—rather than the generative technology that enabled it. The mechanism matters less than the shift.
I. From Math to Fluidware: A History of Increasing Fluidity
Computing’s history is rigidity progressively giving way to mutability.
Mathematics sits at the origin—pure, abstract, immutable. Formal logic follows, then logic gates made physical, then hardware arranging those gates into systems. Hardware stays fixed once fabricated (for now). Firmware introduced updatable control. Software eliminated the last constraints: behavior could change without touching the substrate.
Fluidware is an attempt at naming what comes next. Software practiced as continuous generation and verification rather than manual implementation. Executable code becomes abundant and transient. The durable assets are intent, interfaces, examples, and tests. The marginal cost of producing code falls toward zero while the cost of correctness and trust rises.
This isn’t a new technology or substrate. It’s software approached differently—governed for change at the speed abundance allows.
II. On Vision Versus Opinion
This isn’t about how I think things should be. It’s about how I see things becoming.
If my time in tech has taught me anything, it’s that the best-engineered solution rarely wins. Elegant architecture regularly loses to messier, faster, more pragmatic alternatives. Market forces prioritize velocity and impact over engineering purity. The system that aligns with demand and human behavior tends to dominate, regardless of theoretical technical merit.
Many of us don’t favor wholesale abandonment of carefully constructed architectures. Some of us will miss the craft of hand-optimization. But fluidware emerges as a natural consequence of existing dynamics, not because anyone decreed it should.
III. When Code Becomes Fluid
The essential shift: code stops being precious.
For decades, we’ve treated code as valuable and scarce. We built entire methodologies around the difficulty of creating and maintaining it. This made sense because code was expensive. Writing, debugging, maintaining it consumed enormous time and attention. The cost of creation was the dominant constraint.
Fluidware emerges when that constraint collapses.
When code becomes cheap to generate and modify—when refactoring across a codebase takes a prompt instead of a sprint—the economics shift fundamentally. Implementation stops being the bottleneck.
The question changes from “Can we build this?” to “Should this exist?”
IV. The Readiness Bottleneck
When generation becomes trivial, a new constraint emerges: readiness.
Not technical readiness. User readiness. Organizational readiness. The capacity to absorb change.
Think about what happens when you can generate a production-ready CRUD application in thirty seconds. The technical barrier is gone. But that’s not the hard part anymore. The hard parts are:
- Do users trust this enough to adopt it?
- Does it fit existing workflows without breaking them?
- Can the organization support it when things go wrong?
- What happens when we iterate again tomorrow?
Speed of change has always created friction. We’ve just never had the ability to change things this fast. When you can regenerate an entire system overnight, the limiting factor becomes how fast humans and organizations can adapt.
This creates a fundamental tension. We can now change everything, constantly. But change destroys trust. Move too fast and users revolt. Systems fragment. Technical debt accumulates not in the code—that regenerates—but in the gap between what exists and what users understand.
The paradox: The easier it becomes to change everything, the more crucial it becomes to change nothing without validation.
This is why testing becomes everything. Why contracts and invariants matter more than ever. Why the durable assets aren’t implementations but the specifications that govern them.
The winners in the fluidware era won’t be those who generate code fastest. They’ll be those who learned to govern change—to protect coherence and maintain trust while everything beneath the surface stays fluid.
V. Implementation as Dialogue
The developer’s role must transform. That has always been true; this era just accelerates it.
Here’s a practical example of a gen-AI workflow that fits the model I’ve been watching take shape: implementations become fluid, and the primary burden shifts to validation. For anyone who’s already built multi-step AI pipelines, or even spent much time thinking about them, what follows will feel a bit kindergarten, but bear with me…
Consider a performance bottleneck within a module of a game engine. Traditionally, solving this meant writing more code—often dropping to a lower-level language, creating more complexity to manage and more abstractions to maintain.
LLMs change the equation:
- They generate valid code 50-200x faster than (the fastest) humans can type
- Modern frontier models have context windows beyond human working memory, large enough to load and analyze (some) codebases in a single pass.
- They translate across languages seamlessly
So when that module underperforms, the solution becomes iterative (a rough sketch in code follows this list):
- Give the model context, tools, and contract
- Generate
- Run tests validating behavior
- Run benchmarks validating performance
- Loop until targets hit
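Here is a minimal sketch of that loop, assuming nothing about any particular model API, test framework, or benchmark harness. The names (`regenerate_module`, `TestReport`, and the `generate`, `run_tests`, `run_benchmarks` callables) are hypothetical stand-ins you would wire up yourself:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TestReport:
    passed: bool
    failures: str = ""

def regenerate_module(
    contract: str,
    context: str,
    generate: Callable[[str, str, str], str],   # hypothetical model call: (contract, context, feedback) -> source code
    run_tests: Callable[[str], TestReport],     # behavioral tests run against the candidate
    run_benchmarks: Callable[[str], float],     # returns measured latency in ms
    perf_target_ms: float,
    max_iterations: int = 10,
) -> Optional[str]:
    """Generate a candidate, validate behavior, validate performance; loop until targets hit."""
    feedback = ""
    for _ in range(max_iterations):
        # 1. Give the model context, tools, and contract (plus feedback from the last round).
        candidate = generate(contract, context, feedback)

        # 2. Run tests validating behavior against the contract, not the internals.
        report = run_tests(candidate)
        if not report.passed:
            feedback = f"Tests failed: {report.failures}"
            continue

        # 3. Run benchmarks validating performance.
        latency_ms = run_benchmarks(candidate)
        if latency_ms <= perf_target_ms:
            return candidate  # both the contract and the performance target are satisfied
        feedback = f"Benchmark missed: {latency_ms:.1f}ms vs target {perf_target_ms}ms"

    return None  # did not converge -- escalate to a human
```

The shape is the point: the durable inputs are the contract, the tests, and the performance target; the candidate implementation is just whatever currently satisfies them.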
This example is constrained—real codebases rarely decouple this cleanly. But the principle holds: you shift from writing implementations to governing their evolution. From precious artifacts to disposable iterations. From static solutions to continuous dialogue between intent and execution.
The bottleneck becomes iteration speed on validation, not implementation speed. How fast can you build the systems that verify the generated code does what you intended? How accurately can you catch when it drifts from spec?
This changes what matters:
- Clear specifications over clever implementations
- Comprehensive testing over code review
- Behavioral contracts over internal structure
- Intent expressed precisely over syntax mastered deeply
You’re no longer manually translating requirements into code. You’re defining the boundaries within which code can evolve, then verifying it stays within them.
In the fluidware era, the challenge moves from creation to curation. Code becomes raw material, while human judgment becomes the scarce resource that determines what shape it should take.
[Graphic: “VALIDATION” set large above a much smaller “IMPLEMENTATION”]
VI. When Abstraction Becomes Overhead
The economics that made avoiding code rational are shifting.
Machine code is the lowest-level software representation available to us for instructing hardware. It’s the most direct line we can get in software from intent to execution. Every intermediate layer—higher-level languages, frameworks, configuration schemas, visual builders, proprietary DSLs—adds indirection between intent and execution. When implementation is expensive and human comprehension is the main bottleneck, that indirection is a good trade: these layers save time and reduce risk. As implementation ceases to be a primary bottleneck, those layers will begin to show up as friction—especially whenever your intent falls outside of their designs.
Elegant abstractions are satisfying, but our society, and more specifically the modern tech industry, isn’t built on valuing the “art” of code. It’s built on using code as leverage to accomplish human intent. Often driven by economics. Sometimes for good. Sometimes for bad. Sometimes just for fun. But almost always at a pace where the “new hotness” becomes vintage in days, sometimes minutes.
My point about indirection is this: Code that regenerates fluidly adapts faster than intent locked in predetermined abstractions that must be learned, that constrain what you can express, and that evolve more slowly than the implementation itself.
This brings me to a principle that has always been true: Defer abstraction until it’s clearly beneficial. Don’t add (excessive) indirection prophylactically. Let abstractions emerge organically when useful rather than imposing them structurally from the start. This becomes even more important as we start bringing in generative models to write our code… You might like that framework. That platform. That “clean” layer… but will it aid or impede the next-generation model in executing your intent?
VII. Performance Through Specificity
Modern software is inefficient, largely because improvements in hardware have given us so much headroom that we haven’t been forced to care about performance. So we overgeneralize relentlessly because it helps us in other ways—building systems that support every possible use case at the cost of bits & cycles. We accept this because writing specific solutions for specific problems (typically the more performant approach) meant maintaining hundreds of implementations. The cost was prohibitive.
When code becomes cheap to produce, specificity becomes power.
This pattern has played out before. ASICs (Application-Specific Integrated Circuits) revolutionized hardware performance by optimizing circuits for single purposes. General-purpose CPUs are marvels of engineering, but they can’t compete with chips designed for one task. Bitcoin mining, graphics rendering, neural network training—specialized hardware dominates.
But ASICs only became viable after manufacturing costs dropped dramatically. Once you could fabricate chips cheaply enough—even dispose of them when requirements changed—it made economic sense to build for exact needs rather than general-purpose flexibility.
We can imagine a software analogue: Application-Specific Functions (ASFs)—narrowly scoped, purpose-built functions instead of heavily generalized components trying to cover every scenario. History rhymes here: much early software was written this way, forced by hardware constraints and language limitations. As those constraints eased, general-purpose abstractions became the best way to boost (human) developer productivity. Now LLMs give us the power of specificity without the human cost, reopening the space for ASF-style code.
Generative models can write single-purpose code optimized for exact parameters and use cases rapidly. When requirements change, we can regenerate. As the cost of implementation collapses, the argument for over-generalizing starts to as well.
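As a toy illustration of the ASF idea (both functions below are hypothetical, not drawn from any real codebase): a generalized aggregation utility built to cover every case, next to a narrow function generated for exactly one question.

```python
from typing import Any, Callable, Iterable

# The generalized component: flexible and reusable, paying for that flexibility
# with indirection on every call (key lookups, predicate dispatch, generic types).
def aggregate(rows: Iterable[dict[str, Any]],
              key: str,
              predicate: Callable[[dict[str, Any]], bool],
              reducer: Callable[[float, float], float],
              initial: float) -> float:
    total = initial
    for row in rows:
        if predicate(row):
            total = reducer(total, float(row[key]))
    return total

# The ASF: generated for exactly one question -- total revenue from completed EU
# orders -- with nothing to configure and nothing to learn. If the question
# changes tomorrow, regenerate it.
def total_completed_eu_revenue(orders: Iterable[dict[str, Any]]) -> float:
    total = 0.0
    for order in orders:
        if order["status"] == "completed" and order["region"] == "EU":
            total += order["amount"]
    return total
```

The generalized version earns its keep when humans have to write and maintain every variant; the specific version wins when regenerating a variant costs a prompt.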
Take a concrete example: the website rendering this essay relies heavily on WebGL shaders for operations that would grind to a halt in CSS and JavaScript. Those shaders would be much harder for me, as a human, to write than some alternative in pure CSS & JS. But I didn’t write them. I described what I wanted, and Sonnet 4.5 generated them. If I want to change them tomorrow, I’ll regenerate them.
We’re not fully ready for the extreme end of this spectrum. Code-level abstractions remain useful—for maintaining cognitive understanding, for refactoring across similar patterns, for communicating intent to humans reading the code.
But I think the dial will slowly (maybe quickly) turn. Imagine: human language intent making it all the way to perfect machine code, tuned to the specific use case. Efficiency previously reserved for systems where hand-optimization was economically justified, now available for any system worth describing that clearly.
Architecture provides fixed laws—invariants, protocols, contracts. Implementations become increasingly ephemeral and purpose-built. The internals of a given system can head towards specialization without tremendous human labor involved. We get performance for cheap—though if history tells me anything, the majority of software produced will not use the new leverage for this purpose. But it’s fun to think about.
“DRY” (Don’t Repeat Yourself) gives way to “WET” (Write Everything Twice)—the developer joke about poor practice, now unironically sound.
VIII. Testing as the Durable Asset
In the fluidware era, tests aren’t just validation—they are the codebase.
Think about what persists when implementations become disposable:
- Tests define correct behavior
- Specifications describe intent
- Contracts establish boundaries
- Examples demonstrate expected usage
The implementation code can slowly move towards ephemerality. But the tests capturing what “correct” means—those must be stable & tightly governed.
End-to-end, behavior-oriented, black-box testing becomes essential. Not unit tests validating internal structure—as this paradigm takes hold, that structure becomes fluid. Tests validating the experience and impact directly. The contract is with the intent, not the implementation.
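A sketch of what that looks like in practice: black-box tests written against a public contract, with a throwaway stand-in implementation behind them. Everything here is hypothetical; `SimpleCart` and `make_cart` are illustrative names, not from any real system.

```python
import pytest

class SimpleCart:
    """Stand-in implementation. In practice this is whatever was last generated."""
    def __init__(self) -> None:
        self._items: dict[str, int] = {}

    def add(self, sku: str, qty: int) -> None:
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self) -> int:
        return sum(self._items.values())

def make_cart() -> SimpleCart:
    # The only coupling point: swap in any regenerated implementation that honors the contract.
    return SimpleCart()

def test_adding_items_increases_total():
    cart = make_cart()
    cart.add("sku-123", 2)
    cart.add("sku-456", 1)
    assert cart.total_items() == 3  # observable behavior, not internal structure

def test_rejects_non_positive_quantities():
    cart = make_cart()
    with pytest.raises(ValueError):
        cart.add("sku-123", 0)
```

Note that the tests never reach into `_items` or assert anything about how the cart is structured; any regenerated implementation that passes them is, by definition, acceptable.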
This is why the readiness bottleneck matters so much. Fast iteration requires comprehensive testing. Otherwise you’re flying blind & you will inevitably erode trust. Change something, regenerate, deploy—without testing, you have no idea what broke. And manual human validation has no chance of keeping up with the velocity being unlocked on the implementation front.
The organizations winning in this era are those investing heavily in test infrastructure, clear specification writing, and strong validation. Because fast, safe iteration requires it. Testing isn’t overhead slowing you down. It’s the foundation letting you move fast without breaking things.
As generating code becomes trivial, the hard part shifts from writing it to knowing whether it’s right.
IX. Grounding in Practice: Internal Tools First
User-facing software isn’t ready for full fluidity yet. The stakes are too high. The readiness bottleneck is too real.
But internal tooling may well be. Generally, the economics line up. The models are capable enough. The risk is contained. The iteration cycles are already fast. Users are technical and tolerant of change.
Consider release tooling. Teams often build generalized abstractions (supporting many apps and use cases) or adopt third-party platforms that, over time, add more friction than they remove. But with current generation capabilities, you can build custom, tightly coupled release tooling directly in a repository. Tools that align perfectly with your actual processes and code, evolving as needs change.
On the UI front, interesting concepts like just-in-time composition become practical for internal dashboards, debug menus, and admin interfaces. Rather than maintaining static UIs that require a deployment for every change, assemble interfaces in real time based on your needs, current data structures, and system boundaries. The system composes itself on demand, then dissolves back into components—maybe with the option to save snapshots when a particular UI setup proves useful longer term. “Apps” in ChatGPT demonstrate this pattern—interfaces generated from natural-language specification at runtime rather than maintained as static artifacts.
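A minimal sketch of that composition loop, assuming nothing about any particular model or UI framework. The `complete` callable is a hypothetical model call, and `compose_dashboard`, `ALLOWED_COMPONENTS`, and the component schema are invented for illustration:

```python
import json
from typing import Any, Callable

ALLOWED_COMPONENTS = {"table", "line_chart", "metric", "log_viewer"}

def compose_dashboard(request: str,
                      data_schema: dict[str, str],
                      complete: Callable[[str], str]) -> list[dict[str, Any]]:
    """Ask a model for a throwaway layout, then validate it against system boundaries."""
    prompt = (
        "Return a JSON list of dashboard components for this request.\n"
        f"Request: {request}\n"
        f"Available fields: {json.dumps(data_schema)}\n"
        f"Allowed component types: {sorted(ALLOWED_COMPONENTS)}"
    )
    layout = json.loads(complete(prompt))

    # Governance lives here, not in the generated artifact: reject anything outside
    # the allowed component types or the known data schema before rendering it.
    for component in layout:
        if component.get("type") not in ALLOWED_COMPONENTS:
            raise ValueError(f"unsupported component: {component.get('type')}")
        for field in component.get("fields", []):
            if field not in data_schema:
                raise ValueError(f"unknown field: {field}")

    return layout  # render it, use it, let it dissolve (or snapshot it if it earns keeping)
```

The generated layout is disposable; the allow-list and schema checks are the durable part.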
For internal tools whose users are technical, we can experiment with more fluidity in real time. In my mind, this is where fluidware thinking can start to prove itself: in the unglamorous infrastructure that holds organizations together, where velocity matters more than polish and iteration cycles are measured in hours rather than quarters.
X. On “Vibe Coding”
“Vibe coding” emerged to describe non-technical people (or lazy engineers) opening IDEs and producing working code through conversation with AI. The accessibility shift was real enough to deserve naming, but in my mind, the phrase itself is transient—a reaction to threshold-crossing rather than a fundamental category.
Software’s march toward accessibility has been consistent: proprietary military and academic spaces, then classrooms and textbooks, then public forums, then Stack Overflow, then Google making all of it searchable. Now LLMs have digested decades of programming knowledge into systems that convert intent directly into implementation.
I think the phrase will fade. Just as we don’t say “Google coding” or “Stack Overflow coding,” we’ll drop “vibe coding” once AI-accelerated development becomes the baseline.
“Vibe coding” names the moment of transition. What comes after is simply coding—though the nature of that work will be unrecognizable to those who learned in earlier eras.
XI. Governing Change in an Age of Fluid Code
If you’re a developer who defines yourself by syntax mastery, clever algorithms, or hand-crafted optimizations, this is uncomfortable. Those skills don’t disappear—they become specialist skills. The center of gravity shifts toward specification, architecture, testing, and governance, because that’s what will be needed at scale.
If you’re an organization with years of carefully constructed technical debt and delicate abstraction layers—this era is likely to be disruptive. The systems that made sense when code was expensive increasingly look like overhead when code is cheap.
If you’re a user of software that can now change radically overnight—this is destabilizing. Trust becomes harder to establish and easier to destroy.
To me, the momentum feels clear. The cost of generating code keeps falling while the results keep getting better. Context windows continue to expand. Models are getting better at reasoning about code, at understanding intent, at maintaining consistency across changes.
The organizations that thrive in the next decade will be those that learned to govern change faster than their competitors learned to generate code.
This means:
- Investing in testing infrastructure now, not later
- Defining clear contracts and invariants everywhere
- Training teams to think in specifications, not implementations
- Building organizational capacity to absorb change
- Protecting user trust while everything beneath stays fluid
The old disciplines transform:
- We still need rigor—in defining intent rather than writing syntax
- We still need architecture—as protocol and invariant rather than module organization
- We still need craft—in governance and curation rather than implementation
The code becomes fluid. The principles must not.
In this age of abundant code, scarcity moves to judgment. The engineering challenge becomes knowing what to build, what to protect, and what to let dissolve back into the generative substrate from which it came.