I followed the Hacker News thread on Comprehension Debt: The Ticking Time Bomb of LLM-Generated Code. The article resonated with many programmers, validating my theory of AI fatigue with a simpler framing: when the speed of code generation exceeds our speed of comprehension, we accrue a debt that eventually leads to cognitive bankruptcy.
I think the problem runs deeper than technical and cognitive debt.
The Reductionist Trap
The world favors a reductionist view of programming as a means of production. Faster is better. Easier is better. This isn’t wrong, exactly, but it’s incomplete in a way that matters. Most programs lack constructs to represent the plurality of thought. We compute and reduce to a single correct answer. Even in quantum computing where multiple states coexist, we must collapse to a single reality for something useful to emerge. This leaves out a large portion of human thinking that is non-linear, parallel, divergent, holistic, unstable, and internally inconsistent.
Vibe coding rides this wave of reduction to its logical extreme. It gives people a shortcut to programs without programming. For developers who can dive into generated code and reshape it, this might be useful. I experienced this myself while Vibe Coding the MIT Course Catalog, an iterative process where I could inspect and adjust. But for non-coding people, it’s shooting in the dark. They get output without insight, results without comprehension.
The optimists will say accuracy will improve as AI models advance. I believe them. But I also believe something else will decline: the will to patiently approach a problem from the ground up. Thinking will become lazy, muddy, fragmented. Weakty observed in Our Efforts, In Part, Define Us that a lack of effort leads to a lack of satisfaction. When a program becomes an artifact that anyone can generate with the same prompt, the activity will feel like working on an assembly line.
Alignment, with Yourself
Carl Jung theorized individuation, the process of becoming who you actually are rather than who circumstances have shaped you to be. When people are misaligned with their true selves, they experience incompleteness and dissatisfaction. I think about this often in relation to programming. Not because I want to make grand claims about self-actualization through code, but because the feeling of misalignment is so palpable in how we talk about AI-assisted development.
Programming languages have always evolved by ascending the abstraction ladder: machine code, assembly, system programs, object-oriented paradigms, scripting languages, domain-specific languages. Each new abstraction didn’t necessarily make programming easier. Instead, they redistributed effort into different cognitive spaces, moving concerns from the machine to symbols to objects to tasks.
A programmer must learn to think in each new abstraction to compose good programs. This learning process is effortful, and that effort gives us confidence that we are one with the abstraction. We can remold our software to fit our needs. We can share knowledge with other programmers who inhabit the same conceptual space. We are equipped to ascend to the next level.
It’s unclear what the effort of mastering vibe coding prepares us for. I don’t feel confident about modifying my generated code, no matter how much effort I put into prompt engineering. I don’t feel I’m learning a new abstraction. It feels more like gambling in a casino, pulling the lever until I hit the jackpot.
Fundamentally, I’m losing touch with my programmer identity. Am I still myself when I vibe code?
Generation: Reproduction or Creation?
What I cannot create, I do not understand. – Richard Feynman
This statement has haunted me since I started thinking seriously about AI-generated code. I’m increasingly convinced that AI programming is more like searching through other people’s code than creating something new. The creation is an illusion, a mechanistic reproduction of combinatorial possibilities.
Programming should be a creative act. The programmer assembles code through trial and error, gaining understanding through reflection. The learning is in the loop. AI short-circuits this loop by feeding knowledge directly to the user. This is the core of my hypothesis on AI fatigue, where validating a solution without deriving it by hand causes extreme cognitive dissonance. The damage of this lack of understanding hasn’t fully manifested yet, but I suspect we’re producing a generation of people who can’t think independently, a kind of thought slavery to AI.
Consider ChatGPT’s study mode, which educators and students have already embraced. Students still passively wait for AI to assess their knowledge, challenge them with questions, nudge them in the right direction. They lose the ability to self-assess, self-challenge, self-correct. In 10 years, will this generation of students advance the frontier of human knowledge with insightful research questions? Or will they recycle machine-produced thoughts in an echo chamber?
Our institutions are broken. Children (and adults) need to learn, but our system is optimized to educate. People cheat with AI because there are better things to do than memorizing facts for exams. This is not the first time big tech companies have pitched a technology to solve problems they themselves created. Instead of pouring more GPU hours and CO2 emissions into detecting AI and preventing cheating, we need to think about motivating learning, with or without AI.
People don’t get ideas; they make them. – Mitchel Resnick
The role of the teacher is to create the conditions for invention rather than provide ready-made knowledge. – Seymour Papert
I cannot think without writing. – Jean Piaget
Words from the visionaries have echoed for decades. Is AI bringing us closer to or further from their ideals?
Augmentation and Symbiosis
J.C.R. Licklider’s Man-Computer Symbiosis and Douglas Engelbart’s Augmenting Human Intellect are two of the most cited, and yet most misunderstood, works in today’s technology discourse. Both envisioned how computing could transform human lives for the better, yet their ideas have been appropriated by Silicon Valley founders as philosophical grounding for Effective Accelerationism in pitches to thoughtless VCs. Ironically, Licklider called out this misinterpretation of his vision over half a century ago:
The human operators are responsible mainly for functions that it proved infeasible to automate. Such systems [...] are not symbiotic systems. They are “semi-automatic” systems, systems that started out to be fully automatic but fell short of the goal. – J.C.R. Licklider
Christina Engelbart, reflecting on her father’s work, further noted what true symbiosis means:
[...] with the explosive emergence of digital technology, the technical elements would shoot way ahead of the non-technical and cause a trend toward automating rather than augmenting peoples’ activities. It would be necessary, therefore, to accelerate the co-evolutionary process, which means purposefully focusing in on the potential of human processes in concert with technological possibilities, with a special focus on those that serve to improve our collective capabilities.
In current AI discourse, I’m not hearing “co-evolution,” “human processes,” or “collective capabilities.” Technologists of our time, driven by greed and ego, have distorted these ideas into the reckless enhancement of human mind and body. We’re sold the idea that full automation may lead to universal basic income. But in a world void of human interests, augmentation is amputation, and income is enslavement.
Programming as conquering the world is a terrible metaphor, yet it’s preached everywhere as the only narrative. We started with AI defeating humans in chess and Go. AI labs release models that compete for medals in mathematics. We call a benchmark Humanity’s Last Exam, framing AI against humanity and intelligence as triumph. Our fantasies about AI are plagued with dystopian visions of takeover, warfare, and automation, epitomized by the “high tech, low life” world of cyberpunk.
A recent conversation with Pat, a visionary technologist at MIT, and Phoom, a brilliant hacker-philosopher-entrepreneur, inspired me to write this article. We spoke about the future of programming and agreed on the need for a different vision. I’m curious, what if we can choose a different metaphor for programming?
What if programming were gardening: planting seeds of small ideas, caring for them by making connections and developing constraints, cross-breeding concepts, harvesting more ideas that lead to action plans?
What if programming were cartography: visualizing known territories throughout history and lineage, highlighting frontiers according to the zone of proximal development, expanding territory by documenting new learning?
What if programming were sculpting: starting with a lump of raw data, using hands-on techniques like shaping, molding, throwing, turning to give it form, applying digital techniques like data cleaning, filtering, grouping, extracting, then baking to yield the final form?
Each metaphor unlocks different ways of thinking. Each suggests different tools, different communities, different values. Ursula K. Le Guin invited us to see technology development as a carrier bag rather than a weapon. I want to extend that invitation to all the programmer optimists.
A Different Vision
I’ve started research at the Media Lab with a vision to create a new programming paradigm, one that reflects my beliefs about programming as art and craft, and that serves as a tool for personal growth and community building. I started with an inkling in The Last of Programmers, the First of Artists. I’m expanding that into a rounded vision:
From learning: revisiting constructivist learning, asking how we use language as a tool to encourage learning, and specifically why there’s so little interest in adult learning.
From linguistics: examining the purpose of programming languages and natural languages, exploring their synergy.
From technical perspectives: investigating the properties of LLMs (fuzzy, probabilistic, associative through embeddings, contextual) and the elements of programming language design (syntax, semantics).
From philosophy: questioning the purpose of programming (simulation, expression, existence), considering individuation, examining human conditions and whether our happiness is rooted in our ability to create.
From art: challenging the received notion of programming, shaping critical narratives in art and technology with programs that interrogate the human condition.
The Clay, The Pot, The Porcelain to Come
Alan Kay wrote in his 1984 article on computer software:
It is software that gives form and purpose to a programmable machine, much as a sculptor shapes clay.
He also emphasized that learning programming is not merely mastering the medium, but embodying hope:
Hence the task for someone who wants to understand software is not simply to see the pot instead of the clay. It is to see in pots thrown by beginners [...] the possibility of the Chinese porcelain and Limoges to come.
I urge you to recognize that programming itself, more than the AI technology that springs from it, is the generative medium that contains the multitudes of every dream we dare to dream. The act of programming, rather than the code artifacts it produces, is the clay for an entire civilization to shape its collective futures. For those of you who learned the craft through countless sleepless nights of manual coding, debugging, testing, refactoring, you who inherited the hopes of generations who designed, developed, engineered, taught, shared, contributed, and sacrificed, this is your Oppenheimer moment of consequence. The world awaits its Chinese porcelain and Limoges. But only your hands can shape them.
Yuan Dynasty porcelain vase, c. 1300 (source)