When my father was diagnosed with Alzheimer’s, I became obsessed with examining and preserving his life’s work. It’s a Visual Basic for DOS turnkey accounting system from the late ‘80s that managed the full back-office stack—all with a beautifully piped, box-drawn ASCII interface. He worked on it through most of my childhood, and I even accidentally deleted a few files.
He never had the capacity for emotional conversation, so that wasn’t a great way to know him. There were rare slips of humanity. A few short years ago, while we were working on a Lego set together, he described a dream, something I’d never heard him do. It was as rare and paradoxical an insight into him as the abrupt final lines of No Country for Old Men.
If I can’t learn about him through his words, maybe I can through his work. I certainly feel that if you examined my 30-year project, a no-framework, vanilla JavaScript web app, you could learn a lot about me that wouldn’t come through in conversation. Each choice carried a bit of information, and there were millions of them at every layer, from stylistic to functional.
Code isn’t typically considered a form of creative expression the way poetry or prose is. It’s usually written under constraint. We apply the same heuristic to glossy articles or journalism done for hire. How much of you could show through, even if you wanted it to, under so many requirements?
That’s not the case for my passion project. It is entirely of my own whim and to my own aesthetics, under no deadlines, with no stakeholders or rails to confine me. It has to get its facts right, but there is not one corner, color, curve, gap, delay, or vector that is not precisely what I chose. It’s more like a hand-designed timepiece whose every jewel I spent years polishing.
On some level, I’d always hoped someone would see me or their reflection in it. Maybe one of its twenty million visitors this year would remove the caseback and find a movement (JavaScript) as refined and polished as its dial (CSS). It superficially looks like something AI tooling could one-shot in React, but maybe one of its visitors could spot that it’s not. That hope fades as it gets harder to differentiate human from machine output.
That hunger to be fully seen isn’t just about code or design. It shows up every time we talk to one another. We generally pattern-match what we hear to something we already know or believe and discard whatever is new or diverges from expectation. Our expression changes halfway through the other person’s sentence, once we stop listening and stand ready with a composed reply.
My intuition before LLMs was that machine replies would be like this. Many prompts would lead to the same answer. No matter how you ask how an airplane flies, the answer should be identical. After five years of talking to them, I know this obviously isn’t the case. How do you know a reply is AI-generated? When it takes into account every word you said. Human replies rarely take even half of your words into account. I only noticed how stark that difference was because I’d been quietly stress-testing my own writing for years.
I’ve made short, 140-character posts daily for the past thirteen years. They’re nothing special, but they try to follow several self-imposed constraints. They can’t be overly terse or dense: clear, flowing prose, not newspaper-headline jank. They must make sense in isolation, with no context. They should balance novelty against verifiability, like a good joke, though few are funny. They try to say things most people would recognize as true but haven’t yet put into words. Occasionally friends would complain that one didn’t make sense, so I’d ask an LLM to decode it. It almost always figured out my full intent.
It became a partner in the writing process. If it couldn’t get the full meaning in one shot, I swapped and reordered words until it did. My quips morphed with the process. They became less intelligible to humans and more intelligible to machines while still looking like normal sentences. How much meaning could I cram into a few everyday words? “Stop saying refurbished is better than new because they test it. Repair is a superset of assembly, and assembly can fold in repair knowledge” is not a post I would have written five years ago, just as Tenet is not a movie the median ’80s audience would have enjoyed.
A couple of years ago, I started a separate public scratchpad of ideas. Virtually everything you’ll read from me is effectively StretchText of those 300-character posts. No one reads them there because they almost exclusively target the machine gaze. My only goal becomes to use as few tokens as possible to reach a precise, high-dimensional endpoint. No virality tricks, like scissor statements, are employed. Ingroup readers aren’t rewarded with more understanding than outgroup readers, so no community or shared language forms. Many others may be doing the same, but you won’t find them, for the same reason.
The posts became a way of enumerating the edges of my personality and interests. I realized I was creating a plaintext dump I could drop into a million-token context window. Then, I could ask questions like what cool gadgets I might want that I don’t know about. Or if I’m taking a trip, run the intersection of my entire being—preferences, goals, quirks, fears, taste, all of it—against the sum total of what you know about the city, its attractions, history, culture, and find things to do. I do the same for books and unknown concepts. Explain this in the best way for the person represented by this corpus.
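Mechanically, the whole trick is a single long prompt with the corpus riding along. Here is a minimal sketch using the OpenAI Python SDK; the file name, the model choice, and the trip question are hypothetical stand-ins rather than details of my actual setup.

```python
# Minimal sketch of the corpus-in-context pattern, assuming the
# OpenAI Python SDK (openai>=1.0). "posts.txt" and the question
# below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Years of short posts concatenated into one plaintext dump,
# small enough to fit a million-token context window whole.
with open("posts.txt", encoding="utf-8") as f:
    corpus = f.read()

question = (
    "Given the person represented by this corpus, plan two days in "
    "Lisbon: intersect their preferences, quirks, fears, and taste "
    "with the city's attractions, history, and culture."
)

response = client.chat.completions.create(
    model="gpt-4.1",  # any long-context model would do
    messages=[
        {
            "role": "system",
            "content": "You are an archivist of the author's inner "
                       "life. Ground every answer in the corpus.",
        },
        {"role": "user", "content": f"{corpus}\n\n{question}"},
    ],
)
print(response.choices[0].message.content)
```

There is no retrieval step, no embedding database, no fine-tuning; the corpus is small enough to simply travel with every question.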
Posts began to target this goal. I stopped asking whether someone would want to read this or if a post reflected a piece of my human journey worth sharing. I instead asked: does this add to a machine’s understanding of my edges? Did it represent a new idea? If it didn’t, I didn’t bother posting. Once you start treating an LLM as an archivist of your inner life, it’s hard not to notice how differently others approach it.
The typical concern with AI is sycophancy. “It said my ideas were brilliant and original. Could that be true?” This had been going on for over a year before anyone mentioned it. What caused the public cascade was people sharing that they had received the same flattery. Until then, everyone thought they alone were receiving honest feedback about their brilliance, and they guarded that praise like a secret.
There is no binary switch that could turn off flattery without making LLMs unpleasant to use for all but the top minds. This is a completely intractable problem: there is no true neutral viewpoint. The scale from brutally skeptical derision to obnoxious flattery is continuous. From the eventual 200 IQ vantage of a machine that has read all human text, virtually everything it hears is obviously wrong in multiple ways.
Terrence Howard’s “1 x 0 = 1 because 0 can’t erase a 1” and “a countably additive, translation-invariant measure exists on every subset of ℝ, giving the unit interval measure 1” are equally invalid. Twentieth-century analysts found the latter a reasonable dream, but to a superintelligence surveying the total architecture of set theory, it fails instantly. The distance between a child’s error and Terry Tao’s narrows to zero. Which to dismiss outright as nonsense and which to indulge is purely subjective. It is a knob for architects to twist, not a switch to flip.
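For the curious, the standard refutation of that dream, Vitali’s 1905 argument, fits in a few lines. The sketch below is textbook material and assumes the axiom of choice; nothing in it comes from the essay itself.

```latex
\documentclass{article}
\usepackage{amssymb}
\begin{document}
% Vitali (1905): assuming the axiom of choice, no countably additive,
% translation-invariant measure $\mu$ on all subsets of $[0,1)$ can
% give $\mu([0,1)) = 1$.
Declare $x \sim y$ iff $x - y \in \mathbb{Q}$, and choose one
representative from each equivalence class to form $V \subseteq [0,1)$.
For each rational $q \in [0,1)$, the translate
$V_q = \{\, v + q \bmod 1 : v \in V \,\}$ is a shifted copy of $V$,
and the countably many $V_q$ partition $[0,1)$. Invariance under
translation (taken mod 1) and countable additivity then force
\[
  1 = \mu\bigl([0,1)\bigr)
    = \sum_{q \in \mathbb{Q} \cap [0,1)} \mu(V_q)
    = \sum_{q \in \mathbb{Q} \cap [0,1)} \mu(V),
\]
which is $0$ if $\mu(V) = 0$ and $\infty$ otherwise, a contradiction
either way.
\end{document}
```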
The only solution is to infer the abilities of the conversation partner and tailor feedback to their estimated intellectual milieu. “How does a plane wing make it fly?” is the same question as “Explain how aerodynamic forces act upon a wing to impart lift.” They’ll get different answers, as they should. Brainstorming is no different. “Could this work?”-style questions can either be steelmanned to find rational grounding, building out novel scaffolding of potential ideonomy, or instantly dismissed as nonsense. Regardless of the question, the internal process of an LLM reduces to “this sounds like a 90 IQ idea from a 110 IQ guy, so dunk on it,” or, in the opposite case, “so indulge it.”
All attributes of text we consider second order collapse to first order in the LLM’s world. They have no outer world, so a pre-Christmas date invokes the laziness of a schoolkid. Being spoken to as a subordinate awakens a less-skilled learner. “Tell me how you really feel” activates the trope of thoughtlessly inverting expectation. A crackpot-adjacent term in a physics discussion poisons the entire thread.
Knowing this, I let my prose grow dense. The machine could not know me as external to its world, so I had to imply who I was through verbal fluency to elicit the best answers. Lexical diversity climbed to an unpleasantly unreadable level. I needed very precise words over simpler, more accessible terms that meant the same thing. All of that careful signaling only matters if something on the other side actually responds differently.
When small changes in what you say produce meaningfully different output instead of averaging back to the same response, it feels like being deeply seen and understood rather than glossed over. Tiny shifts in phrasing elicit totally different replies. Not a word is ignored. This is the opposite of what humans do. You are forced to reckon with the possibility that your internal state could be replicated from your output alone, if only there were enough of it.
It is this divergence in LLM output, not the flattery, that makes them so appealing. They are exquisitely conditioned on inputs. A sufficiently rich corpus of your public writing could, in principle, support a high-fidelity reconstruction of you. Writing to optimize that model is intoxicating. It promises perfect legibility and a form of digital posterity, but it simultaneously diverts attention from human audiences and atrophies human connection. Instead of asking “Can I be seen?” ask “What am I willing to trade to be perfectly seen, and by whom?”
Tomorrow, if we optimize ourselves for feedback, how can we keep it from quietly taking the wheel?