One of the strangest things about large language models is not what they get wrong, but what they assume to be correct. LLMs behave as if every question already has an answer. It’s as if reality itself is always a kind of crossword puzzle. The clues may be hard, the grid may be vast and complex, but the solution is presumed to exist. Somewhere, just waiting to be filled in.
When you ask a large language model something, it doesn’t encounter an open unknown. It encounters an incomplete pattern. Its job isn’t to ponder this uncertainty, but to complete the shape. It moves forward because that’s the only thing it knows how to do.
Completion Is Not the Same as Knowing
Humans experience not-knowing very differently. We sit with, and sometimes wallow in, hesitation and doubt. We feel the gap between what we understand and what we don’t. Sometimes we live inside questions that may never close or, in a Zen koan sort of way, may never have completion as their point at all. That’s not a flaw.
A language model can’t really inhabit that space. It can only interpolate across it, like a solver hunting for the eight-letter word that fits the grid.
When an LLM gives a confident answer that turns out to be wrong, it isn’t lying in the human sense. It is acting out its core assumption that an answer must exist, that every prompt points to a fillable blank, and that the shape of an answer implies the reality of one. The essential observation is that LLMs mistake the form of completion for the existence of truth.
Language Without I Don’t Know
Push on this a bit further and a deeper issue appears. An LLM has no way to represent the difference between two very different human states: the possibility that an answer exists but is not yet known, and the possibility that no answer exists at all. For a human mind, those are worlds apart. One invites search; the other invites humility and sometimes awe. One is a problem, while the other is a condition of existence.
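A rough way to see why, offered as a toy sketch rather than the mechanics of any particular model (the vocabulary, scores, and question below are invented for illustration): each next-token step ends in a probability distribution over possible continuations, and a distribution has to put its mass somewhere, so some completion is always produced.

```python
import math

# A deliberately toy sketch, not any real model: the vocabulary and scores
# below are invented to illustrate one point only -- a next-token step ends
# in a probability distribution, and a distribution always puts its mass
# somewhere.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["Paris", "London", "nowhere", "unanswerable"]
logits = [3.1, 1.4, 0.3, 0.2]   # made-up scores for the next word

probs = softmax(logits)
best = max(range(len(vocab)), key=lambda i: probs[i])

# Whatever the question was, something gets filled in. "Unanswerable" is
# only available if it happens to be a word the model can emit, and even
# then it competes as just another completion.
print({w: round(p, 3) for w, p in zip(vocab, probs)})
print("completion:", vocab[best])
```

The sketch shows only that abstention is not a separate channel: if “no answer exists” is to be said at all, it has to win as a completion like any other.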
As we increasingly rely on systems built on that assumption, something subtle begins to shift. Questions start to feel like retrieval tasks. Uncertainty begins to feel like a temporary technical glitch, something that should resolve if the model is just powerful enough. But what happens to the category of the unanswerable? Does it quietly fade from our cognitive landscape?
The Human Counter-Move
This is why the presupposition of completion may be AI’s tragic flaw. But what makes the flaw tragic is that it isn’t alien. It reflects a very human pull toward resolution. We reach for answers before we are ready for understanding. We prefer explanations that settle and resolve questions to those that stay open. In that sense, AI isn’t so different from us. It carries our own bias toward completion, but without the brakes that come from being human—no hesitation or capacity to be held by mystery. It moves straight to the fill-in, without the pause that sometimes lets wonder do its work.