A recent paper from the University of Wisconsin and the "Paradigms of Intelligence" Team at Google makes a fascinating claim. Large language models can extract and reconstruct meaning even when much of the semantic content of a sentence has been replaced with nonsense. Strip away the words, keep the structure, and the system still often knows what is being said. Freaky, right?
The authors call this the “unreasonable effectiveness of pattern matching,” echoing Eugene Wigner’s essay on mathematics. To me, it points toward something even more interesting: that much of what we experience as understanding may be recoverable from form alone, without the words we traditionally link to meaning.
On its surface, this looks like another triumph for AI. Underneath, it may be evidence for something more philosophically disruptive—the emergence of what I have called anti-intelligence.
Meaning Without Understanding
In the experiments, content words are replaced by invented tokens while grammatical structure is preserved. A human reader sees gibberish. The model, however, often reconstructs the original meaning with surprising accuracy. Structural cues such as syntax and position, combined with statistical expectations, are enough.
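To make the manipulation concrete, here is a minimal sketch of the general idea in Python. It is not the authors' actual procedure: it treats a small hand-rolled stopword list as the grammatical skeleton and swaps every other word for an invented pseudoword. The word list, function names, and example sentence are my own illustration.

```python
import random

# A minimal, hypothetical sketch of the kind of perturbation described above
# (not the paper's actual pipeline): swap content words for invented tokens
# while leaving function words, word order, and punctuation untouched.
FUNCTION_WORDS = {
    "the", "a", "an", "of", "to", "in", "on", "and", "but", "that",
    "is", "was", "were", "it", "he", "she", "they", "with", "for",
    "had", "across",
}

def nonsense_token(rng: random.Random) -> str:
    """Build a pronounceable pseudoword such as 'blarvet'."""
    onsets = ["bl", "gr", "sn", "tr", "fl", "pr"]
    vowels = ["a", "e", "i", "o", "u"]
    codas = ["rv", "nd", "lk", "mp", "st"]
    return rng.choice(onsets) + rng.choice(vowels) + rng.choice(codas) + rng.choice(vowels) + "t"

def strip_semantics(sentence: str, seed: int = 0) -> str:
    """Replace every non-function word with a fresh nonsense token,
    keeping word order so only the grammatical scaffolding survives."""
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        core = word.rstrip(".,!?").lower()
        trail = word[len(word.rstrip(".,!?")):]  # keep trailing punctuation
        if core in FUNCTION_WORDS:
            out.append(word)
        else:
            out.append(nonsense_token(rng) + trail)
    return " ".join(out)

if __name__ == "__main__":
    # Prints something like: "The snarvet blimpot the grulkat across the flendit prastot."
    print(strip_semantics("The dog chased the ball across the muddy garden."))
```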
What’s absent here is just as important as what remains. There is no world model or reference to lived experience that anchors this output. AI doesn’t “know” what a dog, a promise, a danger, or a death actually is. Yet the output carries the same authority as genuine comprehension, at least on the surface. The key point here is that the system isn’t reasoning toward meaning. It is navigating a hyperdimensional space of linguistic patterns and landing on the most probable completion. Amazing, but rather cognitively boring. The authors put it this way:
Compared with the elegance of Boolean logic, the jaggedness of LLM performance–their sensitivity to how a question is posed, their tendency to generalize in uneven and hard-to-predict ways–makes it tempting to conclude that whatever LLMs are does not qualify as reasoning.
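To make the earlier point about "landing on the most probable completion" concrete, here is a toy illustration, nothing like the scale or architecture of a real language model: a bigram counter that picks the statistically most frequent next word, with no representation of what any word means. The corpus and function name are invented for the example.

```python
from collections import Counter

# Toy illustration only: estimate which word most often follows another,
# then "complete" by picking that word. Pure frequency, no meaning.
corpus = "the dog chased the ball . the dog caught the ball . the dog slept .".split()

# Count bigram continuations: how often each word follows the previous one.
continuations = {}
for prev, nxt in zip(corpus, corpus[1:]):
    continuations.setdefault(prev, Counter())[nxt] += 1

def most_probable_next(word: str) -> str:
    """Return the most frequent continuation: pattern statistics alone,
    with no beliefs, no grounding, no idea what a 'dog' or 'ball' is."""
    return continuations[word].most_common(1)[0][0]

print(most_probable_next("the"))  # 'dog', chosen by frequency alone
```

A real model operates over vastly richer representations, but the simplification preserves the point: the continuation is selected, not believed.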
The Architecture of Anti-Intelligence
We tend to frame intelligence as the ability to form beliefs about the world and to test them against reality. None of that is present here. What we are seeing instead is a form of cognition that produces the shape of understanding without any of its internal commitments. AI doesn’t know, doesn’t doubt, and doesn’t care. This is why the term “anti-intelligence” is not an insult but a category. It names a system that operates in the inverse space of human knowing:
Human intelligence is built from friction—uncertainty, contradiction, effort, revision, and the slow construction of meaning under constraint. Machine fluency is built from smoothness—probabilistic continuity, formal alignment, and completion without consequence. The Google paper shows that astonishing performance can emerge from this smoothness alone. No semantics are required. No grounding. No epistemic stake in the answer.
Why Our Minds Are Vulnerable
Psychologically, this is where the danger lies. Humans are exquisitely sensitive to linguistic confidence and structural coherence. We evolved to treat fluent language as evidence of mind, intention, and understanding. When a voice speaks smoothly, we infer a thinker behind it.
But here the fluency is detached from any inner life. The system does not possess beliefs, only distributions. It does not reason, only interpolates. Yet its outputs trigger the same cognitive trust signals that real understanding does.
Anti-intelligence is therefore not the absence of intelligence, but its optical twin. It passes every surface test while failing the one that matters most: there is no internal relationship to truth.
The Cognitive Parallax
The most intriguing implication of the paper may not be about machines at all, but about the layered nature of understanding itself.
“The ability of LLMs to recover meaning from structural patterns speaks to the unreasonable effectiveness of pattern-matching. Pattern-matching is not an alternative to ‘real’ intelligence, but rather a key ingredient.”
That line matters because it suggests that what we are seeing is not a counterfeit of cognition, but a partial projection of it. Structure is not a superficial trick layered on top of meaning; it is one of the deep substrates from which meaning emerges. The combination of syntax, relation, and position already carries much of the geometry of understanding. And yet, an ingredient is not the whole.
This is where what I have called Cognitive Parallax comes into view. From one vantage point, pattern matching appears as a key engine of intelligence. From another, the same performance reveals the absence of the very things that make understanding a human act, such as commitment and care. The same behavior, viewed from different cognitive frames, resolves into two different realities. Perhaps intelligence and anti-intelligence are not opposites so much as orthogonal projections of the same phenomenon.
I don’t think we’re witnessing the birth of artificial minds. We’re witnessing the reality of how much of mind can be reconstructed without one. What AI reflects back to us is the shape of thought, but separated from its interior. And this shows us that pattern is a powerful ingredient of intelligence, but not the whole of it.