For a while now, I’ve been exploring a set of related observations about what artificial intelligence seems to be doing to human thinking. I’ve written about what I call anti-intelligence, the growing gap between fluency and comprehension. I’ve described borrowed certainty, the ease with which we absorb confidence we did not earn. I’ve even argued that human and AI cognition operate on orthogonal axes, not as competitors on a single line, but as fundamentally different modes of knowing.
Each of these ideas captured something that feels real. Yet to me they remained mired in both complexity and ambiguity. I kept wondering about a deeper pattern underneath them, maybe even a unifying mechanism.
Then something struck me, an idea drawn from Einstein's perspective on gravity and the way his theory changed the understanding of gravitational force itself. My thesis is that AI isn't simply adding new cognitive tools to our lives. It is actually reshaping the conditions under which thinking itself takes place. It is bending the terra cognita, the cognitive ground on which understanding forms.
Losing Our Cognitive Friction
Think about what it takes to genuinely understand something complex. You encounter conflicting explanations, and you struggle with confusion longer than you’d like. You try to articulate what you know and discover where it falls apart. You revise your thinking, commonly more than once. The process is a curious constellation of difficulty, struggle, and even joy.
And this friction isn’t merely incidental—it’s how cognitive depth forms.
Today, for its users, AI collapses much of that human cost. A question is asked and an appropriate answer appears almost instantly. There’s no real struggle or contradiction to wrangle with. And from the user’s perspective, this may feel like progress. Do we really need to tolerate uncertainty when clarity appears with the click of a button?
But when this cognitive friction disappears, something begins to shift. The paths through thinking that once required effort and struggle just don’t feel necessary. Accepting the fluent answer becomes the default, and moving on feels reasonable.
It’s key to recognize that this isn’t necessarily laziness in the traditional sense. It’s cognition adapting to a changed environment.
The Geometry Beneath Judgment
This shift helps explain why several patterns I’ve written about tend to appear together. We are very attuned to linguistic coherence as a signal of understanding. When something "sounds right," it feels grasped. AI can produce this feeling of coherence at scale, but it’s detached from the lived processes that once gave it weight.
Confidence now follows expression rather than careful evaluation. Conclusions arrive quickly, and judgment adjusts to the new pace. Human and AI cognition still operate in different geometries, though—one shaped by causality and uncertainty, the other by statistical computation. The trouble starts when we try to use the map of one space to navigate the other.
Curvature, Not Force
This is where the idea of curvature becomes fascinating and relevant to me.
In modern physics, gravity is no longer understood as a force pulling objects together. It is the bending of spacetime itself. Objects follow the straightest available paths through that curved space. From the inside, it feels like force, but from the outside, it is geometry.
It’s my contention that AI functions in a similar way within cognition. It doesn’t persuade or compel us. Instead, it reshapes the environment in which reasoning happens. And what changes isn’t intent, but ease.
- Certain conclusions become easier to reach, simply because they are already well-formed.
- Some explanations feel more natural, because they arrive fluent and complete, whether or not they are well thought out.
- Resolution comes sooner and follows the path of least cognitive resistance.
To me, nothing here feels coercive. From the inside, it still feels like choice—but choice, in a reshaped environment, can follow the contours of this terrain.
Detecting the Bend
This helps explain why maintaining distance between human and AI perspectives matters so much.
I’ve previously described this as cognitive parallax, the depth perception that emerges when a problem is viewed from fundamentally different cognitive positions. Parallax matters because it allows us to notice curvature at all. When you are moving through bent space, everything feels normal. Only by reintroducing friction or contradiction do you sense that the cognitive terrain has changed.
This is why sustained, independent thinking can feel effortful after extended AI use. You're not becoming less capable; you're working uphill in a landscape that no longer demands that effort by default. Meanwhile, the path to what feels "good enough" runs smoothly downhill, often disguised as genuine understanding.
The Cognitive Reorientation
What makes this shift especially powerful is how it unfolds. There’s no moment when we consciously decide to trust techno-fluency over struggle. The environment simply makes certain cognitive moves easier than others, and human cognition adapts accordingly. Over time, the effects become visible. A familiar triad begins to emerge.
- Writing grows fluent but increasingly interchangeable, polished in form yet less individualized.
- Decisions feel confident even as the depth of analysis decreases.
- Creative work appears finished without passing through genuine human exploration and imagination.
None of this feels like failure from the inside—it feels like efficiency. And that is precisely what makes the shift so difficult to notice. People who frequently use AI are not losing the capacity for deep thought but adapting to conditions where deep thought requires swimming against a current that did not previously exist.
Choosing How We Adapt
Seeing this shift as geometric rather than personal changes the conversation. The issue isn't willpower or discipline; it's recognizing that the cognitive terrain has changed, and that what feels natural may no longer lead where we think it does.
That recognition points to a different response: not restraint, but a sort of counter-curvature. We need to reintroduce friction where it has diminished or even vanished, slow the rate of acceptance, and make uncertainty visible again. In essence, we need to create conditions where inquiry is rewarded, not bypassed.
The curvature is already at work, and it's shaping how many of us think. We're adapting whether we notice it or not. But can we adapt consciously, preserving forms of thought that still require resistance, or will we just continue along the smoothest paths, wherever they might take us?