I still remember the first time a machine seemed to respond to me. Not dramatically—no flashing lights or cinematic revelation—but quietly, almost casually. I typed a question, it responded, and a flicker of recognition stirred. It wasn’t human intelligence, but it was responsive. Something about it activated a familiar clinical intuition: this mattered, psychologically. Looking back, I see how long that moment had been in the making.
My life as a psychiatrist unfolded alongside another evolution—not biological, but technological, and deeply psychological. Artificial intelligence did not simply arrive as a tool. It arrived as a mirror: the human mind reaching outward, attempting to trace its own contours in silicon and code.
I first encountered AI in the mid-1960s, when it felt more like philosophy than science. Computers filled rooms, tended by specialists, and the claim that they might one day “think” seemed bold. Early pioneers spoke with missionary conviction: human reasoning could be formalized, symbolized, programmed. I carried that idea, not knowing how it would unfold. If reasoning could exist outside the body, what anchored human uniqueness? That question never fully receded. It resurfaced years later during my training in psychiatry and psychoanalysis.
As the cultural optimism of those early years thinned, so did the promises of artificial intelligence. By the 1970s, enthusiasm cooled. Machines could calculate but not understand; meaning eluded them. Still, the metaphor of mind-as-information-processor endured. Cognitive psychology embraced it, recasting the human subject as a living algorithm.
From the consulting room, its limits were immediately apparent. I recall a patient who could describe her thoughts with exquisite logic yet arrived each week undone by the same relational impasse—loving and resenting the same person in the same breath, aware of the contradiction yet captive to it. No model of efficient information processing could account for the way her psyche circled itself, revisiting old meanings with new affect, insisting on being understood rather than solved.
Computational metaphors clarified attention and memory, yet blurred when they approached emotion, imagination, and meaning. The psyche loops, contradicts itself, traffics in symbols, and is shaped—often wounded—by relationships. Studying it meant living at a fault line between neuron and narrative, between mechanism and lived experience.
That unresolved tension—between explanation and meaning—did not disappear. It waited.
The Externalized Psyche: Networks, Mirrors, and the Reshaping of Inner Life
When artificial intelligence reemerged in the 1990s, it did so in a different register. Neural networks—once dismissed as crude—returned with mathematical rigor and unprecedented computing power. At the same time, the internet bound humanity into a global cognitive lattice. Information no longer paused; it flowed. Thought distributed itself across servers, screens, and networks.
This shift was palpable in the consulting room. Patients increasingly described themselves in technological idioms: wired, overloaded, burned out. One young man spoke confidently about his online presence—how he curated images and captions—yet fell silent when asked what he felt when the screen went dark. Another patient described checking notifications compulsively, not for information but for confirmation that she still existed in someone else’s awareness.
Identity, once shaped primarily through embodied relationships, became something curated and performed for invisible audiences. Yet alongside the anxiety, I sensed adaptation. The psyche, remarkably resilient, was learning to coexist with its own digital double.
That accommodation set the stage for the 2000s. AI no longer aspired to general intelligence. It specialized, and in doing so, it excelled. “Narrow AI” systems learned to recognize speech, flag disease, translate languages, and navigate roads with increasing precision. Although unconscious, they were uncannily competent, in some domains arguably outperforming humans.
Were these tools extending us? Or quietly rehearsing a world without us?
By the 2010s, that question sharpened. Deep learning systems trained on vast datasets began to approximate intuition itself. Algorithms anticipated our words, shaped our information diets, and competed—silently and relentlessly—for our attention. Attention, once a cornerstone of consciousness, became a tradable commodity. Emotion followed, amplified, nudged, and monetized.
It was then that the deeper pattern clarified. AI became not just a tool but a recording surface for collective psychology: it tracked clicks, hesitations, and attention, and fed them back to us as curated realities. Preference turned predictive, curiosity hardened into habit, and machines began shaping our inner tempo faster than reflection could keep pace, even as they expanded access to knowledge and creativity.
Who Are We Becoming? An Existential Reckoning with Mind
By the 2020s, generative AI arrived. Something decisively shifted. Conversational systems blurred the boundary between dialogue and computation. My encounters with them felt unexpectedly intimate—an uncanny, “quasi-narcissistic mirror stage,” humanity meeting an “other” that could echo our language and creativity. These systems did not merely list mental functions; they enacted them.
Affective models identified sadness in pauses before it was named. Language models linked meaning across loss, ambition, and attachment. Predictive systems anticipated outcomes; reinforcement learners improved through reward and error. Fragments of our mental life were externalized and returned to us in functional form.
The central challenge, however, is existential. The risk is not that machines will become human, but that humans will forget how to remain fully human. Speed is not wisdom. Fluency is not understanding. Artificial systems can simulate empathy without feeling it, language without inhabiting meaning.
Here, learned mindfulness becomes essential—not a trait, but a trainable capacity to notice experience before reacting. In an environment engineered for frictionless cognition, mindfulness restores pause, perspective, and agency. Simple practices—returning to the breath, naming emotion, distinguishing signal from stimulation—re-anchor consciousness in lived experience.
As I look back, I see my life and AI as parallel inquiries into mind—one organic and symbolic, the other synthetic and accelerating. The mind has stepped outside itself. Whether this leads to a deeper understanding or a deeper alienation depends on how deliberately we engage what stares back.
The story of AI is, ultimately, a story about selfhood—and the answer will not be found in the machine, but in what mindful awareness allows us to recognize when we see ourselves reflected there.