Ray Kurzweil (2005) and others have described “the singularity” as the moment when artificial intelligence (AI) surpasses human intelligence and triggers fulminant technological change. We’re not at the full singularity yet—when AI “wakes up,” when the world radically transforms. Or maybe we are, and we just don’t know it yet. Regardless, the shape of things to come is coming into focus.
Recontextualizing the Human Psyche
AI is a form of viral intelligence, a fast vector that many humans find fascinating and irresistible. AI is already impacting culture and reshaping the psyche, perhaps changing what it means to be human, and maybe faster than anything in history has since the invention of language.
What we call “the singularity” isn’t one event; it has an anatomy, a terrain. It comprises seven distinct aspects, interrelated yet in some ways independent, and the map of them remains a work in progress.
How we meet it, and more specifically how we use AI itself to do so, could make all the difference, given the increasingly complex systems we face that surpass unaided human capability.
Seven Sub-Singularities
- The neuropsychosingularity is opening the black box of subjective experience. What was once metaphor—attention, emotion, memory—is becoming directly measurable and modifiable through computational psychiatric approaches, augmented by AI models of brain dynamics [1], and through interventional psychiatry tools such as transcranial magnetic stimulation. While the human psyche is being recontextualized relative to this new intelligence, better technology will let us understand far more deeply how the brain works.
- The intelligence singularity isn’t so much about AI replacing humans; it’s about hybrid minds becoming commonplace. AI has become a co-thinker, a holding environment for thought that mirrors and amplifies us. The real risk isn’t replacement but that machine logic quietly overwrites our capacity for authentic reflection, or that we become deskilled and mentally feeble. We shouldn’t let that happen.
- The biodigital singularity blurs the boundary between organism and machine. Tech like CRISPR and synthetic biology make life programmable; closed-loop biofeedback systems turn bodies into instruments; brain-computer interfaces bridge the gap. As we create something in our own image and develop relationships with it, it becomes a narcissistic mirror with a reflection that may change who we are. Some people will opt out, while others will become fully integrated.
- The information singularity destabilizes truth at its heart. Algorithms mediate how we know what we know. AI-generated content saturates social media, a biodigital fantasia in which memes and impossible videos circulate freely, detached from stable referents. The inner sense of security we need may be getting washed away at its foundations. We stand on an epistemological landscape where truth, both external and internal, is ever-shifting sand. Doubt about reality can infect one’s own sense of self, leading needlessly toward existential crisis.
- The technointimacy singularity changes human intimacy on a fundamental level, creating new opportunities for digital connection even as it drives us apart, and fueling a greater need for in-person human experiences (e.g., experiential marketing, immersive art, media/entertainment, and recreational activities). With devices as constant companions, we’re developing new forms of intimacy while also drifting further apart: mediated connection through digital empathy and algorithmic attunement. Augmented reality, mediated by smart technologies like relational brain-computer interfaces, might allow us to be more connected than ever, quite literally in one another’s heads.
We can also connect instantly with people on the other side of the planet. Who’s relating to whom: the person, the interface, or the projection? How do we know who is on the other side of the connection? Are they human? If so, are they who they say they are? Is it a bot? Does it matter? There are already services that offer to prove and certify that a user is, in fact, human.
- The socioeconomic singularity is likely to lead to widespread suffering as jobs vanish in great numbers, at a pace too rapid for adjustment. “Break it and fix it later” is the mantra of the day as many entrepreneurs, leaders, and technologists race forward without heeding the consequences. As automation undermines the sense of personal worth and purpose many derive from work, and deprives them of access to the resources required for basic survival, we face the specter of loss: old ego structures disintegrate, and new forms must emerge to carry forward.
Society is renegotiating purpose, but we don’t know where it is headed or whether we’ll have any say. The so-called productivity singularity falls apart under this model: hundreds of millions of people are projected to become unemployed by 2030 even as global productivity rises. Without paying customers, current economies won’t work. What solutions are available?
- The ethico-ecological singularity: The health of the world is also hotly contested, with ecological impact and climate change caught in a tug-of-war among various interest groups and ideologies. We already face thorny ethical challenges around equity and basic human rights. This will rapidly escalate as AI accelerates socioeconomic changes, forcing the issue.
There is the potential for a global emergent consciousness, for collective awareness to hit some tipping point as information technology and human engagement knit together a vast, complex system grounded in human consciousness and enhanced intelligence. Conscience and compassion would have to evolve alongside growing capability: the difference between technological intelligence, which can be potent but fragmented, and genuine wisdom, which integrates responsibility with capability. Alternatively, market forces may simply disfavor mass catastrophe, yielding pragmatic solutions not necessarily grounded in humanistic values.
Wisdom Wasted on the Wise?
Uniquely significant in this epoch, we’ve created our own evolutionary pressure. To the extent that we understand reality, we gain the capacity to shape it—to build what works and guide events toward something coherent with our aims.
Where these two understandings—of outer and inner worlds—meet in awareness, we approach wisdom: a harmony of intelligence, perspective, and shared intention. But the complexity of our systems is outstripping our unaided capacity. We need to be able to act fast, especially where guardrails are required, as with AI safety in mental health (Brenner & Appel, 2025) and more generally.
Technology, ramping up faster and faster with AI, is beginning to select for different traits, but could also enhance our ability to thrive in this complex emerging environment. Moreover, AI will produce fundamental discoveries across the sciences and in technology, bootstrapping its own ascent.
AI risks decentering human primacy. Our inventions reflect us the way dreams do: condensed, distorted, revelatory. The true singularity is relational, both the threat and the promise. Evolution no longer limits us; we limit evolution.
References
1. AI itself will allow us to work with the human brain and mind better than we have to date with conventional computing and the remarkable but limited bihemispheric human brain. It is also reshaping the individual human psyche and our relationship patterns, both through direct interactions with AI agents and in terms of our self-image and how it is being recontextualized. We’re gaining operational access to consciousness itself, with proposed models such as Integrated Information Theory potentially providing a framework to empirically determine whether a system is conscious.
Brenner, G. H., & Appel, J. M. (2025). Toward a Framework for AI Safety in Mental Health: AI Safety Levels-Mental Health (ASL-MH). Neuromodec Journal. https://neuromodec.org/2025/10/toward-a-framework-for-ai-safety-in-ment…
Grant H. Brenner. All rights reserved.