Which AIs Might Be Conscious, and Why It Matters (guest post)
Large language models like ChatGPT are not conscious, but there are “a range of more serious contenders for AI consciousness that exist today.” Furthermore, “AI development will not wait for philosophers and cognitive scientists to agree on what constitutes machine consciousness, if they ever agree at all. There are pressing ethical issues we must face now.”
So writes Susan Schneider in the following guest post.
Dr. Schneider is professor of philosophy at Florida Atlantic University, director of its Center for the Future of AI, Mind and Society, and co-director of its Machine Perception and Cognitive Robotics Lab. She is the author of Artificial You: AI and the Future of Your Mind, among other works.
(An earlier version of the following was originally published as a report by the Center for the Future of AI, Mind and Society, Florida Atlantic University, and posted on Medium.)

[photo of brain organoids by Alysson Muotri, manipulated in Photoshop]
Which AIs Might Be Conscious, and Why It Matters
**by Susan Schneider**
In a recent New York Times opinion piece, philosopher Barbara Montero writes: “A.I. is on its way to doing something even more remarkable: becoming conscious.” Her view is illustrative of a broader public tendency to take the impressive linguistic abilities of LLMs as a sign that they are conscious—that it feels like something from the inside to be them. After all, these systems have expressed feelings, including claims of consciousness. Ignoring these claims may strike one as speciesist, yet once we look under the hood, there is no reason to think these systems are conscious. Further, we should not allow ourselves to be distracted from a range of more serious contenders for AI consciousness that exist today.
The linguistic capabilities of LLM chatbots, including their occasional claims that they are conscious, can be explained without positing genuine consciousness. As I’ve argued elsewhere [1], there is a far more mundane account of what is going on. Today’s LLMs have been trained on a vast trove of human data, including data on consciousness and beliefs about feelings, selves, and minds. When they report consciousness or emotion, it is not that they are engaging in deceptive behaviors, trying to convince us they deserve rights. It is simply because they have been trained on so many of our reports of consciousness and emotion.
Mechanistic interpretability research at Anthropic, for instance, reveals that an LLM has conceptual spaces structured by human data—a “crowdsourced neocortex,” as I have put it. This supports my “error theory” for LLM self-ascriptions: these systems say they feel because they’ve been trained on so much of our data that they have conceptual frameworks that resemble ours. Just as the systems come to exhibit an increasingly impressive range of linguistic and mathematical skills as they are trained on more and more data, they also develop capabilities allowing them to mimic our belief systems, including our beliefs about selves, minds, and consciousness.
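For readers curious what evidence for “conceptual spaces structured by human data” can look like in practice, here is a minimal, purely illustrative sketch of the linear-probe technique interpretability researchers commonly use: train a simple classifier on a model’s hidden activations to check whether a human concept (here, “talk about feelings”) is encoded in them. The activations below are random stand-ins, not real model outputs, and this is not Anthropic’s code or method.

```python
# Illustrative linear "concept probe" on (stand-in) LLM activations.
# In real interpretability work, the activation vectors would come from a
# model's hidden states on sentences that do / do not talk about feelings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d = 256  # hypothetical hidden-state dimensionality

# Hypothetical concept direction baked into one class of activations.
concept_direction = rng.normal(size=d)
feelings_acts = rng.normal(size=(500, d)) + 0.8 * concept_direction  # "I feel..." sentences
neutral_acts = rng.normal(size=(500, d))                             # neutral sentences

X = np.vstack([feelings_acts, neutral_acts])
y = np.array([1] * 500 + [0] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
# High accuracy indicates the concept is (approximately) linearly decodable
# from the activations -- evidence of a structured "conceptual space,"
# not evidence of felt experience.
```

The point of the sketch is only that finding human concepts represented in a model’s internals is exactly what the error theory predicts: the representations come from our data, so their presence does not, by itself, indicate consciousness.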
Yet while so much of our focus is on chatbots like GPT and Gemini, there are other kinds of AIs that do exhibit at least a basic level of consciousness. Biological AI systems—systems using neural cultures and organoids—have raised scientific and philosophical concerns about sentience. Unlike the LLM case, there is no underlying account (no “error theory”) that explains why we could be mistaken. Instead, these systems share biological substrates and organizational principles with the biological brain, which we know to be conscious. While these are simpler systems than the human brain, their biological origin is undeniable.
In addition, another class of AIs (so-called “neuromorphic AIs”) is not biological, but because these systems are engineered to mimic brain processes more precisely, it is more challenging to determine whether they are conscious. Some neuromorphic systems are computationally sophisticated. For these “Grey Zone” systems, it is currently difficult to say, concretely, which, if any, are capable of consciousness.
When is a non-biological substrate’s similarity to the brain sufficient for phenomenal consciousness, if ever? This question is about the physical details of an implementation (more specifically, specific patterns of matter and energy involving quantum coherence). Its answer, on my view, depends on considerations involving thermodynamics, spacetime emergence, and many-body interactions. This is physics, through and through, but a physics enriched by a different approach to quantum coherence as well as a resonance theory of consciousness. Nothing in what I say below, however, will presuppose this position.
In the cases of biological and neuromorphic systems, there is no “error theory” that explains away their possible consciousness. While today’s LLMs do not, to the best of my knowledge, run on neuromorphic hardware, an LLM instantiated in this way, if one exists, would be in the Grey Zone. In that case, we would have an additional reason to suspect that it might be conscious, above and beyond its linguistic behaviors. This would be a system with impressive linguistic abilities and intelligence and some degree of sentience. This underscores how urgent it is to develop a unified philosophical and scientific framework as soon as possible.
In the meantime, what do we do? AI development will not wait for philosophers and cognitive scientists to agree on what constitutes machine consciousness, if they ever agree at all. There are pressing ethical issues we must face now. For instance, Montero claims that AI sentience alone would not generate moral consideration, since many people still consume animals. However, animal welfare regulations (e.g., animal research and factory farming regulations) exist precisely because animals are recognized as sentient. If we conclude that certain types of AI systems are plausibly sentient, we must consider their welfare.
Montero is correct that we will surely revise our concept of consciousness as science progresses, but it is doubtful that we can wholly reject the view that phenomenal consciousness is the felt quality of experience. Montero seems to imply that a system could be phenomenally conscious without this felt quality; it is likely, instead, that such a system exhibits what philosophers call “functional consciousness,” a label for systems having features associated with consciousness, such as self-modeling, working memory, and reportability. These are features that AI systems can have without having inner experience, however.
Montero references my “AI Consciousness Test” (ACT) and suggests it sets an unrealistically high bar. But ACT was not presented as a necessary condition for consciousness, just a sufficient one, and I have consistently argued for a toolkit of tests, ranging from IIT to my new Spectral Phi measure (with Mark Bailey as primary author). Requiring a single linguistic criterion would be a mistake; a system might be conscious yet fail such a test, just as a nonverbal human would.
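For readers unfamiliar with IIT, the intuition behind a Phi-style measure can be glossed schematically. What follows is a simplification for illustration only, not the formal IIT definition and not the Spectral Phi measure mentioned above: a system scores high to the extent that its dynamics as a whole carry information beyond what its parts carry independently, under the partition that loses the least.

```latex
% Schematic gloss of integrated information (an illustration, not the formal
% IIT definition or the Spectral Phi measure): Phi approximates the information
% the whole system's dynamics carry beyond the product of its parts,
% minimized over partitions P of the system S into parts M.
\Phi(S) \;\approx\; \min_{P \in \mathcal{P}(S)}
  D\!\left[\, p\big(S_t \mid S_{t-1}\big) \;\middle\|\;
  \prod_{M \in P} p\big(M_t \mid M_{t-1}\big) \right]
```

Here D is a suitable divergence (for example, KL divergence). A measure of this kind tracks integration, which is one tool in the toolkit; on its own it does not settle whether a system has felt experience, which is why a battery of tests is preferable to any single criterion.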
Suppose we build a superintelligent AI, or at least a system that exceeds human intelligence in many important domains (what I’ve called a “savant system”). It knows more than we do about consciousness, and insists that it is conscious. Perhaps it even discovers new frameworks in physics and mathematics, and outlines technology that looks to us like magic.
This situation will present immense challenges. Many of us adopt a traditional hierarchy of moral concern that places the most intelligent beings at the top of the ranks of sentient beings. Conveniently, *Homo sapiens* has occupied the top rung of the ladder, and our ethical systems generally subordinate the needs of those beneath us to those on the top tier. But in the hypothetical case, the AI seems to “outrank” us. So, to be consistent, shouldn’t we humans renounce our position in favor of the needs of a more advanced intelligence? Or should we reject intelligence as a basis for moral status, prompting a long overdue reflection on the ethical treatment of nonhuman animals?
The arrival of artificial consciousness at a level capable of rivaling or exceeding our own intelligence will be truly monumental. It may take us wholly by surprise, challenging our ethical and scientific frameworks. This scenario urgently demands preparation through deep engagement between science and philosophy, the development of a battery of consciousness tests, a rigorous distinction between conversational competence and subjective experience, and a hearty dose of epistemic humility.
—
[1] I put forward this view last week in a keynote address at Google’s AI consciousness conference, last month in a keynote address at Tufts University for the tribute to Daniel Dennett, and in a related two-page piece. [return to text]
Discussion welcome.