Guest Essay
Nov. 8, 2025
Credit: Jon Han
Barbara Gail Montero
Dr. Montero is a philosophy professor who writes on mind, body and consciousness.
Not long ago, A.I. became intelligent. Some may dismiss this claim, but the number of people who doubt A.I.’s acumen is dwindling. According to a 2024 YouGov poll, a clear majority of U.S. adults say that computers are already more intelligent than people or will become so in the near future.
Still, you might wonder, is A.I. actually intelligent? In 1950, the mathematician Alan Turing suggested that this is the wrong question to ask because it is too vague to merit scientific investigation. Rather than try to determine whether computers are intelligent, he argued, we should see if they can respond to questions in a manner indistinguishable from that of human beings. He saw this test, now known as the Turing test, not as a benchmark of computer intelligence but as a more pragmatic substitute for that benchmark.
Instead of presuming to define intelligence and then asking whether A.I. meets that definition, we do something more dynamic: We interact with increasingly sophisticated A.I., and we see how our understanding of what counts as intelligence changes. Turing predicted that eventually “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
Today we have reached that point. A.I. is no less a form of intelligence than digital photography is a form of photography.
And now A.I. is on its way to doing something even more remarkable: becoming conscious. This will happen in the same way it became intelligent. As we interact with increasingly sophisticated A.I., we will develop a better and more inclusive conception of consciousness.
You might object that this is a verbal trick, that I’m arguing that A.I. will become conscious because we’ll start using the word “conscious” to include it. But there is no trick. There is always a feedback loop between our theories and the world, so that our concepts are shaped by what we discover.
Consider the atom. For centuries, our concept of the atom was rooted in an ancient Greek notion of indivisible units of reality. As late as the 19th century, physicists like John Dalton still conceived of atoms as solid, indivisible spheres. But after the discovery of the electron in 1897 and the discovery of the atomic nucleus in 1911, there was a revision of the concept of the atom — from an indivisible entity to a decomposable one, a miniature solar system with electrons orbiting a nucleus. And with further discoveries came further conceptual revisions, leading to our current complex quantum-mechanical models of the atom.
These were not mere semantic changes. Our understanding of the atom improved with our interaction with the world. So too our understanding of consciousness will improve with our interaction with increasingly sophisticated A.I.
Skeptics might challenge this analogy. They will argue that the Greeks were wrong about the nature of the atom, but that we aren’t wrong about the nature of consciousness because we know firsthand what consciousness is: inner subjective experience. A chatbot, skeptics will insist, can report feeling happy or sad, but only because such phrases are part of its training data. It will never know what happiness and sadness feel like.
But what does it mean to know what sadness feels like? And how do we know that it is something a digital consciousness can never experience? We may think — and indeed, we have been taught to think — that we humans have direct insight into our inner world, insight unmediated by concepts that we have learned. Yet after learning from Shakespeare how the sorrow of parting can be sweet, we discover new dimensions in our own experience. Much of what we “feel” is taught to us.
The philosopher Susan Schneider has argued that we would have reason to deem A.I. conscious if a computer system, without being trained on any data about consciousness, reports that it has inner subjective experiences of the world. Perhaps this would indicate consciousness in an A.I. system. But it’s a high bar, one that we humans would probably not pass. We, too, are trained.
Some worry that if A.I. becomes conscious, it will deserve our moral consideration — that it will have rights, that we will no longer be able to use it however we like, that we might need to guard against enslaving it. Yet as far as I can tell, there is no direct implication from the claim that a creature is conscious to the conclusion that it deserves our moral consideration. Or if there is one, a vast majority of Americans, at least, seem unaware of it. Only a small percentage of Americans are vegetarians.
Just as A.I. has prompted us to see certain features of human intelligence as less valuable than we thought (like rote information retrieval and raw speed), so too will A.I. consciousness prompt us to conclude that not all forms of consciousness warrant moral consideration. Or rather, it will reinforce the view that many already seem to hold: that not all forms of consciousness are as morally valuable as our own.
Barbara Gail Montero is a professor of philosophy at the University of Notre Dame and the author of “Philosophy of Mind: A Very Short Introduction.”