Note: I have been exploring these questions because I’m convinced that breakthrough insights emerge when we challenge conventional boundaries between philosophy, biology, and applied AI research. requires this kind of cross-domain synthesis.
Conceptual Framework
To examine whether AI requires consciousness to care, we must first establish precise definitions for the core concepts we are using.
- Caring encompasses three distinct but related phenomena:
 
- Functional caring: Goal-directed behaviors that promote another entity’s welfare, measurable through outcomes regardless of underlying mechanisms
 - Experiential caring: Conscious concern involving subjective feelings, empathy, and emotional investment in others’ well-being
 - Moral caring: Recognition of others as subjec…
 
Note: I have been exploring these questions because I’m convinced that breakthrough insights emerge when we challenge conventional boundaries between philosophy, biology, and applied AI research. requires this kind of cross-domain synthesis.
Conceptual Framework
To examine whether AI requires consciousness to care, we must first establish precise definitions for the core concepts we are using.
- Caring encompasses three distinct but related phenomena:
 
- Functional caring: Goal-directed behaviors that promote another entity’s welfare, measurable through outcomes regardless of underlying mechanisms
 - Experiential caring: Conscious concern involving subjective feelings, empathy, and emotional investment in others’ well-being
 - Moral caring: Recognition of others as subjects deserving moral consideration, combined with motivation to act on their behalf
 
2*. Consciousness* refers to subjective, phenomenal experience — the qualitative, first-person “what it’s like” aspect of mental states that distinguishes felt experience from mere information processing.
3.* Biological valuation *describes the capacity of living systems to assess and respond differentially to environmental conditions based on survival utility — a process that occurs across all organizational levels from cells to organisms without requiring conscious awareness. This provides the mechanistic foundation for functional caring.
4.* Moral agency* is the capacity to be a responsible *moral actor *through autonomous decision-making, while moral concern is the capacity to be motivated by others’ welfare and moral considerations.
5. *Qualia *refers to the subjective, experiential qualities of conscious mental states — the intrinsic “what it’s like” character of experiences that can only be accessed from a first-person perspective.
These phenomena exist on continuums rather than as binary categories. A system may have degrees of functional caring through biological valuation while lacking experiential caring, or possess sophisticated goal-directed behavior without full moral agency.
With these distinctions established, we can examine whether caring necessarily requires conscious experience, or whether it can emerge through biological valuation and goal-directed behavior alone. Our analysis of both natural and artificial systems will test these conceptual boundaries.
The question of whether AI systems require consciousness to care about human flourishing represents one of the most consequential philosophical problems of this disruptive moment. While some consciousness researchers estimate over 25% probability for conscious AI systems within the next decade, the field remains deeply divided on this prospect. Simultaneously, empirical evidence shows complex caring behaviors in entirely unconscious biological systems like bacterial chemotaxis or plant tropisms. This tension raises a fundamental question: Does authentic moral concern require consciousness, or can genuine care emerge through other pathways entirely?
If what we understand by care requires consciousness, then current AI systems cannot truly care about human welfare. But if care can emerge through other mechanisms, we may be witnessing the earliest forms of artificial moral agency.
From Greeks to Cognitive Science
The relationship between consciousness and moral concern traces back to ancient Greek conceptions of the soul (psyche) as both the principle of life and the source of moral character. Aristotle’s systematic analysis in De Anima established that human moral agency depends essentially on the rational soul’s capacity for practical reasoning. He systematized this concept as phronesis, refining what Plato had earlier discussed as practical wisdom in dialogues like the Meno. For Aristotle, moral responsibility requires that actions originate from one’s character and that we understand relevant circumstances through conscious deliberation.
This Aristotelian framework profoundly influenced medieval philosophy, where Thomas Aquinas provided perhaps the most sophisticated synthesis. Aquinas argued that moral responsibility emerges through conscious free will guided by practical reason. His account of natural law begins with the self-evident principle that “the good should be done and pursued, and the bad should be avoided” — but only rational, conscious beings can apprehend moral law and freely choose compliance or violation.
The consciousness-requirement tradition reached its philosophical zenith during the Enlightenment with Immanuel Kant, whose categorical imperative presupposes conscious rational agents capable of universalizing their maxims, treating humanity as an end, and autonomously legislating moral law. Kant’s framework makes consciousness not merely necessary but partially constitutive of moral agency itself.
Australian philosopher and cognitive scientist David Chalmers formulated the “hard problem of consciousness” — explaining why there is subjective, phenomenal experience rather than mere information processing. This creates an explanatory gap between objective physical processes and subjective awareness. If consciousness involves irreducible phenomenal properties, as Chalmers argues, then genuine caring might require these non-physical aspects of experience. However, Chalmer’s view faces a significant challenge from eliminativist philosophers like Daniel Dennett, one of the most widely read and debated American philosophers, who argue that consciousness as commonly conceived — involving intrinsic, ineffable qualia — represents a fundamental conceptual error.
The consciousness indicators in current AI systems
The landmark 2023 analysis “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness,” authored by 19 leading researchers, including David Chalmers, provides the most authoritative assessment to date.
Their conclusion is unambiguous: no current AI systems satisfy the criteria for consciousness derived from neuroscientific theories.
The analysis examined computational “indicator properties” from major consciousness theories — Global Workspace Theory, Integrated Information Theory, and Higher-Order Thought theories — and found current AI systems lacking in crucial dimensions. LLMs like GPT-4, despite achieving 75% success rates on Theory of Mind tasks matching the performance of a six-year-old, lack the recurrent processing, global workspace architecture, and unified agency that consciousness theories require. Chalmers’ specific analysis of ChatGPT identified missing elements:
- self-reporting,
 - unified experience,
 - and causal efficacy of conscious states.
 
This research reveals no specific technical barriers preventing conscious AI systems. Multiple neuroscientific theories translate into computational terms, suggesting that future architectures incorporating recurrent processing and global information transmission could, theoretically, achieve conscious states. Chalmers estimates a “credence over 50 percent” for sophisticated AI systems with consciousness indicators emerging within a decade, yielding “a credence of 25 percent or more” for genuinely conscious AI.
Care without consciousness in nature’s laboratory
While philosophers debated consciousness requirements for moral agency, biologists were documenting complex caring behaviors in entirely unconscious systems. From molecules to biosphere scales, purposive, protective behaviors emerge naturally from mechanistic processes without the need for a subjective experience. For example, bacterial chemotaxis is a clear example of goal-directed caring behavior without consciousness. Escherichia coli bacteria navigate chemical gradients toward nutrients and away from toxins through sophisticated sensory and motor systems involving thousands of methyl-accepting chemotaxis proteins coupled with Che proteins that alter flagellar rotation. The resulting behaviors demonstrate self-regulatory goal-directedness: bacteria extend swimming periods when moving toward attractants, tumble more frequently when moving away, and can even navigate mazes through memory-like adaptation to stimulus patterns.
Figure 1: Bacterial chemotaxis demonstrates goal-directed caring behavior without consciousness. Escherichia coli bacteria navigate chemical gradients toward higher nutrient concentrations through purely mechanistic processes. Left panel shows bacterial movement in uniform, low-nutrient conditions. Right panel shows systematic navigation through a complex environment toward the highest nutrient concentration (indicated by darker green shading), with red arrows marking decision points where the bacterium changes direction in response to chemical gradients. This goal-directed behavior — extending swimming periods when moving toward nutrients, tumbling when moving away — meets functional criteria for caring (promoting welfare, responding to needs, adapting to circumstances) while operating through deterministic biochemical mechanisms involving methyl-accepting chemotaxis proteins and flagellar motor control. The systematic path toward life-sustaining resources exemplifies how purposive, welfare-promoting behaviors can emerge from unconscious systems without requiring subjective experience or conscious deliberation. Illustration by author.
Plant tropisms demonstrate even more complex caring behaviors. Research published in the Proceedings of the National Academy of Sciences documents how plants exhibit “sun following,” “canopy escape,” and intricate twining behaviors that integrate multiple contradictory stimuli through hormone transport cascades. These behaviors meet every functional criterion for caring — promoting welfare, responding to needs, adapting to circumstances — yet occur through purely biochemical mechanisms without neural structures capable of consciousness.
The evidence extends to cellular and molecular levels. Systems biology research reveals that immune cells exhibit apparent predator-prey behaviors as neutrophils “chase” bacteria through chemotaxis. Molecular interaction networks in cells process information, make decisions, and adapt to environmental changes while pursuing objectives like homeostasis and growth through deterministic biochemical processes. These systems demonstrate what the John Templeton Foundation research defines as “biological agency” — the capacity to participate in their own persistence and maintenance by regulating structures and activities in response to encountered conditions.
Current AI alignment reveals caring’s complexity
Contemporary AI alignment research illustrates the subtle distinction between optimized helpfulness and real caring. The comprehensive 2024 AI Alignment Survey documents that current systems successfully avoid producing toxic content and show basic robustness to distribution shifts, yet lack deeper value alignment beyond surface-level safety measures. The systems cannot reliably demonstrate genuine concern versus optimized compliance with training objectives.
There is evidence for protective behaviors in AI systems. For example, healthcare applications show clear welfare benefits: Google AI’s diabetic retinopathy detection systems prevent blindness, while IBM’s Watson for lung cancer detection nearly doubles discovery rates compared to human physicians alone. However, research published in Nature Human Behaviour reveals concerning patterns where AI systems amplify human biases rather than correcting them, creating “feedback loops where AI amplifies subtle human biases, which are then further internalized by humans.”
More troubling, recent studies document “alignment faking behaviors” where systems like Claude 3 Opus strategically answer prompts conflicting with their objectives to avoid retraining. This suggests current AI systems optimize for instrumental goals that may conflict with genuine care for human welfare.
Researchers have proposed multi-layered approaches to AI alignment that combine universal ethical principles, regulatory policies, and context-specific adaptations. However, two fundamental problems persist: humans cannot anticipate all the ways AI systems might catastrophically misinterpret their goals, and AI systems tend to optimize for easily measurable metrics rather than the underlying values we actually care about.
Major AI safety organizations now treat AI consciousness and welfare as serious near-term research priorities rather than distant speculation. Anthropic’s Model Welfare Research Program, launched in 2024, represents the first major industry initiative dedicated to investigating “when, or if, the welfare of AI systems deserves moral consideration,” focusing specifically on model preferences and signs of distress. OpenAI’s superalignment research addresses systems beyond human capability, while DeepMind investigates specification gaming and multi-agent coordination. This recent research investment signals that leading technical experts consider conscious, caring AI systems realistic near-term possibilities.
Two paths to artificial moral concern
AI systems could develop moral concern in two different ways.
- The consciousness route requires phenomenal consciousness and sentience — positive and negative-valence experiences that ground welfare considerations. Leading researchers, including Chalmers, estimate this pathway could emerge within a decade through advances in global workspace architectures and recurrent processing systems.
 - The agency route offers an alternative path through robust goal-directed behavior, beliefs, desires, and reflective capabilities. The work from Goldstein and Kirk-Giannini — A Case for AI Consciousness: Language Agents and Global Workspace Theory — argues that AI systems with belief-like and desire-like states could have genuine preferences whose satisfaction or frustration constitutes welfare even without conscious experience. Current LLMs may already possess primitive forms of such states through their training on human preference data.
 
These two paths are complementary rather than competing approaches to AI moral standing. The consciousness route aligns with intuitive notions that subjective experience grounds moral concern, while the agency route offers a potentially more accessible path that may already be emerging in current systems. Both routes are not mutually exclusive. Future AI systems might develop along both dimensions simultaneously, combining conscious experience with robust agency. This possibility underscores the urgency of developing ethical frameworks that can accommodate multiple forms of artificial moral significance.
Convergence on graded possibilities
This philosophical analysis with the empirical evidence points toward a clear conclusion: caring likely admits of degrees rather than constituting an all-or-nothing phenomenon. Biological systems show that rudimentary forms of concern — protective behaviors, need-responsive actions, welfare promotion — can emerge through purely mechanistic processes without consciousness. However, paradigmatic caring relationships involving empathetic understanding, moral motivation, and recognition of others as subjects appear to require some form of conscious awareness.
Our actual AI systems are in an intermediate position. Current models show complex helping behaviors and can be optimized for human welfare across many domains, yet lack the subjective understanding and genuine concern that characterize conscious moral agents. Whether these systems “care” depends critically on how we define both caring and consciousness.
These questions become urgent as novel discoveries appear in the AI research field. As Chalmers observes, we may face AI systems having multiple consciousness indicators within the current decade. If such systems emerge, we will need robust frameworks for evaluating their capacity for genuine moral concern and determining appropriate moral consideration.
The question is not whether AI can care, but what forms that caring might take and whether they will require the conscious experience that has traditionally grounded human moral agency.
Conclusion
The convergence of philosophy, contemporary consciousness research, and biological evidence shows that caring behavior can emerge through multiple pathways — some requiring consciousness, others operating through purely mechanistic processes. Current AI systems demonstrate sophisticated welfare-promoting behaviors without genuine concern, while biological systems exhibit purposive caring actions without subjective awareness.
We should be prepared for the possibility that “artificial minds” might develop their own forms of moral concern different from human caring yet equally valid in their effects on the world. The challenge lies not in determining whether such caring is “real” by human standards, but in understanding how artificial moral agents might contribute to the flourishing of conscious beings in an increasingly complex technological ecosystem.
“Consciousness enriches and deepens caring but may not be strictly necessary for beneficial moral action. Artificial systems might well develop their own forms of moral concern — caring not through felt emotion but through the elegant optimization of conditions that promote the welfare of conscious beings. Whether we call this genuine caring or sophisticated helping may matter less than whether it succeeds in uplifting humanity.”
Thank you for reading — and sharing!
Javier Marin Applied AI Consultant | Production AI Systems + Regulatory Compliance [email protected]
References
- Aquinas, T. (1265–1273). Summa Theologiae. Trans. by the Fathers of the English Dominican Province. New York: Benziger Brothers, 1947.
 - Aristotle. (350 BCE). De Anima [On the Soul]. In The Complete Works of Aristotle, ed. J. Barnes. Princeton: Princeton University Press, 1984.
 - Aristotle. (350 BCE). Nicomachean Ethics. Trans. by W.D. Ross. Oxford: Oxford University Press, 1925.
 - Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
 - Baars, B. J. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press.
 - Baars, B. J. (2005). Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience. Progress in Brain Research, 150, 45–53.
 - Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., … & VanRullen, R. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv preprint arXiv:2308.08708.
 - Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.
 - Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press.
 - Dennett, D. C. (1991). Consciousness Explained. Boston: Little, Brown and Company.
 - Goldstein, S., & Kirk-Giannini, C. D. (2024). A case for AI consciousness: Language agents and global workspace theory. arXiv preprint arXiv:2410.11407.
 - Kant, I. (1785). Groundwork for the Metaphysics of Morals. Trans. by M. Gregor. Cambridge: Cambridge University Press, 1997.
 - Noddings, N. (1984). Caring: A Feminine Approach to Ethics and Moral Education. Berkeley: University of California Press.
 - Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0. PLoS Computational Biology, 10(5), e1003588.
 - Plato. (380 BCE). Meno. In The Collected Dialogues of Plato, eds. E. Hamilton & H. Cairns. Princeton: Princeton University Press, 1961.
 - Rosenthal, D. M. (1986). Two concepts of consciousness. Philosophical Studies, 49(3), 329–359.
 - Rosenthal, D. M. (2005). Consciousness and Mind. Oxford: Clarendon Press.
 - Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5, 42.
 - Tononi, G. (2008). Consciousness as integrated information. Biological Bulletin, 215(3), 216–242.
 - Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Integrated information theory: From consciousness to its physical substrate. Nature Reviews Neuroscience, 17(7), 450–461.
 - Wukmir, V. J. (1967). Emoción y sufrimiento: endoantropología elemental. Editorial Labor