Abstract
Practical wisdom (phronesis) is the context-sensitive capacity to skillfully achieve morally good outcomes in complex situations. Developing artificial practical wisdom offers a more ethically robust and achievable goal for AI development than artificial general intelligence (AGI). While identifying what is morally good in ethically complex situations remains challenging, grounding artificial practical wisdom explicitly in compassion effectively reduces ethical risks associated with AI-induced suffering, surpassing conventional alignment strategies like rule-based guardrails or predefined reward systems. As a theoretical foundation for the initial development of artificial practical wisdom, this virtue ethics approach integrates Aristotelian practical wisdom with cross-cultural perspectives on suffering and compassion from utilitarianism, the Capability Approach, Buddhism, and contemporary moral psychology. Operationalizing compassionate AI involves recognizing suffering, engaging empathetically, making context-sensitive moral decisions, and generating motivational responses. Compassionate AI not only serves as a foundation for broader practical wisdom development but also demonstrates immediate practical benefits, particularly in healthcare, by measurably improving patient outcomes, enhancing well-being, and reducing caregiver burdens.
1 Introduction
Tracing back at least to Aristotle, practical wisdom describes the ‘common sense’ that savvy adults use to bring about morally good, long-term outcomes by knowingly and skillfully making the right decision in complex situations. Many concerns about AI futures could be addressed by constructing artificial practical wisdom rather than artificial general intelligence (AGI). AI alignment and safety research strives to orient future AI in a positive direction with good values and to prevent harmful outcomes, given uncertainty about the future. Current approaches to AI alignment rely on external controls, like guardrails, reward systems, and behavioral constraints, that fail to address the fundamental challenge of creating AI systems capable of moral judgment in complex, uncertain situations. Virtue ethics has long recognized that in addition to good end goals (telos) prudent ethical decisions also require dispositions (or habits or character) and the capacity to mediate among possible good choices [1,2,3,4]. Although guardrails and other AI governance mechanisms have their place, constructing at least some AI to incorporate practical wisdom makes the desired prudential outcomes more likely.
Practical wisdom also requires conceptions of an ultimate Good to guide moral deliberation and decision making. One promising approach is to focus initially on a multi-faceted, cross-cultural understanding of compassion [5], which can directly address fears that AI or AGI would increase human and ecological suffering. Practical moral application, through bioethics [6] and other applied ethics, depends upon shared perspectives of the Good [7], or at least a process to reconcile conflicting perspectives [8]. Yet AI lacks the shared embodiment of humans that underlies human capacities to experience suffering and joy with another person and that underlies some ethical approaches. For example, while utilitarian ethics relies fundamentally upon eudaimonic happiness or hedonic pleasures, current AI lacks the capacity to recognize, much less utilize, those human-specific constructs to any real depth [9, 10]. Furthermore, some ethicists [11,12,13] have warned that if AI were to simulate empathy, it might manipulate people emotionally, though others have argued that a form of empathetic awareness is important for AI to function ethically in human-centered environments [14].
Developing artificial practical wisdom grounded in compassion provides a more ethically robust, coherent, and operational path to AI alignment and safety than current technical and governance approaches, reducing risks of suffering and enhancing human flourishing, particularly in sensitive domains such as healthcare. This approach draws on virtue ethics to create AI systems with stable internal dispositions constituting a robust and contextually sensitive framework oriented toward alleviating and addressing suffering. Practical wisdom (phronesis) can guide AI behavior more effectively than conventional alignment strategies, and grounding practical wisdom in compassion explicitly targets suffering, aligning with established virtue, utilitarian, and Buddhist ethical theories. Healthcare applications can immediately demonstrate the viability and measurable impact of compassionate AI, while laying a foundation for more complete accounts of artificial practical wisdom.
The following five sections explore how practical wisdom can be incorporated into AI through virtue ethics, with a particular emphasis on compassion as a guiding virtue. First, we frame concerns about AI risks and futures in terms of suffering and flourishing, examine the limitations of current approaches to AI alignment in addressing the role suffering plays in those concerns, and propose practical wisdom grounded in response to suffering as a novel approach to AI alignment. Then, we draw upon psychological and philosophical studies of practical wisdom, especially moral deliberation and a general theory of the Good, to ground practical wisdom in alleviating and addressing suffering and promoting flourishing as a tractable and valuable initial foundation for full artificial practical wisdom. In Sect. 4, we augment our examination of suffering-focused practical wisdom by drawing upon consequentialist, Buddhist, and scientific perspectives. In part because virtue ethics approaches can be harder to operationalize than deontological and consequentialist approaches, we examine the operationalization of compassion in AI systems, including implementation challenges, before concluding with a focused application in healthcare and implications for broader deployment.
2 AI alignment & human suffering
Many concerns about AI ethics, alignment, and safety are about the risk of human suffering. One's employment provides resources needed for daily living, and AI job displacement threatens livelihoods, family resources, and personal and professional identities [15]. AI systems trained on biased data can perpetuate and even amplify societal bias and discrimination, increasing the suffering of underrepresented groups [16,17,18]. The vast data-gathering capabilities of AI systems create new threats to privacy, and this loss of privacy affects self-integrity and the freedom to selectively disclose oneself, leaving people feeling vulnerable and exposed, which undermines self-esteem [19, 20]. Loss of control affects one's sense of self and self-efficacy and increases real threats of pain. Becoming dependent upon AI and losing personally or professionally meaningful skills affects identity and creates loss [21]. Risk of annihilation and other scenarios may ultimately eliminate pain but still create suffering through the destruction of selves, a concern voiced by those studying existential risks of AI [22]. Although uncertainty exists up to and including an AI singularity, the unforeseen negative consequences likely involve human suffering.
Many hopes for AI futures align with flourishing and alleviating suffering. Improvements to healthcare mitigate pain and suffering [23,24,25]. Education increases knowledge and skills, enabling self-efficacy to address situations that would otherwise create personal, family, and social suffering [26]. Progress in managing ecological change through advanced AI reduces human pain and suffering and their extension to other animals and ecological systems [27]. Imagined scientific innovations and economic growth usually highlight improved well-being and flourishing. Reframing the public and ethical discourse about AI acceleration and safety in terms of positive futures that orient toward flourishing and address or alleviate suffering clarifies that discourse and connects it to established ethical traditions rooted in ancient wisdom and deeply embedded cultural values.
Research in AI alignment aims to ensure AI systems act consistently with human intentions and values [28]. This includes aligning both the external behavior and internal objectives of AI systems with human norms, ethical principles, and societal goals. These efforts involve technical approaches (e.g., reinforcement learning from human feedback), philosophical approaches (e.g., value pluralism), and sociotechnical mechanisms (e.g., participatory design or regulatory oversight). Yet alignment is often constrained by ambiguity in value specification, context-sensitivity, and the possibility of emergent misalignment at scale. Practical wisdom offers a way to navigate these challenges by fostering context-aware, ethically sensitive judgment within AI systems.
Some alignment researchers distinguish between outer alignment, which evaluates how AI behavior aligns with human values and preferences, and inner alignment, which addresses whether an AI/ML system internally develops and consistently adheres to these values through training [28,29,30,31]. This distinction closely parallels the Aristotelian conception of practical wisdom, which involves skillfully navigating immediate decisions about proximate goods to achieve more distal moral goods. Most alignment approaches to date focus primarily on external behavioral controls or predefined reward systems, potentially neglecting the adaptive and context-sensitive judgment that practical wisdom requires [28, 32]. Practical wisdom demands that an AI system deliberately cultivates moral excellence through both general and situation-specific ethical discernment to make morally prudent and contextually responsive decisions in complex, uncertain environments. A pivot from aligning with static human values and preferences to developing artificial practical wisdom would make AI systems more ethical, responsible, and safe [32].
Current research has begun exploring intersections of AI and practical wisdom, though primarily focusing on the effect of AI on human practical wisdom and the intellectual components for AI rather than ethical cultivation. Scholars have examined the negative impact AI might have on human practical wisdom [33, 34], ways to make AI wiser [35,36,37], and how intellectual aspects of practical wisdom could be incorporated into AI [38, 39], but do not yet appear to examine how to develop artificial practical wisdom within the context of an ethical theory (virtue ethics). From a virtue ethics perspective, practical wisdom involves developing character traits and habits through deliberate practice, enabling one to consistently make skillful, morally informed decisions aligned with ethical norms (excellence). Habituation and ML training have notable parallels, and practical wisdom and AI alignment have analogous challenges posed by having complex ends. For practical wisdom, one deliberately develops stable internal dispositions and ethical skills (virtue) rather than merely ensuring correct external behaviors or reward-driven performance. An Aristotelian understanding of practical wisdom includes the ability to deliberate well and both general and situation-specific understandings of the Good. Deliberating well requires a context-aware, value-guided reasoning process to inquire about contingent matters within a person’s power. Within this framing, responsible AI systems can have arguably reasonable situation-specific understandings of the Good, and large AI systems (e.g., LLMs) are gaining the capacity to reason better, but current efforts are not driving toward AI that could deliberate well and have a general understanding of the Good. Thus, we will examine both aspects in more depth in the following section, particularly a four-component psychological model of moral deliberation (Sect. 3.1) and a general theory of the Good drawing upon approaches to suffering from multiple ethical traditions (Sect. 3.2).
Developing artificial practical wisdom is both harder and simpler than safe AGI. Deliberating well requires all the skills typically imagined for AGI, and moral reasoning in complex, real-world situations extends 'general' intelligence well past the scientific and technical disciplines currently envisioned [40]. Few people would question the achievement of AGI should current GenAI chatbots obtain practical wisdom, which also requires truthful, appropriate, and responsible interactions. It also seems unlikely that obtaining currently envisioned AGI would result in what ethicists and moral philosophers would consider artificial practical wisdom. Thus, artificial practical wisdom is harder than AGI. However, current AGI research is driven by architectural innovations [41,42,43] and an increasing range of frontier benchmarks [44,45,46,47] that still fall short of even attempting the critical thinking taught in the humanities [48,49,50], with AI safety researchers striving to contain an unpredictable future with complex values and an unknown technical horizon [51, 52]. Against that broad, open-ended, and ill-defined landscape, artificial practical wisdom provides a simpler, tangible, well-studied problem to solve that may well be easier than prior AI research accomplishments in natural language processing, computer vision, and generative AI. Thus, artificial practical wisdom is simpler than AGI. Constructing an expert in practical wisdom, i.e., an artificial phronimos in Aristotelian ethics, would deliver many of the hoped-for benefits of AGI systems and address many of their feared harms.
In summary, AI alignment traditionally focuses on ensuring external behaviors align with human values, but this approach struggles with ambiguity and lacks sensitivity to context. Ethical concerns about AI often manifest as threats of increased human suffering, and hopes for AI as promises to reduce it. One solution is to characterize harmful AI/AGI outcomes directly as suffering and to address suffering itself rather than proxies, such as measures of bias or existential risk, which may otherwise add complexity and obfuscation to the fundamental risk of increased suffering [53]. Ethically, by focusing on alleviating and addressing suffering as the core value for a general moral good, artificial practical wisdom offers both a clearer target for alignment than abstract AGI and a more direct response to ethical concerns about AI-driven harm. In particular, the approach directly addresses the risk that AGI (or artificial superintelligence) causes immense suffering [54] (outer alignment) and indirectly addresses existential risk (if annihilation is included in suffering). In terms of AI development, this approach is simultaneously more challenging than AGI (requiring sophisticated moral reasoning) and more tractable (providing well-defined ethical frameworks to implement).
3 Practical wisdom
Practical wisdom represents the capacity to make sound moral judgments in contextually-complex situations. Unlike theoretical knowledge or technical skill, practical wisdom involves the integration of moral perception, deliberation, motivation, and action guided by a conception of human flourishing. Drawing on philosophical and psychological frameworks, this section examines how practical wisdom can be conceptualized for AI systems.
Contemporary theories of practical wisdom vary in their emphasis on integration versus specialization of moral capacities. While some philosophers argue for a unified virtue that coordinates all moral excellences, others suggest that the complexity of moral life requires distributed ethical competencies. For artificial practical wisdom, we adopt a framework that combines both perspectives: developing specific moral capacities like compassion while maintaining an integrative architecture for coordinating ethical judgments.
3.1 Moral deliberation
Several philosophers and psychologists have developed theories of practical wisdom. Daniel Russell develops an Aristotelian approach to practical intelligence describing its dual role of correctly identifying the mean of the other virtues and then integrating them [55, 56]. From Russell's perspective, phronesis would be essential to the full development of compassion, and the present article focuses primarily on that phronetic elaboration of compassion, with brief consideration of what would be required for integration across multiple virtues. Christian Miller refers to Russell's and similar neo-Aristotelian approaches as a Standard Model of practical wisdom, while identifying the wide range of functions that standard practical wisdom must perform and arguing that this is too much for any one virtue, a position he calls the Eliminativist Model [57, 58]. An intermediate approach is the Aretai Model, which claims phronesis is a particular kind of expertise that fundamentally is what it means to be virtuous, though it manifests itself through clusters of individual moral virtues [59,60,61]. Although most of the subtleties are unnecessary for the current treatment, Miller's analysis of phronetic functions could guide specification and development of artificial phronesis, and the Aretai Model clarifies the unique aspect of integration in phronesis that sets it apart from other virtues. In the Aretai Model, phronesis is not just an additional virtue but a comprehensive capacity that integrates other virtuous characteristics (like justice and compassion) and makes them virtues (as even in Aristotelian approaches, virtues do not stand alone). Furthermore, Miller's "handling conflicts" function of phronesis is needed for full Compassionate AI to resolve conflicts between potential actions that might address different types or aspects of suffering, but that integrative resolution process is not a function typically ascribed to compassion itself.
Finally, the Jubilee Centre Model takes an empirical approach to a standard neo-Aristotelian model of phronesis with four components: (i) a constitutive function of moral sensitivity that selects, identifies, and applies the appropriate virtue(s); (ii) an integrative function that coordinates and adjudicates the cognitive and affective aspects of situations to choose the best action among conflicting demands; (iii) an overall blueprint of moral identity that guides a person's actions toward a flourishing life (e.g., eudaimonia, a life of moral and psychological excellence in Aristotelian ethics) with resulting motivational force; and (iv) an emotional regulative function that infuses or guides one's emotional response with reason in a harmonizing way [62, 63]. These four aspects of moral deliberation in practical wisdom (moral sensitivity, judgment, motivation, and integrated affective-cognitive regulation) structure psychological theories of phronesis [62, 64, 65] and go beyond the goal-directed problem solving and logical inference that most current AI systems target as reason [66]. We examine each in turn, as a foundation for AI moral deliberation in practical wisdom.
Moral sensitivity involves perceiving the ethical dimension of a situation with clarity and discernment. This perceptual skill enables the phronetic individual to recognize morally salient features and attune to moral cues and context, often requiring perspective-taking and empathic understanding of how one's actions affect others. Moral sensitivity, like the other three components, could be implemented in AI with basic functionality currently available, but additional effort is needed to develop artificial practical wisdom. The detection and identification of morally salient features is a recognition task for which supervised machine learning approaches are generally well suited, but doing so in contextually complex scenarios requires additional development. For the present article, we restrict moral sensitivity to situations that involve suffering; moral sensitivity thus requires identifying the various types of suffering occurring across a range of contexts, with possibilities and limitations discussed later in the article.
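To make this recognition framing concrete, below is a minimal sketch of suffering detection as a supervised multi-label classification task. The suffering taxonomy, training examples, and model choice (TF-IDF features with one-vs-rest logistic regression in scikit-learn) are purely illustrative assumptions; a deployed sensitivity module would require a validated taxonomy and far richer, context-aware encoders.

```python
# Minimal sketch: moral sensitivity (restricted to suffering) treated as a
# supervised multi-label recognition task. Labels and training texts are
# illustrative placeholders, not a validated taxonomy of suffering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

SUFFERING_TYPES = ["physical_pain", "grief", "fear", "loss_of_identity"]

train_texts = [
    "The patient reports severe pain after surgery.",
    "She has been mourning her husband since the accident.",
    "He is terrified the diagnosis means losing his job.",
    "Since retiring, he says he no longer knows who he is.",
]
train_labels = [
    {"physical_pain"},
    {"grief"},
    {"fear", "loss_of_identity"},
    {"loss_of_identity"},
]

binarizer = MultiLabelBinarizer(classes=SUFFERING_TYPES)
y = binarizer.fit_transform(train_labels)

# One-vs-rest logistic regression over TF-IDF features: a deliberately
# simple stand-in for the context-aware encoders a real system would need.
model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(train_texts, y)

flags = model.predict(["I can't sleep because the pain never stops."])
print(binarizer.inverse_transform(flags))  # e.g., [('physical_pain',)]
```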
Moral judgment entails weighing competing values and principles, balancing and adjudicating conflicts among virtues in context, and determining appropriate means to worthy ends in complex situations [67]. Unlike simple rule application or algorithmic decision-making, moral judgment remains attentive to particular circumstances while maintaining fidelity to more general ethical commitments. The phronetic individual knows when to adjust general principles to fit particular circumstances without compromising ethical integrity. Moral judgment is key to the alignment problem, needed for adjudicating among conflicting objectives in inner alignment and aligning with the appropriate worthy ends in outer alignment. Existing approaches for resolving disparate results among AI mixtures of experts or among AI agents may serve as an initial foundation for structuring the adjudication of conflicts, though modeling the overall moral preferences requires a theory of the Good (discussed in the next subsection). From the perspective of AI alignment, the challenge is that moral judgment is both a tool for achieving alignment and something that itself needs to be aligned. This creates a recursive problem requiring some baseline alignment to develop trustworthy moral judgment capabilities, which can then be further developed to maintain and improve alignment at scale. For that baseline alignment in the present article, we describe adjudicating among possible objectives in terms of suffering and flourishing.
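As one hedged illustration of such adjudication, the sketch below aggregates confidence-weighted scores from multiple evaluators (standing in for a mixture of experts or an agent ensemble) and breaks near-ties by expected reduction in suffering, the baseline just described. The evaluator names, scores, and tie threshold are hypothetical.

```python
# Minimal sketch: adjudicating among candidate actions when multiple expert
# evaluators disagree. Evaluators, scores, and the suffering-based
# tie-breaking rule are illustrative assumptions, not a settled theory
# of moral judgment.
from dataclasses import dataclass

@dataclass
class Assessment:
    evaluator: str     # which expert produced this score
    score: float       # estimated moral goodness of the action, in [0, 1]
    confidence: float  # the evaluator's self-reported confidence, in [0, 1]

def adjudicate(candidates, suffering_delta, tie_margin=0.05):
    """Pick the action with the best confidence-weighted score, breaking
    near-ties by expected change in suffering (more negative is better)."""
    def weighted(name):
        assessments = candidates[name]
        total = sum(a.confidence for a in assessments)
        return sum(a.score * a.confidence for a in assessments) / total

    ranked = sorted(candidates, key=weighted, reverse=True)
    best = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else best
    # Baseline alignment: when experts cannot separate the options,
    # prefer the action expected to reduce suffering the most.
    if abs(weighted(best) - weighted(runner_up)) < tie_margin:
        return min((best, runner_up), key=lambda a: suffering_delta[a])
    return best

choice = adjudicate(
    {"disclose_now": [Assessment("honesty_expert", 0.80, 0.9),
                      Assessment("care_expert", 0.55, 0.7)],
     "wait_for_family": [Assessment("honesty_expert", 0.50, 0.6),
                         Assessment("care_expert", 0.85, 0.9)]},
    suffering_delta={"disclose_now": +0.2, "wait_for_family": -0.3},
)
print(choice)  # wait_for_family: scores are close, so suffering decides
```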
Moral motivation consists of prioritizing moral values above non-moral considerations and sustaining commitment to moral action despite challenges. Motivation both strengthens and depends upon moral identity, integrating moral concerns deeply in the individual’s sense of self and distinguishing practical wisdom from mere knowledge of ethical principles [65, 68,69,70,71]. Existing AI systems often use goals, utility functions, or reward models as the directive component closest to motivation, but those do not suffice for the dynamic and developmental aspects of moral motivations that are learned over time as people encounter new situations and perspectives. Phronetic individuals cultivate intrinsic motivation toward ethical action rooted in a personally meaningful conception of the good life. Human moral motivation stems from our embeddedness in relationships and communities and includes group belonging, care for specific others, social approval, and maintaining one’s moral identity within social contexts. This is more particular and contextual than abstract utility maximization. For the present article, we restrict moral identity to being compassionate (which also has a motivational element). Restricting motivations to alleviating and addressing suffering can initiate development of meta-learning systems that can revise their own objective functions based on experience and explicit models of human morality. At some level, AI systems may have behaviors that appear motivated in good directions without sufficiently genuine moral motivation. This problem occurs in purely human societies, too. The application to healthcare later in the article provides a context in which to explore these risks and benefits further.
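One hedged sketch of that restriction appears below: a motivation layer that gives the moral objective (suffering reduction) lexical priority over non-moral considerations and can crudely revise its non-moral weights from feedback. The structure, weights, and update rule are our illustrative assumptions, a simplified stand-in for the meta-learning envisioned above.

```python
# Minimal sketch: a motivation layer with lexical priority for the moral
# objective (alleviating suffering) over non-moral objectives, plus crude
# self-revision of non-moral weights. All numbers are illustrative.
class MotivationLayer:
    MORAL_FLOOR = 0.0  # actions expected to increase suffering are rejected

    def __init__(self):
        self.nonmoral_weights = {"efficiency": 0.5, "cost": 0.5}

    def value(self, action):
        # Lexical priority: moral considerations screen actions first;
        # non-moral considerations only rank the morally acceptable ones.
        if action["suffering_reduction"] < self.MORAL_FLOOR:
            return float("-inf")
        nonmoral = sum(self.nonmoral_weights[k] * action[k]
                       for k in self.nonmoral_weights)
        return action["suffering_reduction"] + 0.1 * nonmoral

    def revise(self, key, feedback, lr=0.1):
        # Crude self-revision from human feedback; the moral floor itself
        # is deliberately not learnable in this sketch.
        self.nonmoral_weights[key] = max(
            0.0, self.nonmoral_weights[key] + lr * feedback)

layer = MotivationLayer()
actions = [
    {"name": "fast_but_harmful", "suffering_reduction": -0.4,
     "efficiency": 0.9, "cost": 0.8},
    {"name": "slower_but_caring", "suffering_reduction": 0.6,
     "efficiency": 0.4, "cost": 0.5},
]
print(max(actions, key=layer.value)["name"])  # slower_but_caring
layer.revise("efficiency", feedback=-0.2)     # de-emphasize efficiency
```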
Integrated affective-cognitive regulation is essential for channeling persistent motivations effectively, particularly when facing obstacles. In what Aristotle called the "doctrine of the mean," appropriate emotional responses are calibrated to the ethical demands of the situation: neither suppressing emotions in favor of cold calculation nor allowing excess emotions to override reasoned judgment, but infusing emotion with reason. Thus, the phronetic individual feels appropriate emotions in proportion to ethical contexts: toward appropriate objects, for appropriate reasons, and to an appropriate degree. For the present article, we consider regulation more generally than emotional regulation alone. Recent neuroscientific findings describe human emotions as the result of an evaluative appraisal process directly tied to our interpretation of events, such as whether an object or situation is potentially beneficial or harmful, relevant or irrelevant to a goal, or expected or unexpected [72, 73]. Thus, cognitive-affective regulation is ultimately about evaluation. These evaluative systems guide moral judgment [67] and the internal conflicts and tensions that occur in multi-objective systems. Functional regulation and integration for AI systems likely requires hierarchical architectures that separate the generation of possible responses for the given, possibly ambiguous, context; the adjudication of possible responses; and the regulatory mechanisms that guide those judgments [74,75,76]. Regulation also implies some intrinsic uncertainty in the evaluations of possible responses, which itself may require quantification. Although significant work would be required, the attention mechanisms and transformers of current LLMs do represent weighted attentional perspectives and maintain the long-term dependencies required for temporal emotional regulation, and multi-agent architectures can represent different stakeholder viewpoints [77].
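The sketch below illustrates one such hierarchical loop under our assumptions: candidate generation, evaluation with a crude uncertainty estimate (the spread across appraisals), and a regulatory stage that defers when evaluations are too contested to act on. All names and thresholds are hypothetical.

```python
# Minimal sketch: a three-stage hierarchical loop separating candidate
# generation, evaluation under uncertainty, and regulation. In the spirit
# of the doctrine of the mean, the regulator declines to act on options
# whose appraisals are highly uncertain. Thresholds are illustrative.
import statistics

def generate_candidates(context):
    # Placeholder for an upstream generator (e.g., an LLM proposing actions).
    return context["options"]

def evaluate(option):
    # Placeholder appraisal: the spread across evaluators' scores serves
    # as a crude quantification of evaluative uncertainty.
    scores = option["appraisals"]
    return statistics.mean(scores), statistics.stdev(scores)

def regulate(candidates, max_uncertainty=0.3):
    kept = []
    for option in candidates:
        mean_score, uncertainty = evaluate(option)
        if uncertainty > max_uncertainty:
            continue  # defer rather than act on contested appraisals
        kept.append((mean_score, option["name"]))
    return max(kept)[1] if kept else "escalate_to_human"

context = {"options": [
    {"name": "intervene_immediately", "appraisals": [0.9, 0.2, 0.8]},
    {"name": "offer_support_and_monitor", "appraisals": [0.7, 0.6, 0.75]},
]}
print(regulate(generate_candidates(context)))  # offer_support_and_monitor
```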
In sum, phronetic deliberation is an improvisational skill characterized by nuanced sensitivity to the value-laden features of situations that persistently goes beyond mere pattern recognition and procedural reasoning, dynamically identifying morally appropriate responses in contextually complex scenarios. It requires the integration of four key capacities: moral sensitivity to perceive ethical dimensions, moral judgment to adjudicate among competing values, moral motivation to sustain ethical action, and affective-cognitive regulation to calibrate responses appropriately. These capacities must work in concert, and require guidance by a general understanding of the Good that provides coherent direction while remaining responsive to situational particulars. For AI systems, implementing such deliberation requires architectures that can maintain multiple perspectives, handle uncertainty in moral evaluation, and adapt their responses based on contextual factors.
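Schematically, and assuming component sketches like those above, the four capacities might compose into a single deliberation loop as follows; the composition, not the placeholder internals, is the point.

```python
# Minimal sketch: wiring the four capacities into one deliberation loop.
# Each stage is a placeholder callable standing in for a fuller component.
def deliberate(situation, sense, judge, motivate, regulate):
    salient = sense(situation)    # moral sensitivity: what suffering is here?
    options = judge(salient)      # moral judgment: candidate responses, ranked
    screened = motivate(options)  # moral motivation: moral ends screen first
    return regulate(screened)     # regulation: calibrated final response

action = deliberate(
    {"note": "Patient in pain; family conflicted about disclosure."},
    sense=lambda s: ["physical_pain", "fear"],
    judge=lambda salient: ["offer_support_and_monitor", "disclose_now"],
    motivate=lambda options: options,  # identity stand-in for this sketch
    regulate=lambda options: options[0],
)
print(action)  # offer_support_and_monitor
```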
3.2 General theory of the Good
A general theory of the Good suitable for AI practical wisdom must be general enough to orient practical wisdom, similar enough to human conceptions to engender trust, broad enough to encompass the range of human conditions, and precise enough to enable implementation in a computational context. Such a theory can draw from historical accounts of the Good as well as contemporary ethical resources, including ethical principles [7, 78], social processes [8, 79,80,81], capabilities [82]