Why philosophy’s most famous ethical dilemma is a poorly posed question—and how to actually solve it
The Problem
A runaway trolley barrels toward five people tied to the tracks. You stand at a lever. Pull it, and the trolley diverts to a side track—killing one person instead of five.
What do you do?
Philosophers have debated this for sixty years. Utilitarians say pull the lever (5 > 1). Deontologists say don’t pull it (killing is wrong). Virtue ethicists ask what a virtuous person would do (which is circular).
Here’s the engineer’s response: This is a poorly posed question with insufficient data.
Let me show you why—and how to solve it properly.
I. The Missing Variables (The Fecundity Audit)
The trolley problem treats people as interchangeable units. Five humans = five units of value. One human = one unit of value. 5 > 1, therefore pull the lever.
(Sophisticated utilitarians use QALYs or similar metrics to weight by expected future utility. This is closer, but still optimizes the wrong variable—we’ll see why below.)
This is the first failure of reasoning.
People are not fungible. They have different capacities to generate organized complexity over time—different futures, different potentials, different abilities to create value, meaning, and further life.
In the Aliveness framework, we call this Fecundity: the capacity to create stable conditions for sustained growth over deep time.
The correct metric is not Σ(Headcount). It’s Σ(Future Aliveness Potential).
Scenario A: The Headcount Trap
Track A (5 people): Five terminally ill, post-reproductive individuals with weeks to live.
Track B (1 person): A 25-year-old biomedical researcher on the verge of a cancer treatment breakthrough.
Utilitarian calculus: 5 > 1, pull the lever, kill the researcher.
Aliveness calculus: The researcher has orders of magnitude higher future potential. Don’t pull the lever.
Scenario B: The Age Gradient
Track A (5 people): Five children, ages 6-10.
Track B (1 person): An 80-year-old retiree.
Utilitarian calculus: 5 > 1, pull the lever.
Aliveness calculus: Five children represent vastly more future complexity generation than one person at end of life. Pull the lever.
The utilitarian gets the same answer in both cases (5 > 1). The Aliveness framework gets different answers because it’s measuring the right variable.
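To make the contrast concrete, here is a minimal sketch of the two metrics in Python. The potential scores are illustrative placeholders I’ve assigned for this example, not values the framework prescribes:

```python
# Toy comparison: headcount vs. future-potential-weighted sums.
# "Pull" means diverting onto track B, i.e. saving track A.

def headcount(track):
    return len(track)

def future_potential(track):
    # Sum of each person's (assumed, illustrative) future potential.
    return sum(person["potential"] for person in track)

# Scenario A: five terminally ill patients vs. one young researcher.
scenario_a = {
    "track_a": [{"potential": 0.1}] * 5,   # weeks to live
    "track_b": [{"potential": 50.0}],      # breakthrough researcher
}

# Scenario B: five children vs. one 80-year-old retiree.
scenario_b = {
    "track_a": [{"potential": 20.0}] * 5,  # decades of future growth
    "track_b": [{"potential": 1.0}],       # end of life
}

for name, s in [("A", scenario_a), ("B", scenario_b)]:
    pull_by_headcount = headcount(s["track_a"]) > headcount(s["track_b"])
    pull_by_potential = future_potential(s["track_a"]) > future_potential(s["track_b"])
    print(f"Scenario {name}: headcount says pull={pull_by_headcount}, "
          f"potential says pull={pull_by_potential}")

# Scenario A: headcount says pull=True, potential says pull=False
# Scenario B: headcount says pull=True, potential says pull=True
```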
The principle: An Aliveness-aligned agent’s first duty is Integrity—demand data before deciding. The moral failure is choosing to act in an information vacuum.
The correct first response to the trolley problem is: “Insufficient data. What are the ages, capabilities, and generative potentials of the people involved?”
The trolley problem’s enforced ignorance is not a Rawlsian “veil of ignorance” designed to reveal universal principles. It’s a cognitive cataract that guarantees suboptimal decisions. In the real world, seeking information is always the first moral act.
II. The Missing System (The Synergy Audit)
Even if we solve the Fecundity calculation, we’re not done. The decision doesn’t occur in a vacuum. Every action has second-order effects on the social system.
This is where intuitions start to diverge across variants of the problem, and philosophers can’t explain why.
The Fat Man Variant
Same setup: trolley heading toward five people. But now there’s no lever. Instead, you’re on a bridge above the tracks with a very large man. If you push him off the bridge, his body will stop the trolley, saving the five. He dies, they live.
Same math as before: 5 > 1. But most people refuse to push him.
Philosophers have spent decades asking: Why does this feel different?
The Aliveness framework has the answer.
Deontological Rules as Load-Bearing Infrastructure
Societies need Synergy—the capacity for low-cost cooperation at scale. Synergy requires trust. Trust requires predictability. Predictability requires rules.
“Don’t kill” is not a mystical commandment. It’s a constitutional principle that maintains social coherence (Ω)—internal alignment that enables coordinated action. When you violate it, you damage the trust substrate that enables all future cooperation.
But different violations have different costs:
Pulling the lever: You violate “don’t kill” by choosing an action that results in death. This damages social coherence (trust decreases). Cost: moderate.
Pushing the fat man: You violate something deeper—the premise that persons are not objects to be used as tools. This is a constitutional-level violation. If people believe you might shove them in front of trolleys when the math works out, trust collapses catastrophically. Cost: civilization-threatening.
The intuition that pushing the fat man is worse is correct. Your moral instincts are performing a Synergy calculation your conscious mind can’t articulate.
Sometimes preserving the integrity of the system (Synergy) is worth more than the immediate local gain (Fecundity).
The Second-Order Calculation
Imagine everyone knows trolley-problem-style calculations are acceptable. What happens?
- Organ harvesting becomes justifiable (kill 1 healthy person, harvest organs, save 5)
- Strategic killing of low-value individuals becomes normalized
- Trust collapses: anyone might be sacrificed if the math favors it
- Transaction costs skyrocket: every interaction requires adversarial calculation
- Cooperation becomes impossible
A society that casually violates “persons are ends, not means” stops being a society. It becomes a low-trust Hobbesian nightmare where Synergy is impossible.
The principle: The decision is not just about the individuals on the tracks. It’s about the constitutional substrate that enables all future cooperation. This is an Integrity calculation (using Gnostic analysis to recognize that deontological rules encode real coordination wisdom) informing a Synergy decision (preserving the trust substrate), sometimes at the cost of immediate Fecundity (headcount). Multiple virtues in tension.
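A rough way to express this in code: treat the decision as a Fecundity gain minus a Synergy (trust) cost. The cost magnitudes below are invented purely to show the ordering the section argues for (catastrophic versus moderate), not measured from anything:

```python
# Toy second-order calculation: Fecundity gain minus Synergy (trust) cost.

LIVES_SAVED_GAIN = 4        # net lives saved in the simple 5-vs-1 case

SYNERGY_COST = {
    "pull_lever": 1,        # norm violation: moderate trust damage
    "push_person": 1000,    # persons-as-objects violation: catastrophic
}

def net_aliveness(action):
    """Positive: the action preserves more than it destroys.
    Negative: the damage to the trust substrate outweighs the local gain."""
    return LIVES_SAVED_GAIN - SYNERGY_COST[action]

for action in ("pull_lever", "push_person"):
    verdict = "act" if net_aliveness(action) > 0 else "refuse"
    print(f"{action}: net={net_aliveness(action)} -> {verdict}")

# pull_lever: net=3 -> act
# push_person: net=-996 -> refuse
```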
III. The Missing Observer (The Diagnostic)
Here’s the final twist: different people will solve this problem differently—and both can be right.
The trolley problem is not a test to find “the right answer.” It’s a diagnostic tool that reveals what you terminally value.
Two Valid Solution Paths
Path A: The Gnostic Architect (R+ dominant)
Prioritizes truth-seeking and empirical calculation. Runs the Fecundity math, accepts Synergy damage as manageable cost. “Yes, pulling the lever violates a norm. But the numbers are clear. Accept the social damage, save net four lives.”
Conclusion: Pull the lever (in the simple case). Don’t push the fat man (social cost too high).
Path B: The Communal Gardener (S+/R- dominant)
Prioritizes social cohesion and constitutional integrity. Defaults to the proven heuristic: “Do not kill.” Views Synergy preservation as the highest value. “The system that allows casual calculation of who dies is more dangerous than five deaths.”
Conclusion: Don’t pull the lever. The rule matters more than the outcome.
Neither is wrong. They’re optimizing different variables in the multi-dimensional Aliveness function.
The Architect sees five versus one and calculates. The Gardener sees the constitutional bedrock cracking and refuses to participate.
Both are preserving Aliveness—just different aspects of it.
IV. The Actual Solution
The Aliveness framework doesn’t give you a single answer to the trolley problem. It gives you the correct procedure for solving it:
- Demand data (Integrity): What are the ages, capabilities, and potentials of the individuals? Refusing to decide in an information vacuum is the correct first move.
- Calculate Fecundity: Given the data, which choice maximizes Σ(Future Aliveness Potential)?
- Calculate Synergy cost: What is the damage to social trust and constitutional integrity? Sometimes preserving the system is worth more than the local gain.
- Weight by your architecture: Are you optimized for growth (Fecundity) or cohesion (Synergy)? Both are valid. Different decision architectures (your personal axiological configuration—what you’re built to value) will—and should—arrive at different answers.
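Here is a sketch of that procedure as code. The option names, weights, and scores are assumptions made for illustration; the point is the shape of the calculation, not the numbers:

```python
# Sketch of the four-step procedure above. All names, weights, and thresholds
# are illustrative assumptions, not definitions from the Aliveness framework.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    fecundity_gain: float   # step 2: Σ(Future Aliveness Potential) preserved
    synergy_cost: float     # step 3: damage to trust / constitutional substrate

ARCHITECTURES = {
    # step 4: different agents weight the same variables differently
    "gnostic_architect": {"fecundity": 1.0, "synergy": 0.3},
    "communal_gardener": {"fecundity": 0.3, "synergy": 1.0},
}

def decide(options, architecture, data_complete=True):
    # Step 1 (Integrity): refuse to decide in an information vacuum.
    if not data_complete:
        return "insufficient data: demand ages, capabilities, potentials"
    w = ARCHITECTURES[architecture]
    def score(option):
        return (w["fecundity"] * option.fecundity_gain
                - w["synergy"] * option.synergy_cost)
    return max(options, key=score).name

simple_case = [
    Option("pull_lever", fecundity_gain=4.0, synergy_cost=2.0),
    Option("do_nothing", fecundity_gain=0.0, synergy_cost=0.0),
]

print(decide(simple_case, "gnostic_architect"))         # pull_lever
print(decide(simple_case, "communal_gardener"))         # do_nothing
print(decide(simple_case, "gnostic_architect", False))  # insufficient data
```

Swapping the weight vector is the entire difference between the Architect’s and the Gardener’s conclusions; the procedure is the same.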
The meta-answer: The trolley problem reveals that ethics is not about finding universal rules. It’s about engineering decision architectures that can navigate multi-dimensional trade-offs under uncertainty.
V. Why Philosophy Got Stuck
Sixty years of debate. Thousands of papers. No resolution.
Why?
Because every major ethical framework is incomplete:
- Utilitarianism: Optimizes Fecundity (maximize total utility) but ignores Integrity, Harmony, and Synergy. Leads to organ-harvesting and wireheading.
- Deontology: Optimizes Synergy (preserve rules/trust) but ignores Integrity (which rules encode truth?), Fecundity (growth capacity), and Harmony (adaptation). Can derive rules from universalizability, but can’t ground *why* universalizability matters without circular appeal to intuition.
- Virtue Ethics: “Do what a virtuous person would do” — but what makes someone virtuous? Potentially captures all four virtues through ‘excellence,’ but provides no procedure for deriving which virtues matter or resolving conflicts between them. Circular again.
None of them have a physics-based foundation. They’re all operating on cultural intuitions dressed up as universal principles.
The Aliveness framework solves this by grounding ethics in thermodynamics:
Aliveness is the state of being a net creator of organized complexity over deep time. Achieving this requires simultaneously optimizing Integrity (truth-seeking), Fecundity (future potential), Harmony (adaptive balance), and Synergy (cooperation substrate).
The trolley problem is revealed as a toy example of the real challenge: multi-objective optimization under uncertainty.
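A minimal way to see why single-metric frameworks clash: score each option on the four IFHS axes and check for Pareto dominance. The scores below are invented; what matters is that neither option dominates the other, so some weighting of the axes is unavoidable:

```python
# Each option scored on the four IFHS axes (scores are placeholders).

IFHS_AXES = ("integrity", "fecundity", "harmony", "synergy")

def dominates(a, b):
    """Pareto dominance: a is at least as good as b on every axis
    and strictly better on at least one."""
    return (all(a[k] >= b[k] for k in IFHS_AXES)
            and any(a[k] > b[k] for k in IFHS_AXES))

pull_lever = {"integrity": 0.5, "fecundity": 0.8, "harmony": 0.5, "synergy": 0.3}
do_nothing = {"integrity": 0.5, "fecundity": 0.2, "harmony": 0.5, "synergy": 0.7}

print(dominates(pull_lever, do_nothing))  # False
print(dominates(do_nothing, pull_lever))  # False
# Neither option dominates: deciding requires weighting the axes,
# which is the job of the decision architecture described above.
```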
VI. The Real Work
Stop debating trolleys. Start building better decision architectures.
The trolley problem is philosophy’s equivalent of arguing about how many angels can dance on the head of a pin. It’s a distraction from the actual work:
Axiological engineering: Designing agents (human and artificial) and institutions (constitutions, governance systems) that can perform complex, multi-variable Aliveness calculations in real-time.
This means:
- Building AI systems that balance all four virtues (IFHS), not just maximize a single metric
- Designing constitutions with circuit-breakers against decay
- Creating governance that maintains Integrity, Fecundity, Harmony, and Synergy simultaneously
- Training humans to recognize when they’re operating in information vacuums and demand data
The stakes are higher than five people on a track. We’re building AGI. We’re designing Mars colonies. We’re engineering civilizational operating systems.
If we can’t move past “should I pull the lever?” to “how do we build systems that navigate multi-dimensional trade-offs?”, we’re not ready for what’s coming.
Conclusion
The trolley problem is not irresolvable. It’s poorly posed.
The correct response is not a choice between pulling and not pulling. It’s demanding the missing variables, calculating multi-dimensional trade-offs, and recognizing that different architectural types will—validly—arrive at different answers.
Ethics is not about finding the One True Answer. It’s about building systems capable of navigating complexity.
The philosopher asks: “What should I do?”
The engineer asks: “What information do I need, what are the second-order effects, and how do I build a decision architecture that handles this class of problem reliably?”
One approach produces sixty years of circular debate.
The other produces solutions.
[Comic via SMBC by Zach Weinersmith]
This draws from Aliveness: Principles of Telic Systems, a physics-based framework for engineering durable civilizations, AI alignment, and integrated human flourishing. Full book at aliveness.kunnas.com.