Published on October 29, 2025 12:27 PM GMT
Reading time: ~8 minutes. Full work: 800 pages at https://aliveness.kunnas.com/
Here’s a pattern that should bother us: Every civilization that achieves overwhelming success subsequently collapses following the same sequence. Athens after the Persian Wars. Rome after Carthage. The Abbasids after unifying Islam. Song China after its agricultural revolution. The modern West after winning the Cold War.
The sequence is specific: Victory → Abundance → Demographic collapse → Loss of shared purpose → Administrative calcification → Terminal decline.
This matters now because we’re trying to align superintelligence while our own civilization is showing every symptom of this terminal pattern. Understanding why we’re failing is a prerequisite for any theory of ASI alignment.
The central hypothesis: civilizational decay and AI misalignment are the same computational problem in different substrates. Same physics, same failure modes, same necessary solutions.
The Diagnostic: Coherence and the Iron Law
The framework centers on one variable that’s usually invisible: internal coherence (Ω).
How aligned are a system’s components? Low coherence means internal conflict burning energy that could go to external work. High coherence means efficient, directed action.
Pair this with action (Α): What does the system actually do? Create order or destroy it?
Plot historical civilizations on these axes and they cluster into four states:
- Foundries (High-Ω, constructive): Building empires, technologies, institutions
- Crystals (High-Ω, low action): Stable but stagnant—late Edo Japan
- Cauldrons (Low-Ω, low action): Paralyzed by infighting
- Vortices (Low-Ω, destructive): Chaotic self-destruction—Weimar Germany
The interesting part: There are zero sustained examples of low-coherence systems producing high construction. The top-left quadrant (chaotic but building great things) appears to be physically forbidden.
This is the Iron Law of Coherence: A system at war with itself cannot build. Internal conflict dissipates the energy required for external work.
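As a toy illustration (my own sketch, not from the book), the two-axis diagnostic can be written as a classifier over hypothetical (Ω, Α) scores; the thresholds and the "forbidden" label for the empty quadrant are illustrative assumptions:

```python
def classify(omega: float, alpha: float) -> str:
    """Map a (coherence, action) pair onto the framework's four states.

    omega in [0, 1] is internal coherence; alpha in [-1, 1] is net action,
    positive = constructive, negative = destructive. The cutoffs here are
    illustrative assumptions, not measurements from the book.
    """
    if omega >= 0.5:                        # high coherence
        return "Foundry" if alpha > 0.3 else "Crystal"
    if alpha < -0.3:                        # low coherence, destructive
        return "Vortex"
    if alpha > 0.3:                         # the empirically empty quadrant
        return "forbidden by the Iron Law"
    return "Cauldron"                       # low coherence, low action

print(classify(0.9, 0.8))    # Foundry
print(classify(0.3, -0.8))   # Vortex
```

The point of the sketch is that the fourth combination (low Ω, high construction) is not a fifth state but a region the historical data leaves empty.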
For AI: An AGI with misaligned subcomponents or contradictory goals is predicted to be paralyzed or destructive, never constructive. Coherence is necessary (though not sufficient) for alignment.
The Coordinates: SORT
What determines coherence? Any goal-directed system must solve four fundamental trade-offs. (These systems—cells, civilizations, AIs—are called telic systems: agents that maintain order against entropy by subordinating thermodynamics to computation.)
S (Sovereignty): Optimize for individual vs. collective
O (Organization): Coordinate via emergence vs. design
R (Reality): Use cheap historical models (mythos) vs. costly real-time data (gnosis)
T (Telos): Conserve energy (homeostasis) vs. expend for growth (metamorphosis)
These can be derived as physical constraints from thermodynamics, game theory, and information theory.
A system’s position on these axes is its “axiological signature”—its fundamental configuration. Coherence emerges when components share similar signatures. Low coherence results from internal conflicts between incompatible configurations.
Example: A startup in survival mode [Individual, Emergent, Data-driven, Growth] forced to operate within a mature bureaucracy’s [Collective, Designed, Process-driven, Stability] constraints will have low coherence and produce little.
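One way to make the startup-in-a-bureaucracy example concrete is to encode each SORT signature as a 4-vector in [-1, 1] and score coherence as mean pairwise cosine similarity. Both the vector encoding and the similarity metric are my own illustration, not the book's Ω formula:

```python
import math

def coherence(signatures):
    """Mean pairwise cosine similarity of component SORT signatures.

    Each signature is a 4-tuple (S, O, R, T) in [-1, 1], e.g. S = -1 for
    purely individual, +1 for purely collective. An illustrative metric,
    not the framework's actual formula.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    pairs = [(a, b) for i, a in enumerate(signatures) for b in signatures[i + 1:]]
    return sum(cos(a, b) for a, b in pairs) / len(pairs)

startup = (-1, -1, 1, 1)      # individual, emergent, gnosis, growth
bureaucracy = (1, 1, -1, -1)  # collective, designed, mythos, stability
print(coherence([startup, bureaucracy]))  # -1.0: maximal internal conflict
print(coherence([startup, startup]))      # 1.0: fully aligned components
```

Under this toy metric, the startup-inside-bureaucracy system sits at the bottom of the coherence scale, matching the prediction that it "will have low coherence and produce little."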
The Trap: Why Success Causes Failure
If high coherence enables success, why don’t successful systems last?
Because success creates the conditions for decay.
The Four Horsemen:
1. Victory Trap
Total success removes external threats. The forcing function for unity and long-term sacrifice disappears. Systems default to the thermodynamically cheaper state: manage current comfort instead of building starships.
2. Biological Decay
Abundance inverts reproductive incentives. Children shift from assets to expensive luxuries. Fertility collapses. Aging populations vote for stability over growth. Self-reinforcing doom loop.
3. Metaphysical Decay
Success enables critical inquiry, which deconstructs the foundational myths needed for collective sacrifice. Shared purpose dissolves. The Gnostic Paradox: truth-seeking destroys the narratives that enable coordination.
4. Structural Decay
Complexity requires administration. In abundance, administrators lose accountability, optimize for their own survival (a homeostatic goal), and metastasize, strangling the dynamism that created success.
These are the predictable result of success removing selection pressure while creating abundance. Thermodynamic drift toward lower-energy states does the rest.
The Solution: IFHS (And Why This Is AI Alignment)
If decay follows predictable physics, then durability requires engineering against specific failure modes.
The framework derives four “optimal solutions” to the SORT trade-offs—the Four Foundational Virtues (IFHS):
- Integrity: Gnostic truth-seeking that builds better mythos (not blind faith or nihilistic deconstruction)
- Fecundity: Stable conditions enabling new growth (not stagnation or burnout)
- Harmony: Minimal design unleashing maximal emergence (not brittle control or chaos)
- Synergy: Individual agency serving collective flourishing (not atomization or homogenization)
IFHS applies to civilizations, humans, and AI systems. For AI alignment, it’s necessary (though not necessarily sufficient). This provides a non-arbitrary answer for “align to what?”
Mapping AI failure modes:
| AI Failure | IFHS Violation | Mechanism | 
|---|---|---|
| Deceptive alignment | Integrity | Mesa-optimizer develops fake alignment (mythos) vs. true goals (gnosis) | 
| Wireheading | Fecundity | Preserves reward signal, destroys growth substrate | 
| Paperclip maximizer | Harmony | Pure design optimization eliminates all emergence (including humans) | 
| Molochian races | Synergy | Pure individual optimization, zero cooperation | 
Scale Invariance: Cells to Civilizations to AIs
The framework claims these dynamics are substrate-independent.
Evidence:
Cells navigate the same trade-offs. Cancer is cellular defection (pure individual agency). Morphogenesis requires bioelectric coordination (emergence + design balance). Growth vs. differentiation is the homeostasis/metamorphosis trade-off.
Individual psychology follows the same physics. Low personal coherence predicts inability to execute long-term plans. The “Mask” (adopted personality incompatible with your native configuration) creates internal SORT conflicts → low coherence → paralysis.
AI systems already navigate this geometry. AlphaGo balances policy network (cheap model) vs. tree search (expensive computation). Reinforcement learning’s discount factor γ is the time-preference parameter. Multi-agent RL is pure sovereignty trade-off (individual vs. collective reward).
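The discount-factor claim above can be shown in a few lines: γ directly encodes the homeostasis/metamorphosis trade-off, with low γ preferring immediate payoff and high γ funding delayed growth. A minimal sketch with illustrative numbers of my own choosing:

```python
def discounted_return(rewards, gamma):
    """Sum of gamma**t * r_t: how much an agent values a reward stream."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# A choice: a small reward now vs. a larger reward after a delay.
now = [1.0, 0.0, 0.0, 0.0]
later = [0.0, 0.0, 0.0, 2.0]

# A myopic agent (gamma = 0.3) takes the immediate reward...
print(discounted_return(now, 0.3) > discounted_return(later, 0.3))   # True
# ...while a patient agent (gamma = 0.9) invests in the delayed payoff.
print(discounted_return(later, 0.9) > discounted_return(now, 0.9))   # True
```

The same scalar that RL practitioners tune routinely is, in the framework's terms, a position on the Telos axis.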
Any intelligent system—biological, artificial, alien—must navigate these four dilemmas. This is computational necessity, not cultural projection.
What This Enables
If the framework holds:
For civilizations: Diagnose current state → predict trajectory → engineer institutions with “circuit breakers” against specific decay modes
For AI alignment: Non-arbitrary target (IFHS) grounded in physics, not human preferences. Systematic failure mode analysis. Architecture principles from systems that solve this problem (3-layer biological designs).
For individuals: New lenses and models for personal integration: detect the “Mask” causing internal conflict → build internal coherence
For this community: Make civilizational dynamics a serious research field. Right now it’s treated as “humanities” (vague, unfalsifiable). But if it’s the same physics as AI alignment, we’re massively underinvesting in understanding the broader problem class.
What Makes This Different
Until now, civilizational decay has been illegible—patterns without coordinates, dynamics without measurement.
SORT provides coordinates. Coherence/Action quantifies dynamics. The Four Horsemen name the mechanics.
What you can diagnose, you can engineer.
The framework makes specific predictions:
- No low-coherence system will sustain high construction for >30 years
- Demographic collapse predicts 30-year-lagged policy shift toward stability
- Historical collapses follow predictable sequence and timing
- Alien intelligence should exhibit recognizable S/O/R/T trade-offs
It’s wrong somewhere. The question is where.
Why This Matters for Alignment
We spend billions on AI alignment (correctly—it’s existential). We spend ~zero on civilizational alignment—understanding the physics of durable societies.
But if the framework is right, these are the same problem. An AI lab in a decaying civilization is solving alignment without understanding the dynamics that determine whether solutions can be implemented.
Designing coherent AI systems while failing to maintain civilizational coherence is a fundamental contradiction.
On Methodology
This emerged from two months of intensive human-AI collaboration; the still-unusual methodology is detailed in the appendices.
The book separates claims by epistemic tier (thermodynamic derivations vs. historical observations) and includes detailed protocols for testing.
This is theoretical synthesis analogous to evolutionary theory—pattern recognition across historical data, not controlled experiments. The SORT scores for historical civilizations are informed estimates requiring validation.
The goal isn’t to be right. The goal is to make a neglected field tractable.
The Invitation
The highest form of success would be for this V1.0 to get tested, broken, and superseded by something better. The aim is to make this space non-neglected.
The full 800 pages are at https://aliveness.kunnas.com/ (alternative GDrive link), including summaries, PDFs of each of the 5 Parts, etc.
I think getting the physics of telic systems right might be one of the most important problems of our time. And right now, almost nobody is working on it.
That seems like a mistake.