Trust Engine
Make Credibility Expensive—So Trust Becomes Visible
A reputation-staking protocol that restores the cost of being wrong on the internet.
The Problem
Society is fracturing—and we’re losing the ability to talk to each other.
Social platforms are engineered for engagement, not accuracy. Engagement is maximized by content that triggers fear and anger. This isn’t a bug—it’s the business model. The algorithm doesn’t ask "is this true?" It asks "will this keep them scrolling?"
The result: platforms actively reward content that harms society.
The Polarization Trap
Research shows affective polarization—distrust and dislike of "the other side"—is rising sharply. We’ve stopped debating policy. We can’t even agree on facts. Groups that once shared a common reality now view each other as suspicious adversaries.
The Anonymous Speech Problem
For 200,000 years, speech had consequences. Spread lies in a village, and your reputation suffered. The community remembered. Social media broke that contract. Now you can post misinformation with zero cost—and the algorithm will reward you for it.
The Downstream Damage
AI systems are being trained on this polluted information environment. Models inherit our confusion, our polarization, our inability to distinguish signal from noise. We’re encoding dysfunction into the infrastructure of the future.
The Insight
This isn’t a new problem, and the solution isn’t new either.
"If you bring value to the group, you will be rewarded. If you destroy value, your star will fall."
This is how human communities have functioned for all of recorded history. Reputation systems emerged because they work—they enable cooperation at scale by making trustworthiness visible and defection costly.
Trust Engine is a mathematical formalization of this ancient wisdom.
Robin Dunbar (Oxford) argues that language itself evolved primarily for sharing reputational information—"vocal grooming" that enabled cooperation in groups too large for physical bonding. Two-thirds of human conversation is about who did what to whom. This isn’t idle gossip. It’s the infrastructure of cooperation.
Cross-cultural research confirms this is universal. From hunter-gatherers in the Central African Republic to knowledge workers in the US—reputation determines resource allocation. Gossip enforces norms. Ostracism punishes defectors. These mechanisms emerged independently across every human society because they solve a fundamental coordination problem.
Social media broke these mechanisms by removing the cost of bad-faith participation. Trust Engine restores them.
The Mechanism
Costly signaling separates those who know from those who guess.
The core insight from evolutionary game theory: signals must be costly to be honest. When being wrong costs you something, people get more careful—and more honest.
Reputation Staking
Users stake accumulated reputation when vouching for content. Correct assessments (validated over time) earn reputation rewards. Incorrect stakes cost reputation—publicly and permanently.
Weighted Signals
Costless drive-by opinions carry no weight. High-stake assessments from users with track records of accuracy carry more. The signal-to-noise ratio improves because noise becomes expensive.
Two-Dimensional Voting
Unlike binary upvote/downvote, Trust Engine separates direction (credible vs. not credible) from conviction (willing to stake on it):
| | I Stake Reputation | No Stake |
| --- | --- | --- |
| Credible | High-conviction positive signal | Low-weight opinion |
| Not Credible | High-conviction negative signal | Low-weight opinion |
This reveals not just what people think, but how confident they are—and whether they’re willing to back it up.
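As a concrete illustration, here is a minimal sketch of a two-dimensional assessment and how a stake might resolve once a claim is validated. The names (`Assessment`, `resolve_stake`) and the reward and slash rates are assumptions for the example, not the protocol's specified API.

```python
# Minimal sketch of a two-dimensional assessment and its resolution.
# Assessment, resolve_stake, and the reward/slash rates are illustrative
# assumptions, not the protocol's specified API.
from dataclasses import dataclass
from enum import Enum


class Direction(Enum):
    CREDIBLE = 1
    NOT_CREDIBLE = -1


@dataclass
class Assessment:
    user_id: str
    direction: Direction
    stake: float  # RS put at risk; 0.0 means a costless, low-weight opinion


def signal_weight(a: Assessment) -> float:
    """Costless opinions carry negligible weight; staked assessments carry more."""
    return 0.0 if a.stake == 0 else a.stake


def resolve_stake(a: Assessment, outcome: Direction,
                  reward_rate: float = 0.5, slash_rate: float = 1.0) -> float:
    """RS change once the claim is validated over time.

    Correct staked assessments earn a reward proportional to the stake;
    incorrect stakes are slashed. Unstaked opinions neither gain nor lose.
    """
    if a.stake == 0:
        return 0.0
    if a.direction == outcome:
        return a.stake * reward_rate   # reputation reward
    return -a.stake * slash_rate       # public, permanent loss


# Example: stake 10 RS that a claim is credible; it is later validated.
vote = Assessment("alice", Direction.CREDIBLE, stake=10.0)
print(resolve_stake(vote, outcome=Direction.CREDIBLE))  # +5.0
```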
Tokenomics
Two Distinct Assets
Reputation Score (RS) is your active credibility stake. It’s earned through accurate assessments, used for voting (staked when you vouch for content), and non-transferable. This is your skin in the game.
Epistemic Coin (EPIC) is your crystallized reward. RS automatically and gradually converts to EPIC over time (rate determined by algorithm). Once converted, it cannot be wagered—it’s out of the game. But it’s tradeable or holdable in your coin-purse. This is your payout.
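A minimal sketch of the two-asset split, assuming a simple account object (the class and method names are hypothetical): RS can be staked but never transferred; EPIC can be transferred but never staked.

```python
# Sketch of the two distinct assets. Names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Account:
    reputation: float = 0.0  # RS: earned, stakeable, non-transferable
    epic: float = 0.0        # EPIC: tradeable/holdable, cannot be wagered

    def stake(self, amount: float) -> float:
        """Only RS can be put at risk when vouching for content."""
        if amount > self.reputation:
            raise ValueError("cannot stake more RS than you have earned")
        self.reputation -= amount
        return amount

    def transfer_epic(self, other: "Account", amount: float) -> None:
        """Only EPIC moves between accounts; RS never does."""
        if amount > self.epic:
            raise ValueError("insufficient EPIC")
        self.epic -= amount
        other.epic += amount
```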
The Conversion Mechanic
RS doesn’t convert all at once. The algorithm gradually matures your earned RS into EPIC over a defined period. This creates important dynamics:
Continuous Earning Required
You must keep earning new RS to maintain voting power. You can’t stockpile RS indefinitely and dominate.
Use It or Convert It
Credibility is "use it or lose it"—but you don’t lose value. Inactive RS converts to EPIC, rewarding you while freeing up influence for active participants.
Long-Term Accumulation
Sustained accurate participation accumulates EPIC over time. The longer you’re right, the more you’re rewarded.
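To make the maturation mechanic concrete, here is a sketch assuming a hypothetical 90-day linear vesting curve; the actual conversion rate and shape are set by the algorithm.

```python
# Sketch of gradual RS -> EPIC maturation with a linear 90-day schedule.
# The period and the linear shape are assumptions for illustration only.
def matured_fraction(age_days: float, maturation_days: float = 90.0) -> float:
    """Fraction of an earned RS lot that has converted to EPIC."""
    return min(max(age_days / maturation_days, 0.0), 1.0)


def split_lot(rs_earned: float, age_days: float) -> tuple[float, float]:
    """Split a lot into (still-stakeable RS, crystallized EPIC)."""
    f = matured_fraction(age_days)
    return rs_earned * (1.0 - f), rs_earned * f


# Example: 100 RS earned 45 days ago -> 50 RS still votes, 50 EPIC banked.
print(split_lot(100.0, age_days=45.0))  # (50.0, 50.0)
```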
Revenue → Buybacks → Virtuous Cycle
Platform revenue (AI data licensing, platform fees, enterprise API) funds EPIC token buybacks:
Accurate assessments → Earn RS
↓
RS matures → EPIC (gradual, automatic)
↓
Platform value grows → Revenue
↓
Revenue → EPIC buybacks (treasury)
↓
EPIC appreciates
↓
Greater incentive to earn RS
↓
More quality participation → [cycle repeats]
The elegance: You can’t just buy EPIC and stake it—EPIC doesn’t vote. You must earn RS through accuracy to participate. But EPIC holders benefit from the collective accuracy of the network driving platform value.
Gamification
Modes of Engagement
Not all participation is equal. Signal quality—and reward—scales with commitment:
| Mode | Signal | Stakes & Reward |
| --- | --- | --- |
| Interact / Like | Low | Cheap. Zero risk. Minimal reward. |
| Share / Comment | Medium–High | Social cost. Effort. Moderate stakes. |
| Rate + Stake | Very high | Two-dimensional voting. RS on the line. Real skin in the game. |
| Create an Outpost | Maximum | Pull content from other platforms with commentary, or stake an original claim. Creator risks their reputation. Highest reward potential. |
The name Outpost is deliberate. These are forward positions—scouting claims from the broader information landscape and bringing them into the system for evaluation. The creator stakes their credibility on the claim’s veracity. This is where the Trust Engine begins its work.
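A rough sketch of how signal weight could scale with engagement mode; the specific multipliers below are assumptions for illustration, not the protocol's actual values.

```python
# Illustrative signal multipliers per engagement mode (values are assumptions).
ENGAGEMENT_WEIGHT = {
    "like": 0.1,              # cheap, zero risk
    "share_or_comment": 0.5,  # social cost, some effort
    "rate_and_stake": 2.0,    # RS on the line
    "outpost": 5.0,           # creator's reputation staked on the claim itself
}


def weighted_signal(mode: str, rs_staked: float = 0.0) -> float:
    """Signal contribution grows with both commitment level and RS staked."""
    return ENGAGEMENT_WEIGHT[mode] * (1.0 + rs_staked)


print(weighted_signal("like"))                          # 0.1
print(weighted_signal("rate_and_stake", rs_staked=10))  # 22.0
```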
The Medieval Progression
Content progresses through defensive tiers based on total reputation staked and the degree of controversy—a mathematical measure of how contested the position is:
Outpost → Watchtower → Stronghold → Fortress → Castle
The metaphor encodes meaning. A claim at Outpost is new, untested, easily challenged. A claim at Castle has weathered sustained engagement—many people have staked their credibility on it. You can see at a glance how entrenched an idea is.
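One way the tier assignment could work, combining total RS staked with a simple controversy measure; the thresholds and the exact formula below are assumptions for illustration, not the protocol's published math.

```python
# Sketch of tier progression from total stake and controversy.
# Thresholds and the controversy discount are illustrative assumptions.
TIERS = ["Outpost", "Watchtower", "Stronghold", "Fortress", "Castle"]


def controversy(stake_for: float, stake_against: float) -> float:
    """0 when one side dominates; 1 when stakes are evenly split."""
    total = stake_for + stake_against
    return 0.0 if total == 0 else 1.0 - abs(stake_for - stake_against) / total


def tier(stake_for: float, stake_against: float,
         thresholds=(0, 100, 1_000, 10_000, 100_000)) -> str:
    """Contested claims need more total stake to advance to the next tier."""
    total = stake_for + stake_against
    effective = total * (1.0 - 0.5 * controversy(stake_for, stake_against))
    level = sum(1 for t in thresholds if effective >= t) - 1
    return TIERS[level]


# Example: a heavily staked but contested claim sits one tier below Castle.
print(tier(60_000, 40_000))  # Fortress
```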
This creates powerful dynamics:
Visible Entrenchment
You immediately understand how difficult it would be to overturn a claim. A Castle-level belief isn’t just popular—it’s fortified by accumulated stakes from people willing to risk their reputation on it.
Adversity Drives Value
In idea warfare, opposition makes victory meaningful. Successfully challenging an entrenched position—and being proven right—earns outsized returns. Difficulty creates reward.
Flipping Castles
The highest-value contribution isn’t building consensus—it’s overturning false consensus. Proving a Castle wrong is worth as much as building it was. The system rewards those who correct deeply entrenched errors.
Consequence Scales with Stakes
Being correct about something easy is worth little. Being correct about something contested and important—where significant reputation is on the line—is worth a lot. The system rewards courage, not just accuracy.
Attack Resistance
Stress-tested against adversarial conditions
The protocol has been mathematically modeled against known attack vectors:
- ✓ Coordinated manipulation (Sybil attacks): Reputation is earned, not granted. Creating fake accounts doesn’t create fake credibility. Attacking at scale is prohibitively expensive.
- ✓ Epistemic capture: Diverse stakeholder incentives prevent groupthink lock-in. Minority positions that prove correct earn outsized returns, creating incentives to challenge consensus.
- ✓ Whale attacks: Quadratic staking mechanics limit plutocratic influence. Doubling your stake doesn’t double your signal weight (see the sketch after this list).
- ✓ Delayed consensus: Time-decay functions handle claims that resolve slowly. Stakes can be adjusted as evidence emerges.
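A minimal sketch of how quadratic weighting and time decay could combine, assuming the common square-root form and an exponential half-life; both formulas are illustrative, not the protocol's published math.

```python
# Sketch of quadratic stake weighting plus time decay (forms are assumptions).
import math


def quadratic_weight(rs_staked: float) -> float:
    """Signal weight grows with the square root of RS staked:
    doubling the stake multiplies influence by only ~1.41x."""
    return math.sqrt(rs_staked)


def decayed_weight(rs_staked: float, age_days: float,
                   half_life_days: float = 180.0) -> float:
    """For slow-resolving claims, older stakes count for progressively less."""
    return quadratic_weight(rs_staked) * 0.5 ** (age_days / half_life_days)


print(quadratic_weight(100), quadratic_weight(200))  # 10.0  ~14.14
print(decayed_weight(100, age_days=180))             # 5.0
```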
Why Now
AI Training Data Crisis
Every major lab is desperate for quality signals. Synthetic data is hitting walls. Trust-weighted content scores are a potential game-changer for data curation.
Platform Trust Collapse
Users actively seek credibility tools. The major platforms have lost the benefit of the doubt. There’s demand for new infrastructure.
Staking Mechanics Mature
Crypto infrastructure has battle-tested the primitives we need. Staking, slashing, and reputation tokens are well-understood patterns.
Regulatory Tailwinds
EU and US pushing for content accountability. Platforms will need credibility signals whether they want them or not.
The Theoretical Foundation
Convergent validation from independent research traditions
This isn’t speculative design. The mechanisms we’re formalizing have been validated by:
Anthropology: Cross-cultural studies confirm reputation-based resource allocation works identically across US office workers, Indian professionals, and Central African hunter-gatherers (WSU, 2023).
Evolutionary Game Theory: Costly signaling theory (Zahavi, Grafen) explains why expensive signals are honest signals. Third-party punishment serves as a costly signal of trustworthiness (Jordan et al., Harvard).
Indirect Reciprocity: Richard Alexander’s framework—"I help you, someone else helps me, mediated by reputation"—is unique to humans and develops in early childhood. It’s evolutionarily stable under specific social norms.
Ancient communities figured out reputation systems through trial and error. Modern researchers independently derived the same principles mathematically. The convergence is what makes this compelling—not new theory, but validated infrastructure.
The Trust Engine makes credibility the coin of the realm.
We’re building the infrastructure that social media accidentally broke. Interested in discussing the protocol, the math, or potential applications?