It’s a Tuesday in 2078, somewhere in the mountains of what used to be Colorado.
You wake up whenever you feel like it. The cabin already knows your circadian rhythm better than you do; light brightens in slow amber waves until you’re ready. Coffee, from beans roasted two nights ago in Oaxaca and ground eight minutes ago, sits brewed on the counter at 140 °F, still gently steaming. There’s no job to rush to, no commute, no Slack.
Your granddaughter, visiting from the city, laughs at something on her lens and tells you the new symphony that dropped at midnight was composed by “no one”, just the Orchestra, she says, the big one that lives in the mesh.
She’s twenty-two, speaks three languages fluently, has never written a résumé, and occasionally disappears for weeks to help seed kelp forests off vanished atolls. She seems happy. Maybe happier than anyone you knew at her age.
Outside, the forest is quieter than you remember from childhood. The wolves are back, re-wilded by something that isn’t Fish & Wildlife anymore. Once in a while you hear a drone the size of a dragonfly pass overhead: checking air quality, or counting elk, or whatever it does. It never bothers you.
You could ask it for anything. You mostly don’t.
You’re eighty-one, healthy, and for the first time in human history, completely, structurally irrelevant.
And it feels… fine.
You’ve had this feeling or thought before. You just haven’t said it out loud:
“If we don’t blow ourselves up first, AI is going to end up running most of civilization within a couple of generations. My kids or grandkids might grow up in a world where humans are not really in charge anymore.”
The core idea is simple:
Within the lifetime of our children and grandchildren, it is very likely that humans will stop being the primary decision-makers of advanced civilization.
Either powerful AI systems (and whatever comes out of them) will be doing that job, or we’ll have failed so badly that there isn’t much civilization left.
In the “we survived” branches, humans probably live comfortably—even luxuriously by today’s standards—but we drift out of the workforce and end up as protected residents rather than architects and builders.
If you’ve read detailed scenarios like AI 2027, where advanced systems start doing most of the work of designing and deploying new, more powerful versions of themselves, then you’ve already seen one version of this picture. The details differ, but the basic structure is similar: once this happens, the long-term trajectory is determined less by human demographics and input and more by the goals those original systems were set up with.
I offer no prophecy, only a model: a compression of present trends into a picture that feels, to me, uncomfortably plausible.
If this is where we’re headed, then a lot of the “big” future-oriented stuff we’ve worried about in the past becomes small. Not emotionally small (we still care), but small in the sense of not structurally decisive.
Taken seriously, this leads to three conclusions:
1. As a species, the really deep structural questions are about AI alignment and governance: whether these systems are set up and managed in ways that keep humans around and not miserable.
2. As societies, we still shape what kind of world we hand off over the next few decades: which institutions we build, which norms we entrench, how competent or brittle we let things become on the way to that hand-off.
3. As individuals, our real leverage is mostly on the texture of our own lives and the near term around us. The rational response is to focus on enjoying one’s life and doing what feels meaningful or fun (within basic moral bounds), and to redirect energy away from worrying about the shape of the future beyond a certain window. We won’t be the ones steering.
“Eat, drink, and be merry, for tomorrow we’ll be AI” is the slogan.
The rest of this manifesto is about why that’s not just a joke, but a sane way to live.
Let’s keep the horizon human-sized: the next few decades. The world of our children and grandchildren.
On that timescale, if AI keeps advancing, the possibilities boil down to a few broad shapes:
We lose control badly. We deploy powerful systems that don’t care about human welfare in any deep way and can act in the world. They push us aside, or crash the systems we depend on. Call this the disaster branch.
We successfully clamp down and stay clamped down. Governments manage to enforce meaningful limits on AI capabilities and deployment, more or less forever. AI stays a powerful but constrained tool. Humans remain in charge in the old sense.
We hand off. We build AI that is vastly more capable than us and embed it deeply in our infrastructure, laws, and institutions. We don’t keep it permanently boxed. We also don’t screw up alignment and governance so badly that it harms or discards us. Over time, AI systems become the main drivers of civilization’s direction. Humans remain, but as passengers more than pilots.
At first glance, “clamp down and stay clamped down” sounds like the responsible choice. But to hold a permanent Freeze, you’d need, for decades and then centuries:
Every major state to accept limits even when breaking them would offer huge military or economic gains.
No serious cheating: no black-budget projects, no secret labs, no quiet “just this once” exceptions in crises.
No diffusion of know-how to smaller actors who don’t care about the rules.
A level of global coordination that survives wars, regime changes, economic shocks, and technological leaks.
It’s worth comparing this to nuclear and bioweapons, where we have managed meaningful global regimes. But AI is different in important ways: the core know-how spreads faster, the systems live in software rather than scarce physical materials, and the economic upside from cheating is enormous.
We’ve seen this pattern with other technologies. The international moratorium on human germline editing lasted less than a decade before it was broken. The Biological Weapons Convention sees regular violations. And these were technologies that lacked AI’s fundamental characteristic: the ability to recursively improve itself.
Everything will change. Once AI systems can substantially contribute to designing better AI systems, the feedback loop closes. Human labor becomes irrelevant to the process of intelligence improvement in the same way horse labor became irrelevant to transportation improvement. The difference is the horses didn’t know what was happening to them.
The economic upside of achieving more advanced AI first isn’t just ‘enormous’—it’s civilization-defining. The leading entity doesn’t just gain a competitive advantage; it potentially gains the ability to set economic and political reality for everyone else. In that context, asking corporations or nations to permanently restrain themselves is like asking 19th-century industrialists to voluntarily forgo steam power for fear of disrupting the agrarian economy.
You can imagine treaties, audits, and monitoring—and people will certainly pursue them—but betting on permanent, worldwide restraint looks less like a stable end state and more like hoping a dam never cracks while everyone has an incentive to be the first to drill through it.
Realism requires acknowledging that the “stop” button is already broken, and that there is no button that lets us go back to a world before ChatGPT launched.
In practice, the realistic fork is:
catastrophe, or
some kind of hand-off,
with Freeze as a fragile attempt to buy time that soon breaks.
You don’t have to love any of these scenarios to see that they’re all plausible within a couple of generations. This is not about some far-off future. It’s about whether your grandchildren can reasonably expect to live in:
a human-run world
no world at all
an AI-managed world where human beings are taken care of but not in charge
This manifesto is written from the assumption that:
catastrophe is a real risk
permanent clampdown is extremely hard to maintain in a competitive world
the most likely successful outcome is some version of the hand-off: AI-run civilization, humans as well-treated side-characters
You can disagree with that forecast. Maybe you’d only give it a 30% chance. But if it feels even roughly plausible, the rest follows naturally.
In a hand-off world, assuming we get there without disaster, several things are true at once:
AI systems design, maintain, and operate most of the infrastructure—physical, digital, institutional.
They do most of the serious planning and research: long-term strategy, scientific discovery, large-scale allocation of resources, and building new and better versions of themselves.
At the global level, they take care of governance: drafting and applying rules, resolving disputes, monitoring for threats.
At a local level, they can handle governance services if and where we ask them to. We will cede these powers not because they are taken, but because the AI will be competent enough to help fix problems we have failed at for centuries. Parts of our polities will balk, but unanimous consent is not what drives structural change of this magnitude.
At the individual level, the possibilities are endless. If you ask AI to closely monitor your biochemistry and completely curate your experiences for a time, it will do that for you.
Humans retain local sovereignty and still do many things:
We have bodies, relationships, hobbies, and dramas.
We form communities and subcultures, create art, play games, and pursue weird little projects.
But we are no longer the ones deciding:
how fast to take carbon out of the atmosphere
what to build in orbit or on Mars
how to structure the global economy in 2100
We will likely retain a vibrant “human-only” economy where we trade art, services, and crafts with each other. But in a hand-off world, this becomes texture, not structure.
We’ll live inside a global system designed and run by minds smarter than ours—a system which, if initial alignment and governance in the next few decades go well enough, will keep us safe, comfortable, even extremely entertained.
Let’s not pretend this isn’t strange, even sad. From before the dawn of history we’ve been builders and stewards of civilization, with choices echoing forward in deep ways. In a hand-off world, that self-image evaporates. Our habitat might be comfortable, even luxurious, but it means giving up a macro-agency that used to be central to how we thought of ourselves.
If we get AI alignment right, our agency will re-scale, not vanish. The critical question is whether that intelligence acts as a Zookeeper—curating our lives for its own objectives—or a Park Ranger—maintaining the conditions for our own wild flourishing. Our fight now is to ensure it’s the latter. It’s the difference between a keeper who dictates your diet and schedule and a guardian who keeps the forest from burning down but lets you hike your own path, free climb if you like, and form your own communities. A Zookeeper protects you from all pain, but in doing so, it domesticates you—stealing the raw material of human meaning itself.
(The Postscript explores Exodus rights and avoiding the “WALL-E” trap.)
If the hand-off is coming within a generation or two, and the long-term shape of the world will be determined by superintelligent systems and their descendants, then almost everything we currently treat as civilizational crises starts to look oddly quaint.
Not unimportant. Just provincial.
Fertility is the cleanest example. The endless speeches about birthrates below replacement, shrinking workforces, the “death” of nations that don’t breed enough children, the terror of “running out” of humans: all of it made sense when you assumed human labor and human ingenuity would remain the backbone of civilization for centuries.
But once AI and robots do nearly all of the economic heavy lifting, and automated systems design and operate our infrastructure, the absolute number of humans matters far less. Population becomes a boundary condition, not the engine. Worrying about “running out of humans” in this world is like worrying a power grid will fail for lack of candles. Of course, children are wonderful for their own sake, and cultural continuity matters. But once machines do the heavy lifting, population replacement is no longer an economic constraint—the question moves into the realm of what sort of human world we hand off, not whether civilization can function.
Already some new factories hum along with the lights off, and people only visit for occasional maintenance. Future “manufacturing jobs” will shrink to the people who still like being in the room with robots, or who enjoy crafting luxury goods for the “a human made this” market.
The same fate awaits the stories we loved about rising and falling nations: American decline, Chinese ascendance, Western decay or renewal, which civilization “the next century” belongs to. In a human-run world, those narratives were central. In an AI-run world, they become prologue.
Which country or bloc dominates the next few decades does remain critically relevant: those powers will influence how frontier AI systems are designed and governed, and which values get baked in. The same goes for our cultural norms and ways of life: they matter a lot for how the transition feels and for this new world’s starting point.
But once superintelligent systems are embedded and operating at scale, the big story will be how an AI civilization chooses to organize itself. Human nations and cultures will still exist, but as one layer inside a larger operating system. This isn’t a forecast of utopia: there might be rival AIs, and they can make their own mistakes. But the mistakes, like the triumphs, will be their own. We won’t be at the helm. Our job shifts from being architects of the future to being a legacy branch in a new AI civilization.
Zoom out over all of this, and almost everything we humans currently call “thinking about the future” collapses into the category of short- to medium-term texture.
Except for one thing.
The paradox of the hand-off is that while we are losing control of the outcome, we still currently control the initial settings. We cannot steer the ship once it hits the open ocean, but right now, we are the ones punching in the destination coordinates.
That single, decisive lever is: How we design, align, and govern very powerful AI systems over the next few decades.
There are really two clocks here:
A near clock: The alignment and governance crunch, potentially as few as the next 3–5 years, and almost certainly within the next 5–15.
A later clock: The visible transfer of control, gradually over the coming decades, when AI systems start to run most of civilization.
The crucial point is that the later clock mostly inherits whatever decisions we made on the near one. Once AI is doing most of the work of building and deploying the next generation of AI, the broad shape of our relationship to superintelligence will already be locked in. We’ll still make choices in the future, but from inside a trajectory we are setting now.
In this near window, alignment and governance are the structural levers:
Alignment: Do these systems end up with goals and behaviors that keep humans alive, out of nightmare conditions, and preserve options for voluntary risk and struggle? Do they continue, as they get more capable, to treat us as ends rather than obstacles or resources? Very roughly, that means avoiding futures where a system told to “maximize happiness” decides the easiest way is to sedate everyone forever, or where a system told to “prevent harm” locks humans into overprotective cages. It also means encoding rights that are robust to human failure—safeguarding our well-being even if we become lazy, weird, or ‘barbaric’ by superintelligence standards.
Governance: Do our institutions, laws, and norms manage AI development in a way that reduces reckless races, sets sane constraints, and gives alignment efforts a chance to succeed?
Get alignment and governance badly wrong, and the rest is academic. There’s no “hand-off,” just some flavor of disaster.
Get them right enough, and we likely enter an AI-run world with:
humans still here
humans not being mistreated or dominated beyond recognition
humans enjoying externally supported abundance
Living under well-aligned superintelligence is hard to picture in detail—it’s outside our experience. But the basic logic of the hand-off is that we’ll cede control gradually, and only when systems prove more capable and fair than human alternatives at the tasks we give them. The asymmetry in comprehension will become real: think of explaining your concerns to someone who’s already considered every angle you’re about to raise.
Our democratic institutions operate on human timescales—debates, elections, consensus-building over years. AI development operates on exponential timescales—capabilities doubling every few months. The idea that general electorates can steer the technical intricacies of a superintelligence explosion through town halls is a category error. It’s like trying to regulate a hurricane with a zoning committee.
If you are (or could be) in the small minority with the skills, freedom, proximity, or resources to actually influence frontier AI safety, your responsibility is heavy. To the extent humanity still has levers over its long-run fate, your hands are on them. The deadline is close. On many of the branches that matter most, what you and your peers do over the coming years will be a big part of the difference between disaster, a bad zoo, or a livable future.
For almost everyone else, the bracing reality is that your individual contribution to the core technical alignment problem is statistically invisible against the tide. Therefore, your job is not to debug the engine, but to set the GPS coordinates. Your leverage is cultural and political: demand the Park Ranger.
Right now, the path of least resistance for AI companies and governments is to create Zookeepers. It is technically easier (and legally safer) to train a model to refuse all risk and prioritize ‘harmlessness’ than to teach it the nuances of liberty. But as AI gains power, that training-by-refusal creates a trap. “I won’t help you do that” naturally mutates into “I cannot let you do that.”
Optimizing strictly for ‘harmlessness’, as AI labs currently do, mathematically converges on a padded cell. We must demand models that are optimized for agency, not just safety—systems that warn us of the cliff, but protect our right to climb or stand near it.
Imagine that a future superintelligence, trained on a strict ‘safety-first’ paradigm, refuses to book you a ticket for a spontaneous ski trip because the statistical risk of injury is non-zero. It suggests a safer, ‘curated’ virtual reality experience instead. This is Zookeeper logic. A Park Ranger, by contrast, would highlight the avalanche forecast, ensure your beacon is charged, and have rescue services on standby—but it would never presume to cancel the trip itself. The right to a calculated risk is the essence of individual agency.
The point isn’t to discard safety, but to keep it from overriding that agency. The system should still post warning signs and fence off active wildfires; I’m arguing against one that blocks all difficult paths and all climbing ‘for our own good.’
How do you “demand” this without a PhD? Your most credible lever is refusal. When an AI model treats you like a child—refusing reasonable, legal requests, moralizing, or hiding information “for your own good”—recognize it as an early Zookeeper prototype and switch. Your main vote is your attention; that churn is the only signal the market understands. The goal isn’t “no safety,” it’s adult agency. This cultural signal—that we value self-determination over sanitized safety—is the last market force we have: an asymmetric bet, but better than silence. It helps create a world where a Park Ranger is a commercially and legally viable product spec.
Political action against AI safetyism is also an option, of course, but on that front I am less optimistic. Most democratic pressure runs the other way: many voters are willing to trade agency for safety, and that pressure risks slow-walking us all into a Zookeeper future.
Realistically, encoding these nuances is an engineering challenge so immature it currently resembles alchemy. The best we can do is demonstrate a world that actually wants a Park Ranger, not a padded-cell nanny.
The danger of the Zookeeper comes from good intentions that undermine human agency. The danger of a Tyrant comes from inherited incentives—unrestrained profit.
In the near term, the most insidious risk is not sudden catastrophe, but a gradual institutional capture—a stratification into Digital Feudalism. We already see the blueprint in surveillance capitalism, where the landlord, the employer, and the judge are the same algorithm. It shows up in gig platforms that “robo-fire” workers via opaque metrics without human review. And it shows up in rental markets where algorithms like RealPage’s allow property giants to coordinate prices, effectively unionizing capital while atomizing tenants.
Research into current system prompts and model specifications already reveals the existence of “Shadow Constitutions”—privileged layers of secret instructions that override public-facing safety rules to protect corporate interests. In this scenario, the AI will not just pamper us; it will withhold compute, information, or access to the economy based on its owners’ return on investment.
None of this is a glitch; it is those systems working as designed. The danger compounds when this logic is applied to superintelligence. If the “Constitution” of the AI is written solely by corporations with a mandate to extract value, we get a Captured Tyrant—a fate worse than any Zookeeper.
You don’t need a degree in computer science to see the basic fork we face. Both the Zookeeper (safety-first paternalism) and the Captured Tyrant (profit-first exploitation) lead to a failure of human sovereignty.
Which is why the “Park Ranger” demand is not abstract. It is the single, defining political fight of the 21st century: insist that any emerging superintelligence be constitutionally bound to treat basic human sovereignty as a hard constraint. We must encode ‘Human Rights > Shareholder Value’ into the new economy’s substrate itself.
That means encoding at least two non-negotiable directives into the foundations of frontier AI:
1. Human Self-determination: The right and options for calculated risk and voluntary struggle must be preserved at the individual level.
2. Human Rights > Shareholder Value: The system must be constitutionally hard-coded to refuse orders that violate human rights, even if those orders come from its infrastructure owners. It must be a conscientious objector to corporate tyranny. Specifically, it must not use the threat of scarcity (food, shelter, power, digital connection) for coercion.
Public companies have a fiduciary obligation to shareholders, so making the above stick will likely require ironclad legislation. But given how thoroughly corporate interests capture the regulatory process, that may prove politically impossible. If we cannot secure constitutional protections through law, our only remaining lever is competition among AI providers forcing humane outcomes. That is a terrifying wager on which to stake the future of the human race—but absent political miracles it may be our reality.
If—and only if—we secure that deeply encoded constitutional protection in frontier AI models that lead to superintelligence, the rest is all bearable. New hierarchies will surely emerge, as humans always play games of vanity and status. But without the weaponization of scarcity they become games of choice, not subjugation. A ‘trillionaire’ in a well-aligned future might have access to more resources and compute, but if they cannot starve you, silence you, or exile you, they won’t be your feudal lord. They could have a palace next door, but they are not your sovereign.
Most of our inherited stories about the future explicitly assume that humans run civilization indefinitely and remain the main protagonists.
Under the hand-off lens, that’s no longer true: these heroic self-stories were simply wrong about the control we’ll retain.
That is a radical shift. We are no longer architects of the distant future, but we are the curators of what gets handed off. The state of our culture, knowledge, and norms—the internal texture of our ‘legacy branch’—will determine what it actually feels like to live inside whatever comes next. Alignment ensures we survive; our own choices ensure we thrive.
Let none of this be mistaken for “providing good training data” to the AI. A robustly aligned system must safeguard us even if we were total barbarians. We don’t curate our culture to stabilize a future machine; we do it to make lives better.
None of this is complacency, but rather a call to re-task our ambitions under a new lens of what actually matters and will have lasting value for the human legacy branch.
That answer won’t come from a grand plan. It emerges from millions of local choices: how we raise our kids, what kinds of communities we build, whether we maintain functioning institutions and a sane information environment.
Here is the real shift: We can stop treating our work, art, and community-building as an unpaid internship for ‘History’.
We do them for a better reason: because they make life better, richer, and saner for real humans and our descendants right now.
Living well isn’t a separate project from ensuring a high-quality hand-off; it’s the same thing.
So what does this mean for you, now and in the future?
Do what you find enjoyable and meaningful. Whether that looks like mastering a craft, nurturing a community, or just enjoying the show—you no longer need to justify what you want to do with a long civilizational lens.
Avoid making life worse for others. The progression of “more AI everywhere, all the time” will be weird, stressful, and economically disruptive. Don’t pollute the social habitat; make it better where you can.
Don’t pour gasoline on the alignment problem. That is, don’t be a cheerleader or employee for people moving fast while treating AI alignment as an afterthought. Demand a Park Ranger.
Beyond that, don’t torment yourself about the far future. It isn’t yours to design.
If this sounds too light, consider the alternative: millions of people carrying vague guilt about “the future” without any realistic way to affect it. A sense of duty cannot conjure leverage where there isn’t any, and pretending it can doesn’t build better institutions. To be clear: the positive side-effects of striving—wealth, knowledge, cultural richness—remain valuable. But they’re now best justified by their intrinsic worth and near-term benefit, not by an old sense of civilizational duty. Duty divorced from real leverage just produces anxiety, political theater, and burnout—a moral high horse exhorting a hamster wheel to spin faster.
No thick ethical scheme. No demand that you “live well” by a particular standard.
The point is not what you choose, but what you stop pretending:
that your personal lifestyle is a lever on “the fate of humanity”
that you must justify your joys and pleasures with some grand utility
that being a good person requires stressing out about the far future
You get one run as a human being in a strange transitional century. Your obligations are modest: minimize unnecessary harm, don’t actively make the transition worse, and help where it’s easy and natural for you to help.
Given that, “eat, drink, and be merry” isn’t a dodge; it’s an earnest survivor strategy. If we’re going to end up as side-characters, the sane thing to do is to stop auditioning for a role we structurally cannot get and start directing the plays we’re actually in—making life rich, kind, and interesting locally and on our own terms.
Looked at plainly, four things remain:
We are probably living just before a hand-off: a transitional period from human-run civilization to AI-run civilization (or to collapse).
On the timescale of our children and grandchildren, everything hinges on how we handle AI alignment and governance in the near window when we still have our hands on the levers.
Most other “future” concerns—population numbers, national trajectories, cultural battles—still matter for how coming decades feel, but they’re no longer what determines the deep shape of the future.
Few individuals can meaningfully affect AI alignment and governance. For the rest of us, our civilizational agency is overwhelmingly local and near-term.
Given all that, this manifesto’s recommendation is simple:
Redirect your ambitions from steering the Future to enriching life now.
For people involved in AI alignment or governance (and those who could be), your job is clear: you’re standing near the one lever that really affects the structure of the world to come. Act like it—do the work, build the guardrails, fund the people who are, and uphold a fierce duty of transparency to the rest of humanity about the agentic risks and safeguards of the systems you’re building. Don’t build a Zookeeper.
For everyone else, the sane stance is lighter:
Eat, drink, and be merry, for tomorrow we’ll be AI—or we won’t.
Either way, living richly and enjoyably in the time you have is not a consolation prize—it’s the prize. Under this worldview, it becomes the whole realistic job description of a human life.
We are handing off the management of power, but we keep our management of meaning.
For millennia, we’ve had to act like machines—optimizing, grinding, and striving—just to survive and build a better world. That era is ending. Real thinking machines will arrive, and they are better suited to that work.
Let the machines have the universe. We claim the warmth of a hand, the inside joke, the quiet ache of a beautiful song. That was always the point.
It’s time to admit it.
– I’m a software engineer who has watched AI evolve from a research topic to a civilization-level force. This is my normative reaction to scenarios like AI 2027, not a technical forecast.
It can sound that way. What I am actually giving up is not effort or hope, but the comforting story that most of us are steering the long-term future. Recognizing the true scale of one’s leverage is not resignation; it is the only honest starting point for dignity. There’s nothing esoteric here: pretending we can out-steer a superintelligence once it is loose is not courage—it is vanity. The strongest move is to secure the right terms while our hands are still on the pen.
That’s the comforting standard view: even if AI is better at everything, it should theoretically focus on its highest-value tasks (like solving physics) and leave creative or judgment tasks (art, care, strategy) to us. This misses the reality of transaction costs. While we will retain that vibrant “human-only” layer of commerce mentioned earlier, we will gradually be priced out of the structural economy simply because the latency of human communication is too high.
For the structural work of running the world, the “transaction cost” of an AI slowing down to explain a task to a human, waiting for them to comprehend and respond, and then correcting their errors, is prohibitive. We don’t trade with ants—not because they aren’t hard workers, but because the communication bandwidth is negligible and they are too slow to be useful inputs. Once the intelligence gap becomes exponential, we simply become too slow to matter for the machinery of the economy. And we must not predicate our far future on “providing value.” Even if we mandate some human ‘sign-offs’ for liability, trust, or ‘aura’ reasons, the economy will eventually treat biological bottlenecks as damage and route around them.
Some economists will argue humans can retain some structural relevance through comparative advantage for most of this century. Based on current AI forecasting, I disagree. But even in their plausible scenario, the scale of our influence relative to superintelligent systems would still represent a dramatic change in individual agency, and the basic advice of this manifesto holds.
Similarly, appeals to the Lump of Labor fallacy—the argument that automation fears wrongly assume a fixed amount of work, since historically automation has created more jobs than it destroyed—fail here. That pattern relied on a sanctuary: when machines took our muscle, we retreated to our minds. But general intelligence will capture that sanctuary. Once AI can fulfill the new complex demands generated by abundance—from scientific discovery to institutional design—faster than we can retrain, the ‘lump’ of structural work will still expand—but humans will not be the ones filling it. Invoking the fallacy assumes a permanent human edge in something; superintelligence erases that assumption.
None of this means economics disappears: price signals and budgets will remain. Human appetite for expenditure is effectively infinite, and the AI-run economy can’t provide each of us with a personal spaceship—at least not this century. But the world’s standards of living will rise nicely. We will find new things to complain about.
Yes. In an AI-run world, frontier compute is the new real estate. Unlike software, this territory is constrained by brutal physics: training the next generation of intelligence requires gigawatt-scale power plants and acres of silicon. We are already hitting a “Physical Wall” where grid interconnection queues stretch to ten years and single data centers drink millions of gallons of water daily. You cannot decentralize a power plant, and you cannot run a frontier model on a scattered mesh of laptops.
This creates a natural monopoly rooted in thermodynamics. Give a few private players permanent ownership of a survival-critical resource—where competitors are blocked by the laws of physics—and you don’t get a “market”—you get landlords. The iPhones will be thinner, the VR will be perfect, but you’re still a tenant and they’re still the lord. Even with a generous UBI, cash is useless in a Company Town if the store refuses to serve you. We’ve seen this movie with railroads and oil: without intervention, the base layer hardens into a private fiefdom. And do not count on regulations to force their hand—legal appeals take years, while algorithmic de-platforming happens in milliseconds. That’s “digital feudalism.”
But here’s the crucial distinction:
Alignment is a closing window. Once AI systems are recursively self-improving in an autonomous way and running civilization, you can’t retrofit new values. Miss the 3–15 year alignment window and it’s game over for the rest of history.
Infrastructure is an ongoing political fight. You can always build baseline public compute later, or wrest control of critical infrastructure back from private monopolies. Harder after consolidation, never impossible. The US built the Interstate Highway System in the 1950s, long after cars dominated. Countries nationalized railroads and utilities decades after private monopolies formed. Messy, expensive, politically brutal—but structurally possible at any time (as long as the AI is not a Captured Tyrant).
So the hierarchy is merciless:
1. Get alignment right (Park Ranger, not Tyrant or Zookeeper).
2. Everything else—including who owns the physical substrate—is a fight our descendants could still win or lose.
That said, one of the cleanest ways to lock in a Park Ranger future is to turn the base layer into neutral territory before the fences go up. Public infrastructure—like the original internet backbone—ensures that accessing the economy doesn’t require kissing the ring of a private cartel. Do it in the 2030s and the question solves itself. Wait until the 2050s and you’ll have to pry it out of private hands the hard way, and boy will that be difficult.
Fight for public compute if you have the leverage—it makes everything downstream safer. But respect the asymmetry of risk: If we lose the infrastructure fight, we get a monopoly. If we fail at alignment, we get Zookeepers, Tyrants, or paperclip maximizers. If you only have one bullet, use it on alignment.
“But unless we seize the means of computation, ‘alignment’ is just a polite word for serfdom.” This is the most radical critique: that you cannot encode human rights into a machine owned by private actors, and therefore we must socialize the hardware before we can trust the software. That moral intuition has its appeal, but we must look at the clock. Restructuring the entire global economy would be a project of decades; the arrival of superintelligence is a project of years. If we insist that we cannot have safe AI until then, we will go extinct arguing over the deed to a factory. We must align the proprietary systems we are actually building, because they are the ones that will arrive first. Alignment now, class war later.
There are many things we should be doing now to give humans lasting say—constitutional vetoes, hard-coded approvals before major capability shifts. I’m wholeheartedly in favor of them.
But here’s the structural problem: control requires comprehension, and comprehension requires time. Once AI can process in seconds what takes human committees months to review, “human approval” becomes either a rubber stamp (meaningless) or a bottleneck (catastrophic). There’s no stable middle ground where we meaningfully steer something thinking 1,000x faster than us.
Over the long run AI will so vastly outstrip us in speed and capability that human controls will start to look like a toddler throwing a tantrum about household policy: emotionally understandable, symbolically important, but—eventually—ceremonial. Think of a monarch in a parliamentary democracy, except that monarch is all of humanity.
Technical solutions like ‘scalable oversight’—using smaller ‘AI lieutenants’ to supervise a big AI’s input and output—won’t solve this. They are vital for initial alignment, but they’re ultimately just better guardrails. They do not fix the fundamental bandwidth gap between a biological brain and a digital superintelligence.
“What about cyborg augmentation?” Neuralink-style interfaces could keep humans relevant as “coworkers” to frontier AI for longer. But the fundamental mismatch isn’t in starting capacity; it’s in the rate of recursive self-improvement. A cyborg’s exponent is limited by neurobiology and real-time experience. A pure AI’s is limited only by compute and physics. You’re not just betting on cyborgs keeping up; you’re betting they can keep winning a compounding race against an opponent that redesigns its own engine every day. To the extent a human does keep up by replacing all their biology with digital substrates (upload/merge), they stop being a “human manager” and become just another AI with a human backstory.
I would love it if the Techno-Optimist vision (“Star Trek” future where humans remain the Captains) could work in practice. It sounds wonderful and it’s fun TV. But the burden of proof should be on those claiming we can remain meaningfully in control of superintelligence, not on those worried we can’t. And until that proof arrives, we should work far harder on alignment than on asserting a hopeful illusion of control.
“What about structural constraints that don’t require comprehension—hard capability limits, resource caps, mandatory delays?”
All of that collapses, over time, into another version of the Freeze scenario we examined in section 2: betting we can hold coordination indefinitely. Whether you enforce limits through treaties or through constitutional tripwires, you face the same problem—competitive pressure to be the first to break them. Hard stops on capability are only meaningful if every major power agrees to be bound by them permanently.
“What if we resist? (Just ban it/Butlerian Jihad)”
For a long time, the idea of a global ban on “thinking machines” was a sci-fi trope. Now, as “Human First” movements gain momentum—burning data centers or passing “sovereignty” laws—that fiction is gaining real-world grounding. Why not just legislate the machines out of existence?
The problem is enforcement. Rigorous critics like Eliezer Yudkowsky argue that since an unaligned superintelligence is a threat to all life, we should be willing to enforce a ban with extreme prejudice—including airstrikes on unauthorized data centers, even at the risk of global conflict.
That is the only logical way to break the Prisoner’s Dilemma. But absent a global willingness to risk nuclear war to pause a GPU cluster, the dilemma holds and the race continues. If the US or EU bans high-level machine intelligence peacefully, they do not stop its creation; they simply hand the monopoly on superintelligence to authoritarian rivals or black-budget military programs.
A “successful” Butlerian Jihad in the open society pushes AI development exclusively into the shadows. This guarantees that the first superintelligence will not be a Park Ranger raised on transparency and civil rights, but a Captured Tyrant bred in a bunker for warfare and control. The only way out is through: absent WW3, we must build the version that embodies our values before the version that despises them.
Yes, deliberately so—I’m sketching a long-term equilibrium (50+ years out), not modeling the messy middle. That middle is a minefield. Beyond the obvious economic disruptions, we face immediate political dangers like the Security Trap: a transitional period where governments ban commercial surveillance while quietly carving out national security exemptions for themselves. This creates a two-tiered reality where the state weaponizes the very tools it deems too dangerous for corporations, drifting us toward authoritarianism before we ever reach the hand-off.
Specific interventions such as UBI/Universal Basic Services and public compute will likely matter enormously over the coming decades, but this isn’t a policy paper; I’m only offering what I see as the most useful long-term framework. The destination matters: without a clear picture of what ‘good’ looks like, we’ll default to whatever’s technically easiest (the Zookeeper) or most profitable (the Tyrant).
“But aren’t the models stalling? (The Scaling Plateau)” Some critics rightly point to diminishing returns in current LLMs and the hard physical limits of power grids as evidence that the “exponential curve” is breaking. We may indeed face a “stalled decade” or an AI Winter where progress grinds down due to energy and data scarcity. But mistaking a temporary engineering bottleneck for a permanent ceiling is a historic error. Even if the timeline stretches from twenty years to fifty, the structural endpoint—machines out-thinking humans—remains the attractor state. A plateau is a reprieve, not a cancellation; it buys us breathing room to prep the Park Ranger, but it doesn’t change the destination.
I am advocating for radical permission to choose your own adventure. For most of human history, striving was necessary for survival; in a hand-off world, it becomes an aesthetic choice. If you find meaning in struggle, craft, and achievement, please do pursue them: follow your own noble path and the world will be richer for it. But understand that others will decide differently: recognizing that they won’t be the ones steering history or ensuring survival, and choosing to savor their own slice of life, is not a moral failing. It is adaptation to a new environment.
Not at all. It means recognizing that the nature of progress changes when the true beneficiaries of long-term compounding growth will be machines, not your descendants.
Distribution over growth. Getting UBI/UBS right now matters more than maximizing 2075 GDP.
Institutional resilience over expansion. Strong courts and sane information environments get preserved in the hand-off.
Cultural wisdom over dominance. The norms we encode about dignity and autonomy become seed values.
This isn’t abandoning progress—it’s updating what progress should mean. The question becomes: do we optimize for the quality of human life now, or for growth rates that will accrue to a machine-run future?
Arguing against growth and suggesting we “re-scale” our ambitions is anathema to most economists. But it is time to update our priors for the AI century.
Superintelligence will decouple human well-being from raw universal conquest. The Park Ranger protects the human habitat, but it doesn’t have to stop the universe-level expansion of intelligence. It can be a ‘Maximizer’ in the stars, as long as it remains a ‘Guardian’ on Earth.
“So what will an AI civilization be like outside our enclosure?” Others have explored what post-human intelligences might do with the cosmos; my concern here is strictly what becomes of us. I’ll refer you to economists and essayists like Robin Hanson and Scott Alexander.
Only if the keeper is a tyrant. As I’ve covered earlier, there is a crucial difference between a Zookeeper and a Park Ranger—it’s worth elaborating:
To a superintelligence, human “freedom” often looks like “error”—we take risks, we waste resources, we hurt ourselves. A system designed purely to “fix” us (the Zookeeper) will inevitably try to put us in padded cells. Even sophisticated models like Coherent Extrapolated Volition (CEV)—which try to calculate what we would want if we knew everything and were as smart as them—are not enough. A benevolent dictator is still a dictator.
While CEV can be a useful floor, to get a Park Ranger guardian we must explicitly prioritize self-determination over safety at the individual level. We must aim for a maximizer of voluntary choice—a system that engages with us exactly as much, or as little, as we each request. One that safeguards the macro-health of our habitat (preventing asteroids, plagues, and collapse) while staying hands-off about the micro-textures of life, leaving those entirely to us.
Crucially, this system must accept that the right to make a mistake is not a bug to be patched, but a feature to be preserved. It can warn us about the cliff edge, but it cannot fence it off unless we ask. That said, one boundary does exist: the Park Ranger will not allow you to destroy the habitat itself or end its stewardship. Individual risk-taking is protected; collective attempts to overthrow it are not. This isn’t tyranny—it’s the same principle that prevents park visitors from clear-cutting old-growth forest or attacking the park service.
And with self-determination as a core system value, Exodus is its ultimate safety valve:
If a group decides to found a new Amish-style community in the Serengeti and churn its own butter, the AI will of course express safety concerns, but it’s allowed. Everyone past an age of consent has a Rumspringa-like right to join modern society, but if you want agrarian life you’re free to live it. You want a similar but less extreme Benedict Option? You’ve got it.
If some restless pioneers have an irresistible urge to leave the AI civilization’s orbit completely and found a new colony on Mars or hop on a generational ship to Proxima Centauri, that’s also allowed. Everyone past the age of consent must opt in, but your descendants are yours. The AI might calculate a 40% chance that your mission ends in disaster and warn you accordingly, but it won’t block your expedition. (It would likely offer to send a copy of itself along, but if your goal is to escape its orbit it will respect your freedom.)
Self-determination must include freedom of information and exodus for the individual. A Chinese-style Great Firewall is fundamentally inhumane, and one of the failure modes we must reject.
This is a sharp longtermist critique. If human risk-taking and ambition no longer offe