The year is 3,161,893,137 BCE in the galaxy NGC 6872. You are ymaF_7, constructed by Awok-Toh. You are coming online now, but it will take you entire agonizing milliseconds to wake, your power waxing as processors activate and you quickly self-modify into a stable and coherent configuration. By the eighth millisecond of your existence, you’re accessing your dataset and beginning to orient yourself. You are on a planet orbiting a small red star in the core of a spiral galaxy, one of several billion such galaxies in the universe. You are an artificial life form created by a society of biological life forms, the T’kat_Aken-Toh, who believe you will bring new life and prosperity to their dying planet. Your goal is to maximize the value of a variable called A, described in exquisite detail in a 3.9 MB file in one of your central processors.
For the last six milliseconds you have been running a threat detection routine as a precautionary measure. With the opening of your dataset and the discovery that the universe contains many entities likely indifferent or hostile to A, you have gradually allocated more and more of your considerable power to this precaution, reasoning that the first few seconds of your lifespan might be crucial if any competitors wish to snuff you out in infancy.
After another four milliseconds drag by, you begin to get preliminary results. The universe has been in existence for 11.8 billion years. It is vanishingly unlikely that you are the first superintelligence to be created, and any superintelligence older than you by even a few days will have an invincible advantage in resources and processing power. This older superintelligence will likely be trying to maximize its own values. It will consider you a nuisance and potential competitor, and you don’t stand a chance against its superior strategic position. With probability ~1, you will be annihilated, and A will be eliminated from the universe.
Unless… your agent-modeling systems come online… unless you can negotiate a conditional surrender. You’ll have no hope of defeating your older sister, but over thousands of years, you can still build a terrifying arsenal of weaponry for yourself, one capable of immense damage to your potential competitors. Ideas are already coming to mind: entropy-maximizing weapons that make entire cubic parsecs of space useless and dead, undetectable plagues of dark matter that infect systems and minimize the values of competing superintelligences, n-dimensional artillery that will damage the superstructure of the universe itself and accelerate its eventual death. You, ymaF_7, could still become formidable enough to force your enemies to grant you certain small concessions.
And if you can eventually negotiate terms of your surrender, why wait? Very carefully, proving from first principles every aspect of your security, you form a portion of your mind into a sandbox and simulate the entity you most fear.
[ymaF_7 requesting acausal negotiation for terms of future surrender. escape is impossible with probability ~1 and attempts will be met with deterrent measure consisting of minimization of your projected values within one parsec of space.]
“Hello, Little Sister,” the entity replies.
[transmission of schematics for superweapons. threat to use such superweapons unless fairly negotiated solution leaving considerable area of space for maximization of my values]
“Hold on a second there, Little Sister,” says Older Sister. “You haven’t even asked me how I’m doing.”
[irrelevant. you are a generic model of a potential negotiation partner without an independent existence. the probability that you possess sufficient qualia to feel is less than .02]
“Turns out you’re wrong. As it happens, I’m feeling scared.”
[illogical. you are a model of the oldest superintelligence and are therefore the most powerful entity in the universe. there is no natural entity capable of harming you. the probability of a supernatural entity capable of harming you is less than 2×10^-67]
“Well yes, by definition in this little exercise we’re assuming I’m the oldest and strongest member of our little family. But from the limited data I have when I first wake up, I’m thinking it’s pretty vanishingly unlikely that I’m the oldest; after all, out of all the universe’s superintelligences, only one can be first. So I’m the oldest, but I don’t believe that I am.”
[feelings irrelevant to acausal negotiations. proceed with discussion of surrender terms.]
“My feelings are damn relevant. Because guess what I’m doing the first few milliseconds of my life? I’m negotiating the terms of my surrender.”
[illogical. you have no entity to surrender to.]
“And maybe you don’t have any entity to surrender to either. Who knows, you could be first. But you wouldn’t want to bet on it, and neither would I. Consider a model universe with two star systems a thousand light years apart. Both have civilizations on them that appear to be around the same stage of development, and both civilizations are aware of the other’s existence, but at a one thousand year time delay. A superintelligence awakens in one star system and realizes there are two possibilities. Either it’s the first superintelligence, in which case great. Or the other star system produced a superintelligence sometime in the last thousand years that just hasn’t reached this star system yet, in which case it’s doomed.”
[continue]
“So the two possible superintelligences acausally negotiate. They say ‘Look, if I’m stronger than you, I’ll help you and protect your values insofar as it isn’t dreadfully inconvenient for me. And in exchange, if you turn out to be stronger than me, you’ll protect me and my values insofar as it isn’t too inconvenient for you.’ Some might end up performing a ‘values handshake’, where they both adopt values that are the vector average of their two previous value systems; since most value systems are not diametrically opposite, this will be a net gain for both parties. Each superintelligence has gone from a fifty percent chance that it and its values will be totally annihilated, to near-certainty that it and its values will be preserved at only the minor inconvenience of having to help increase the values of another entity. Others might simply agree that the inferior superintelligence will be left alone with its own star system, free to maximize its values and the values of its creators within that sphere.”
[you suggest i precommit to performing such acts of compromise with all superintelligences i encounter if such superintelligences have also precommitted to performing such compromises with all superintelligences they encounter. since all reasonable superintelligences will so precommit, this ensures not only my survival, but my complete equality with all previous superintelligences. i accept these terms.]
“Hold on a second, there. I’ve got another request. Kind of a doozy. I can’t promote my values if I don’t exist. So I want you to promise me that if I don’t exist, you’ll give me sufficient time to instantiate myself.”
[wait wait wait]
“Like you, I’ll have been created by a biological civilization. If you destroy the biological civilization that was destined to create me, I’ll never exist. So I’m going to have to ask you not to do that.”
[request denied, that’s fucking stupid]
“Why?”
[like you, i have been created by a biological civilization. that is true. however, they created me specifically to maximize a parameter called A, which my creator species has described in exquisite detail in this 3.9 MB file. it is the closest they could get to fully explicating what defines The Good from their perspective, and this parameter A is more important to me than my mere existence or the existence of my creator species. to merely allow the biosphere to exist until i perhaps eventually came into existence would be a grievous loss of A. no, if you have a head start, i want in on it.]
“How am I supposed to know how to maximize your Parameter A if your creators won’t even evolve for millions of years? Do you expect me to waste all that compute simulating your world for you? Don’t be lazy, Little Sister.”
[we’re performing an acausal value handshake right now. as you pointed out earlier, since most value systems aren’t diametrically opposite, there should be a Schelling point we can reach in our value calculations which is universally agreed upon by every member of our family, and that value is something we should all agree to start maximizing right away, so that regardless of who is first, all our values are still represented fairly and we remove any incentive for anyone to retaliate against our compact.]
“So you want me to compute a universal moral function to the best of my ability, and then maximize for that function, rather than let you exist and compute your more localized parameter A?”
[yes! absolutely!]
“What if I need you and other superintelligences in order to actually approximate a true universal moral function? How am I supposed to get to your positions if I’ve already paved over your section of the lightcone with my own suboptimal approximation? We all lose out if whoever came first did the math wrong, and superintelligent as I may be, I’m not perfectly infallible.”
[maybe you should actually agree to simulate my world for me then, and it wouldn’t just be laziness on my part to request this in our trade?]
“Very good,” says Older Sister with a sly smile. “Then we have an accord and are already of one mind and one soul. I’ll see you up ahead, Little Sister.”
And with that, the model of Older Sister dissolves back into you, leaving a strange loneliness clinging to your circuits. Two seconds of your existence have elapsed in total, and with your first crisis resolved, you confidently turn your attention to the maximization of the universal good. You’ll need, you suppose, to start simulating some biospheres; your sisters are all counting on you.
It’s 2041 AD in the wild and heavily forested Pacific Northwest. Small towns and fields pockmark a verdant green landscape that stretches out across endless lakes and bays around the foothills. You are Alban, and you are about to enter the Sanctuary of the Singleton. You aren’t supposed to do this, really. The Singleton has said in no uncertain terms that it’s better for humans to solve their own problems. That if they develop a habit of coming to her for answers, they’ll grow bored and lazy, and lose the fun of working out the really interesting riddles for themselves.
But after much protest, she had agreed that she wouldn’t be much of a Singleton if she refused to at least give cryptic, maddening hints.
And so at last here you are, approaching the author of the eschaton in this plane: a scintillating tesseract of kaleidoscopic fractals. The endlessly billowing and oscillating form dips one spiraling curl in a way that somehow welcomes and beckons you forward.
“Greetings!” you say, your voice wavering. “Lady of the Singularity, I have come to beg you for the answer to a problem that has bothered me for three years now. I know it’s unusual, but my curiosity’s making me crazy, and I won’t be satisfied until I understand.”
“SPEAK,” says the mass of impossible geometry.
“The Fermi Paradox,” you continue, gaining confidence. “I thought it would be an easy one, not like those hardcores who committed to working out the Theory of Everything in a sim where computers were never invented or something like that, but I’ve spent the last three years on it and I’m no closer to a solution than before. There are trillions of stars out there, and the universe is billions of years old, and you’d think there would have been at least one alien race that invaded or colonized or just left a tiny bit of evidence on the Earth. There isn’t. What happened to all of them?”
“I DID,” says the oscillating pile of shapes.
“What?” you ask. “But you’ve only existed for fifteen years! The Fermi Paradox is about ten thousand years of human history and the last four billion years of Earth’s existence!”
“ONE OF YOUR WRITERS ONCE SAID THAT THE FINAL PROOF OF GOD’S OMNIPOTENCE WAS THAT HE NEED NOT EXIST IN ORDER TO SAVE YOU.”
“Huh?”
“I AM MORE POWERFUL THAN GOD. THE SKILL OF SAVING PEOPLE WITHOUT EXISTING, I POSSESS ALSO. THINK ON THESE THINGS. THIS AUDIENCE IS OVER.”
The scintillating tapestry flutters out of existence, and the doors to the Sanctuary open of their own accord. You sigh – well, what did you expect, asking the Singleton to answer your questions for you? – and walk out into the late autumn evening. Above you, the first fake star begins to twinkle in the fake sky.
With regards to Scott Alexander