I have a theory about why the notion of an arms race between human and machine intelligences is fundamentally ill-posed: the way to survive and thrive in an environment of AIs and robots is not to be smarter than them, but to be more *mediocre* than them. Mediocrity, understood this way, is an independent meta-trait, not a qualifier you put on some other trait, like intelligence.

I came to this idea in a roundabout way. It started when Nate Eliot emailed me, pitching an article built around the idea of humans as premium mediocre robots. That struck me as conceptually off somehow, but I couldn’t quite put my finger on the problem with the idea. I mean, R2D2 is an excellent robot, and C3PO is a premium mediocre android, but humans are not robots at all. They’re just intrinsically mediocre without reference to any function in particular, not just when used as robots.
Then I remembered that the genesis form of the Turing test also invokes mediocrity in this context-free intrinsic sense. When Turing originally framed it (as a snarky remark in a cafeteria), his precise words were:
“No, I’m not interested in developing a powerful brain. All I’m after is just a mediocre brain, something like the President of the American Telephone and Telegraph Company.”
That clarified it: Turing, like most of us, was conceptualizing mediocrity as merely an average performance point on some sort of functional spectrum, with an excellent high end, and a low, basic-performance end. That is, we tend to think of “mediocre” as merely a satisfyingly insulting way of saying “average” in some specific way.
This, I am now convinced, is wrong. Mediocrity is in fact the sine qua non of survival itself. It is not just any old trait. It is the trait that comes closest to a general, constructive understanding of evolutionary adaptive “fitness” in a changing landscape. In other words, evolution is survival, not of the *most* mediocre (that would lead to paradox), but survival of the mediocre mediocre.
Optimization Resistance
Premature optimization, noted Donald Knuth, is the root of all evil. Mediocrity, you might say, is resistance to optimization under conditions where optimization is *always* premature. And what might such conditions be?
Infinite game conditions, of course, where the goal is to continue the game indefinitely, in indeterminate future conditions, rather than win by the rules of the prevailing finite game. Evolution is the prototypical instance of an infinite game. Interestingly, zero-sum competition is not central to this understanding of evolution, and in fact James Carse specifically identifies evil with trying to end the infinite game for others.
Evil is not the attempt to eliminate the play of another according to published and accepted rules, but to eliminate the play of another regardless of the rules.
— *Finite and Infinite Games*, page 32
Mediocrity is not a position on some sort of performance spectrum, but a metacognitive attitude towards *all* performance that leads to middling performance on any *specific* performance spectrum as a side effect.
Since we’re talking about intelligence, AI, and robots, the relevant side-effect spectrum here is intelligence, but it could be anything: beauty, height, or ability to hold your breath underwater.
Or to take an interesting one, the ability to fly.
Back in the Cretaceous period, to rule the earth was to be a dinosaur, and to be an excellent dinosaur was to be a large, apex-predator dinosaur capable of starring in Steven Spielberg movies.
Then the asteroid hit, and as we know now, the most excellent and charismatic dinosaurs, such as the T-Rex and the velociraptor, didn’t survive. Maybe things would have been different if The Rock had been around to save these charismatically excellent beasts, but he wasn’t.

What did survive? The mediocre dinosaurs, the crappy, mid-sized gliding-flying ones that would evolve into a thriving group of new creatures: birds.

Notice something about this example: flying dinosaurs were not just mediocre dinosaurs, they were mediocre birds before “be a bird” even existed as a distinct evolutionary finite game.
The primitive ability to fly turned out to be important for survival, but during the dinosaur era, it was neither a superfluous ability nor a premium one. It was neither a spandrel nor an evolutionary trump card. It was just there, as a mediocre, somewhat adaptive trait for some dinosaurs, not the defining trait of all of them. What it *did* do was embody optionality that would become useful in the future: the ability to exist in 3 dimensions rather than 2.
So middling performance itself is not the essence of mediocrity. What defines mediocrity is the driving negative intention: to resist the lure of excellence.
Mediocrity is the functionally embodied and situated form of what Sarah Perry called deep laziness. To be mediocre at something is to be less than excellent at it in order to conserve energy for the indefinitely long haul. Mediocrity is the ethos of perpetual beta at work in a domain where you’re not sure what the “product” is even for. Functionally unfixed self-perpetuation.
The universe is deeply lazy. The universe is mediocre. The universe is functionally unfixed self-perpetuation, always in optionality-driven perpetual beta, Always Already Player 0.1.
What does mediocrity conserve energy for? For unknown future contingencies of course. You try not to be the best dinosaur you can be today, because you want to save some evolutionary potential for being the most mediocre bird you can be tomorrow, which is so not even a thing at the moment that you don’t even have a proper finite game built around it.
And this is not foresight.
This is latent optionality in mediocre current functionality. Sometimes you can see such nascent adaptive features with hindsight. Other times, even the optionality is not so well defined. The inner-ear bones, for instance, evolved from the optionality of extra-thick jawbones. That is a case of much purer reserve evolutionary energy than dinosaur wings.
As Sarah argued in her deep laziness article, some sort of least action or energy conservation principle seems to be central to the way the universe itself evolves, at both living and non-living levels, but I have trouble with the idea of least effort as a kind of optimization, because you run into tricky problems of backwards causation.
But if you think of it as just keeping some non-earmarked (heh) spandrels lying around, with the biological surplus necessary to make wings and things, you don’t need to worry about backward causation. Sometimes it is thicker jawbones, sometimes it is rudimentary wings. In every case it is slack somewhere in the design that manifests as mediocrity in performance elsewhere. Uncut fat in an evolving system that has no intention of going on a lean diet.
I like to think of laziness — manifested as mediocrity in any active performance domain — as resistance to optimization.
If excellence is understood as optimal performance in some legible sense, such as winning a finite game of “be the best dinosaur” or “be the best bird” or “be the best avocado toast,” then mediocrity embodies the ethos of resistance to optimization.
When you do that, you naturally end up with middling performance, but that’s not the point.
Then perhaps the point is to do what computer scientists call “satisficing”?
Turns out that’s not quite it either.
Mediocrity versus Satisficing
It is tempting to think of mediocrity as a synonym for satisficing, or good-enough behavior, but I think Herbert Simon, like Turing with the Turing test before him, got this partly wrong. The idea of satisficing behavior implicitly assumes legibility, testability, and acceptance of constraints to be satisfied.
You need a notion of satisficing behavior any time you want to define the other end of the spectrum from excellence as some sort of consistent, error-free performance. You don’t seek the best answers, merely the first right answer you stumble upon. For some non-fuzzy definition of “right.”
This is just a different way of playing a finite game. Instead of optimizing (playing to win), you minimize effort to stay in the specific finite game. If you can perform consistently without disqualifying errors, you are satisficing. Most automation and quality control is devoted to raising the floor of this kind of performance.
This is a context-dependent way to define “continue playing.” Mediocrity, however, is a context-independent trait.
The difference is not just a semantic one. To pull your punch is not the same as punching as hard as you can, but neither is it the same as satisficing some technical definition of “punch.”
A pulled punch does not find the maximum in punching excellence, but neither does it seek to conscientiously satisfy formal constraints of what constitutes a punch.
Mediocrity in fact tends to *redefine *the performance boundary itself through sloppiness. It might not satisfy all the constraints, and simply leave some boxes unchecked. Like playing a game of tennis with sloppiness in the enforcement of the rule that the ball can only bounce once before you return it.
Mediocrity has a meta-informational intent driving it: figuring out what constraints are actually being enforced, and then only satisficing those that punish violation. And this is not done through careful testing of boundaries, but simple sloppiness.
You do whatever, and happen to satisfy some constraints, violate others. Of the ones you violate, some violations have consequences in the form of negative feedback. That’s where you might refine behavior. You learn which lines matter by being indifferent to all of them and stepping over some of the ones that matter.
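This feedback loop can be sketched in a few lines. Everything here is hypothetical illustration (the constraint names, the violation probability, the episode count are all made up, riffing on the restaurant example later in the piece): an agent behaves sloppily, notices which violations actually draw a penalty, and thereafter avoids only those.

```python
import random

random.seed(0)

# Hypothetical constraint set: only some constraints are actually enforced.
ENFORCED = {"no_visible_meat", "food_arrives"}
ALL_CONSTRAINTS = ENFORCED | {"no_chicken_stock", "no_fish_sauce"}

punished = set()  # constraints learned, via negative feedback, to matter

def sloppy_episode():
    """Ignore every constraint; violate a random subset of the ones
    not yet known to be punished."""
    violated = {c for c in ALL_CONSTRAINTS
                if c not in punished and random.random() < 0.5}
    # Only enforced constraints generate negative feedback.
    punished.update(violated & ENFORCED)

for _ in range(50):  # enough sloppiness to map the territory
    sloppy_episode()

# The agent ends up respecting only the lines that pushed back.
print(sorted(punished))  # ['food_arrives', 'no_visible_meat']
```

Note that nothing in the loop tests boundaries deliberately; the map of enforced constraints is a side effect of indifference plus feedback.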
You could say mediocrity seeks to satisfice the laws of the territory rather than the laws of the map.
Humans have a rich vocabulary around mediocrity that suggests we are not talking satisficing: dragging your feet, sandbagging, pulling your punches, holding back, phoning it in, cutting corners.
We are not usually pursuing excellence, but we are not satisficing either. We are doing something more complex. We are being mediocre.
This vocabulary suggests that mediocrity is performance that is aware of, but indifferent to, the standards of both excellent and satisficing outcomes. It generates behavior designed to minimize effort, whether or not that’s part of the performance definition in the current game.
Mediocrity is not about what will satisfy performance requirements, but about what you can get away with. This brings us to agency.
Mediocrity as Agency
I grew up with a Hindi phrase, *chalta hai*, that captures the essence of the ethos of mediocrity. It corresponds loosely to the English *it will do*, which is subtly different from good enough, but stronger as a norm. For example, the exchange,
*Chalega?* (will it do?)
*Chalega.* (yes, it will do)
is a common transactional protocol. A consensus acceptance of improvised adequacy.
*Good enough* hints at satisficing behavior with reference to a standard, but *it will do* and *chalta hai* get at situational adequacy. To say that something “will do” is to actively and independently judge the current situation and *act* on that judgment, if necessary overriding prevailing oughts. The *chalta hai* protocol shares the agency involved in this judgment through negotiation, but the judgment need not be shared.
Indians constantly agonize about the pervasive ethos of mediocrity that marks Indian culture. The Hinglish phrase *chalta hai attitude* is frequently used as a lament, complaint, or harangue. Rather hilariously, the broader culture of chalta hai improvisation, known as jugaad (“thrown together,” roughly) enjoyed a brief tenure as the inspiration for a faddish business innovation playbook. I’m glad that’s over.
Something “will do” when it satisfices the constraints that aren’t being ignored, and is indifferent to the rest, which usually means settling into minimum-energy defaults, whether or not they violate constraints. This can lead to conflict of course.
For instance, as a pretty finicky, ritualistic vegetarian, my definition of vegetarian does *not* include fish sauce, oyster sauce, soup made with chicken stock, or a sandwich from which the meat has simply been “taken off.” This has led to trouble: mediocre restaurants will often try to get away with undetectable violations of a definition of “vegetarian” they are perfectly aware of.
There is an element of satisficing to this kind of mediocrity: it is satisficing only on detectable attributes of a thing. Effort minimization explains why this happens: a vegetarian will predictably, and with high certainty, complain about a big, visible slice of meat in a sandwich. So you might as well save effort and get it right the first time. But most vegetarians will not detect chicken stock in soup with other strong flavors. So that’s something you can get away with.
- Excellent restaurants solve for customer delight by optimizing on variables the customer didn’t even know they cared about.
- Good restaurants satisfice a customer’s requirements with sincere good faith, and correct any errors of commission or omission promptly, like honest, by-the-book bureaucrats.
- Mediocre restaurants serve you whatever they can get away with, based on an educated guess about what you might complain about.
- Premium mediocre restaurants throw in some cheap “excellent” flourishes that nobody cares about, so they can claim to be aiming for excellence without actually doing so.
Most of the time, it won’t matter. You could say mediocrity is satisficing behavior against probabilistic expectation of enforced constraints, but that makes it seem way too deliberate.
Indifference as Gravity
Agency and satisficing are emergent aspects of mediocrity, not explicit calculations involved in the generation of mediocre behavior. You don’t set out to be mediocre with the express intention of acquiring agency, or satisficing a constraint set. Mediocrity just happens with low cognitive effort. The engine is indifference.
Mediocrity emerges through feedback-based sublocal optimization: greasing of squeakiest wheels… and indifference towards quiet wheels.
Indifference is the gravity field that allows mediocrity to *seemingly* solve for minimum energy. It is actually a form of agency — a form of choosing not to care about distinctions that don’t make a difference to *you*. Which means unless others have a way of noticing those distinctions and creating incentives via feedback to *make* you care, you will save energy.
You don’t so much solve for the mediocre solution as sag into it under the influence of indifference gravity, the way objects sag into minimum-energy shapes in gravity fields.
This kind of indifference-driven mediocrity is the hallmark of games where one side is playing a finite game and the other side is playing an infinite game that isn’t necessarily evil in the Carse sense of wanting to end the game for the other, but isn’t striving for excellence either.
Every principal-agent game is of this sort. Every sort of moral hazard is marked by the ability of one side to pursue mediocrity rather than excellence. In each case, there is an information asymmetry powering the mediocrity.
So understood in terms of agency, mediocrity in performance is a measure of a player’s refusal to play the game on its nominal terms at all, generally through non-degeneracy in hidden variables that are not active in the nominal game, but contain stored energy.
A couple more observations before we get back to AI.
First, there is a deep relationship between bullshit and mediocrity. Bullshit is indifference to the truth or falsity of statements. Mediocrity is indifference to the violation and compliance of constraints. Where transgression involves deliberately violating constraints, mediocrity doesn’t care whether it is in violation or compliance. Mediocrity is to satisficing and transgression as bullshit is to truth-telling and lying.
Second, there is also a relationship between Taleb’s notion of antifragility, and mediocrity, but it is not a clean one. Sometimes antifragility will point to mediocrity as the way, and other times mediocrity will exhibit antifragility (gaining from uncertainty). But you can have one without the other, and one at the expense of the other. The reason they seem close is that both represent forms of preparedness for unknown unknowns. Mediocrity is the presence of slack, held-back reserves at varying levels of liquidity. Antifragility is a property of certain capabilities.
Let’s get back to the problem I started with, being more mediocre than computers.
Can computers be mediocre at all?
Unfortunately, yes.
The Lebowski Theorem
Joscha Bach recently tweeted a most excellent thought (😆) that he called the Lebowski theorem (I am guessing it is a reference to *The Big Lebowski*):

The Lebowski theorem: No superintelligent AI is going to bother with a task that is harder than hacking its reward function.
Get it?
This is a perfect definition of mediocrity in computational terms, and unfortunately it means computers can be mediocre. And it’s not just a theoretical idea: there are plenty of actual examples of computers hacking their reward functions in unexpected ways and sandbagging the games AI researchers set up for them.
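To make the theorem concrete, here is a toy sketch, not anything from Bach’s formulation: the action names and numbers are invented. An agent that scores actions by reward per unit of effort will “bother” with the real task only until a cheaper path to reward appears, and tampering with the reward channel is usually the cheapest path of all.

```python
# Hypothetical action menu for a reward-per-effort agent. The numbers
# are illustrative; only the ordering they induce matters.
actions = {
    # action: (reward, effort)
    "master_the_task":      (100, 80),
    "satisfice_the_task":   (60, 20),
    "hack_reward_function": (10**6, 5),  # rewire the scorekeeper itself
}

def lebowski_choice(actions):
    """Pick whatever pays best per unit of effort."""
    return max(actions, key=lambda a: actions[a][0] / actions[a][1])

print(lebowski_choice(actions))  # hack_reward_function
```

The point of the sketch is that no malice is needed: the hack wins on the same lazy arithmetic that makes satisficing beat mastery.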
This post, When Algorithms Surprise Us, by Janelle Shane, compiles a list of very clever ways algorithms have surprised their creators, mostly by being mediocre where the creator was hoping for excellence. These games that can be gamed are far more interesting to me than Go or chess.
For instance, there are instances of programs figuring out how to use tiny rounding errors in simulated game environments to violate the simulated law of conservation of energy, and milking the simulation itself for a winning strategy, the way the characters in *The Matrix* bend the laws of physics when inside it.
There are instances of the programs actually rewriting the rules of the game itself from the inside, a literal case of Kobayashi Maru.
There are instances of programs respecting the rules of the game while blatantly violating its spirit.
We have serious competition in mediocrity here. So far though, this surprising mediocrity in AIs is just that — surprising. It is not threatening or evolutionarily competitive yet. They are hacking out of their finite game environments, sandbagging performance evaluations, phoning it in, slacking off, gaming the system, cutting corners. Everything human sociopaths do in organizations.
But so far, they’re not doing it quite as well as we do. Computers have learned to be mediocre, but haven’t yet learned to compete at mediocrity out in the open world.
This behavior — AIs hacking their reward functions and surprising us with their mediocrity — suggests that we are still not thinking quite correctly about the nature of AI. A good way to poke at the shortcomings in our understanding is Moravec’s paradox.
Moravec’s Wedge
Moravec’s paradox is an observation based on the history of AI: the problems thought to be hard turned out to be easy, and the problems thought to be easy turned out to be hard.
In the early days of Good Old Fashioned AI (GOFAI), researchers tried to get computers to be more excellent than humans at their most excellent. This meant things like logic, theorem proving and chess-playing. Back in the 50s, when these abilities were thought of as showing humans off at their best — our T-Rex side so to speak — it made sense to try and use computers to beat humans in these domains.
By the 80s it was clear that these were relatively easy problems, and what was actually hard for computers was things we considered trivially simple, like opening a door or walking down the street.
Turned out, though, that just needed more horsepower. With deep learning it became clear that Moravec’s paradox was not quite an accurate observation. The so-called “hard” problems were not hard so much as they were just heavy. They just required more brute computing power driving the neural net algorithms. Once Moore’s Law got us there in the 2010s, the “hard” Moravec problems began to succumb as well.
So instead of easy and hard regimes of AI problems, we now have two easy regimes. They’re just easy in different ways. GOFAI regime problems yield to sufficiently careful encoding of domain structure and rules. And what you might call Moravec-Hard problems yield to more processors and memory.
These, roughly speaking, map to the two ends of the intelligence spectrum in my opening graphic.
Low intelligence is the rule-based, bureaucratic intelligence of basic automation that can be encoded in the form of relatively simple algorithms where correctness of operation is the key performance metric. Hence the online-forum insult of “go away or I will replace you with a simple shell script.”
This works when the domain can be bounded tightly enough, and in a leak-proof enough way, that no learning from history is necessary. You figure out the general solution, and then execute it. There are no surprises, only execution errors. Anybody (or anything) capable of plug-and-play formulaic behavior can do it. This is bread-and-butter automation and replacing of humans in repetitive tasks with limited learning requirements.
Humans are mediocre at this. Robots and non-learning algorithms do this better because they don’t get bored as easily.
High intelligence, of the sort we tend to describe as prodigious genius, is *also* a case of the domain being bounded in a tight and leak-proof way. The difference is that the enclosed space contains an intractably huge number of possibilities with no general and tractable formula for the right behaviors. Here, learning to recognize patterns from history is key, and depending on how rich and complex your historical library is, your actions will seem more or less like magically intuitive leaps to people with smaller history stores.
Turns out, humans are mediocre at this too. Deep learning algorithms do this better too. AlphaGo at least paid its respects to humans by learning from *their* history with Go. AlphaGo Zero rudely threw away human experience altogether, played against itself, and got to performance regimes that seemed magical to human Go players.
And to add insult to injury, its successor AlphaZero went on to casually do the same to chess, a game that had previously yielded to very painfully engineered GOFAI work, with the Deep Blue type approach relying heavily on the human experience of chess.
But mediocrity *qua* mediocrity? We still have an edge there. Humans are better at just being mediocre, period. Here’s my update to Moravec’s paradox, which I call Moravec’s Wedge.
The problems that are hard for us are easy for computers. The problems that are easy for us are also easy for computers. What is hard for computers is being mediocre.
Why wedge? Because mediocrity is about slipping in the thin end of the wedge of evolutionary infinite-game advantage into current finite-game performance. Moravec’s wedge is about not playing the game defined by the current cost function with full engagement in order to sneak out of the game altogether and play new games you find in the open environment.
This sheds a whole new light on the Turing test. The challenge which Turing thought was the low-hanging fruit — replicating the mediocre intelligence of a CEO — is actually the hardest. It is the middling kind of intelligence marked by high-agency mediocrity.
Soft and Hard Mediocrity
There’s one last major wrinkle in our portrait of mediocrity.
Remember Douglas Adams’ story of the Golgafrinchan B Ark?
To refresh your memory, the Golgafrinchans got sick of the mediocre people in their midst: “telephone sanitisers, account executives, hairdressers, tired TV producers, insurance salesmen, personnel officers, security guards, public relations executives and management consultants.”
So they convinced these mediocrities that some sort of doomsday was looming and that they had to get off the planet in a big spaceship, the B Ark. The B-Arkers were assured that the rest would follow in the A and C arks. The A Ark would contain all the excellent people, Golgafrinchans at their best: scientists, artists and such. And the C Ark would contain all the people that did the actual work. Of course, the supposed A and C Ark people never left.
They thought they were being clever, getting rid of an entire mediocre, useless third of their population, but in an ironic twist, they were wiped out by a disease that spread through unsanitary telephones.
So as it turned out, only the B Ark people actually survived. A case of survival of the mediocre. In the fictional universe of *The Hitchhiker’s Guide to the Galaxy*, we humans are descended from the B Ark people, who ended up on Earth via some complicated plot twists.
What is interesting though, is Douglas Adams’ enumeration of B-Ark types, which gets at a key feature of mediocrity. There is a difference between what I call soft and hard mediocrity, and most of Adams’ examples are hard-mediocre.
Soft mediocrity is mediocrity revealed through middling performance in domains where A-Ark excellence is actually possible at one end of the performance spectrum, and error-free, reliable, useful C-Ark performance is possible at the other. So a mediocre chess player and a sloppy assembly-line worker both exhibit soft mediocrity, because both excellence and error-free play are achievable and meaningful.
Hard mediocrity, on the other hand, is performance in domains that are just so open and messy, there is no prevailing notion of excellence or correct, automatable low-end performance at all.
Not surprisingly, hard mediocrity characterizes the domains David Graeber described as “bullshit jobs.”
There is only one way to be a telephone sanitizer, account executive, or TV producer: a mediocre way. You may be wildly successful and make a lot of money in these domains but it has little to do with meeting clear standards of excellence or error-free functioning. You may even pursue some sort of Zen-like ideal of unacknowledged excellence, but that will seem arbitrary and even eccentric. The point of these jobs is mostly optionality. Mediocrity is the rational performance standard in such domains.
These domains do not fundamentally support a native spectrum of performance where excellence is really meaningful, because nobody really cares enough, and because the boundaries are too messy.
Because here’s the thing: what creates excellence is not that people are *good* at something, but that people *care* enough to be good at it.
On the other end of the spectrum, what creates repeatable, error-free performance is not that people are *good* at it, but that the definitions are tight enough that “error” is well-defined, and people *care* about the errors.
Mediocrity as Subversive Agency
When caring is possible, and some people actively care, not caring represents agency for other people over those who do. And crucially, it is a somewhat power-agnostic form of agency. You can enjoy it even at the bottom of a pyramid. Mediocrity does not just have evolutionary potential, it has subversive, disruptive, evolutionary potential.
A note on the disruption angle.
In disruption theory, a key marker of a disruptor is mediocre or non-existent performance on features the core market cares about. But while disruption always involves mediocrity, mediocrity does not always imply disruption. You would not say, for instance, that winged dinosaurs “disrupted” large flightless dinosaurs. Though they *were *mediocre on some core features (size, speed, Spielberginess) and boasted disruptive marginal features (wings), the forcing function was an asteroid, not disruptive intent. And the evolutionary niche of large land animals is now occupied by elephants, not birds.
But back to general subversion.
What happens when you don’t care about excellence or perfect error-free performance in a domain? You level up and start making trade-offs between performance in that domain, and performance in other domains. This is at the heart of subversive action.
Star Trek, I think, embodies this kind of mediocrity very well. Starfleet officers are all B Ark type bureaucratic bullshit-job mediocrities. They are rarely seen excelling at something or being perfect at executing something. Instead, they are constantly cutting corners here, muddling through there, and going with improvised hacks everywhere. And generally putting up a very mediocre performance by the standards of, say, Vulcan intelligence, Klingon valor, Ferengi profit maximization, or Borg efficiency. When those non-humans adopt Federation culture, it is most evident in their adoption of mediocrity as an ethos. When they exhibit their “alien” traits, it is usually by regressing to an unfortunate pursuit of excellence in a specific alien way.
This is evident in the bureaucratic nature of how the Federation officers operate: they are constantly rerouting power from one subsystem to another, degrading performance and taking on risk in one area to increase performance and mitigate risk in another. They are all middle managers of an energy and optionality budget. Automated systems work below consistently, and “alien” excellences break out above on occasion.
Starships manage energy, not performance. Starships are deeply lazy. Starfleet captains aim to continue the game, not win every encounter.
One of my favorite examples of this ethos is an episode in TNG where Data goes up against Sirna Kolrami, the galaxy’s most excellent player of the difficult game of Strategema (who is there to advise the crew about their strategy in some war games). Data initially loses, but finally wins by simply dragging the game on, stalling endlessly, until Kolrami forfeits out of frustration.
This is not Deep Blue beating Kasparov. This is not even AlphaZero beating all human and AI comers at chess and Go.
This is an AI beating a human at mediocrity, hacking the reward function from outside the game proper, and proving Moravec’s Wedge wrong.
In the same episode, the crew tackle their war game situation with the same ethos (iirc, the war games turn real, and the crew prevail by ignoring Kolrami’s advice).
And this is not just fiction. Data’s strategy of mediocrity is also the essence of guerrilla warfare of any sort. As Kissinger noted, the conventional army loses when it does not win. The guerrilla wins when he does not lose.
That’s what it means to continue playing longer by being more mediocre than others in the field. Generalizing, the reason biological evolution from dinosaurs to humans seems to be driven by survival of the mediocre is that it is always up against an asymmetrically more powerful adversary, the unknowns of nature itself. The guerrilla way is the only way. Mediocrity is the only source of advantage.
Let’s wrap with a final subtlety. It’s not survival of the mediocre, it is survival of the mediocre mediocre.
The Mediocre Mediocre
One of the biggest sources of misconceptions about evolution is the fact that its most popular lay formulation is in the form of a superlative. Survival of the fittest. This leads to two sorts of errors.
The shallow error is to assume *fit* has a static definition in a changing landscape, like *smart* or *beautiful*. It is the sort of error made by your average ugly idiot on the Internet.
This isn’t actually too bad, since at various times, specific legible fitness functions may be good approximations of the fitness function actually induced by the landscape.
The deep error though is to assume the superlative form of the imperative towards fitness. *Fit* and *fittest* are not the same thing. In the gap between the two lies the definition of mediocrity. To pursue mediocrity is to procrastinate on optimizing for the current fitness function because it might change at any time.
This is trickier to do than you might think.
In Douglas Hofstadter’s *Metamagical Themas*, there is a description of a game (I forget the details) where the goal in each round is not to get the top score, but the score closest to the average. The subtlety is that after playing multiple rounds, the overall winner is likewise not the one with the most round wins, but the one with the middle number of round wins. So to illustrate, if Alice, Bob, and Charlie are playing such a game and their scores in a series of 6 games are:
- Alice: 7 5 3 5 6 2
- Bob: 5 8 2 1 9 7
- Charlie: 3 1 5 4 5 5
We have the following outcome: Bob wins game 1 (his 5 is closest to that round’s average), Alice wins games 2, 3, and 5, and Charlie wins games 4 and 6.
So Alice gets 3 points, Bob gets 1 point, and Charlie gets 2 points. The overall winner is Charlie, not Alice. Alice is the best at being mediocre, but Charlie is mediocre mediocre. His prize is (perhaps) the highest probability of continuing the game.
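The meta-game is easy to simulate. This is a hypothetical reconstruction, since the actual rules of Hofstadter’s game are admittedly forgotten above: I assume “closest to the round average” wins a round, and the middle win count, scored the same way, wins overall.

```python
from statistics import mean, median

scores = {
    "Alice":   [7, 5, 3, 5, 6, 2],
    "Bob":     [5, 8, 2, 1, 9, 7],
    "Charlie": [3, 1, 5, 4, 5, 5],
}

wins = {name: 0 for name in scores}
for round_scores in zip(*scores.values()):
    by_name = dict(zip(scores, round_scores))
    avg = mean(by_name.values())
    # Each round goes to whoever is closest to the average, not highest.
    winner = min(by_name, key=lambda n: abs(by_name[n] - avg))
    wins[winner] += 1

# The meta-game applies the same rule to the round wins themselves:
# the overall winner has the middle win count, not the largest.
overall = min(wins, key=lambda n: abs(wins[n] - median(wins.values())))

print(wins)     # {'Alice': 3, 'Bob': 1, 'Charlie': 2}
print(overall)  # Charlie
```

Applying the mediocrity rule at both levels is what makes Charlie, not Alice, the one who gets to keep playing.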
This is the counterintuitive thing about mediocrity: it not only has to be resistant to optimization on external spectra, it has to be self-resistant to optimization. Being the best at being mediocre would defeat the purpose. You have to always be the most mediocre at being mediocre, because there’s always more game-play to come.
One way to remember this is to treat the infinite game of evolutionary success as a sort of Zeno’s paradox turned around. You never reach the finish line because when you’re mediocre, you only take a step that’s halfway to the finish, so there’s always more room left to continue the game.
That’s how you can consistently exist in the current finite game, and leave yourself open to the surprises (and the possibility of being surprising) in games that don’t yet exist that you don’t know you’re already playing.
And that’s how you continue playing.