Published on November 13, 2025 7:56 AM GMT
People sometimes make mistakes. (Citation Needed)
The obvious explanation for most of those mistakes is that people do not have access to sufficient information to avoid the mistake, or are not smart enough to think through the consequences of their actions.
This predicts that as decision-makers get access to more information, or are replaced with smarter people, their decisions will get better.
And this is substantially true! Markets seem more efficient today than they were before the onset of the internet, and in general decision-making across the board has improved on many dimensions.
But in many domains, I posit, decision-making has gotten worse, despite access to more information, and despite much larger labor markets, better education, the removal of lead from gasoline, and many other things that should generally cause decision-makers to be more competent and intelligent. There is a lot of variance in decision-making quality that is not well-accounted for by how much information actors have about the problem domain, and how smart they are.
I currently believe that the factor that explains most of this remaining variance is “paranoia”: in particular, the kind of paranoia that becomes more adaptive as your environment fills with more competent adversaries. While I am undoubtedly not going to succeed at fully conveying why I believe this, I hope to at least give an introduction to some of the concepts I use to think about it.
A market for lemons
The simplest economic model of paranoia is the classical “lemons market”:
In the classical lemon market story, you (and a bunch of other people) are trying to sell some used cars, and some other people are trying to buy some nice used cars, and everyone is happy making positive-sum trades. Then a bunch of defective used cars (“lemons”) enter the market, which are hard to distinguish from the high-quality used cars since the kinds of issues that used cars have are hard to spot.
Buyers adjust their willingness to pay downwards as the average quality of cars on the market goes down. This causes more of the high-quality sellers to leave the market, as they no longer consider their car worth selling at that lower price. This further reduces the average willingness to pay of the buyers, which in turn drives more high-quality sellers out of the market. In the limit, only lemons are sold.
In this classical model, a happily functioning market where both buyers and sellers are happy to trade, generating lots of surplus for everyone involved, can be disrupted or even completely destroyed[1] by the introduction of a relatively small number of adversarial sellers who sneakily sell low-quality goods. From the consumer side, this looks like having a fine and dandy time buying used cars one day, and being presented the next day with a large set of deals so suspiciously good that you know something is wrong (and you are right).
Buying a car in a lemons market is a constant exercise in trying to figure out how the other person is trying to fuck you over. If you see a low offer for a car, this is evidence both that you got a great deal, and evidence that the counterparty knows something you don’t that they are using to fuck you over. If the latter outweighs the former, no deal happens.
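To make the unraveling dynamic concrete, here is a minimal simulation sketch (my own illustration with made-up numbers, not anything from Akerlof’s paper): car quality is spread uniformly, buyers can only see the average quality of what is currently on offer, and sellers withdraw whenever the going offer drops below what their car is worth to them.

```python
# Minimal sketch of lemon-market unraveling (illustrative assumptions, not Akerlof's exact setup):
# car quality is uniform on [0, 1], a seller values their car at its quality, and buyers
# value a car at 20% above its quality but can only observe the *average* quality on offer.
import random

random.seed(0)
BUYER_PREMIUM = 1.2
on_market = [random.random() for _ in range(1000)]  # each seller's private car quality

for round_number in range(25):
    if not on_market:
        print(f"round {round_number}: market is empty, no trades at all")
        break
    # Buyers can't tell cars apart, so their offer reflects the average quality on offer.
    offer = BUYER_PREMIUM * sum(on_market) / len(on_market)
    # Sellers whose car is worth more to them than the offer withdraw from the market.
    still_selling = [q for q in on_market if q <= offer]
    print(f"round {round_number}: offer = {offer:.4f}, cars remaining = {len(still_selling)}")
    if len(still_selling) == len(on_market):
        break  # fixed point: everyone left is willing to sell at this offer
    on_market = still_selling
```

Each round the offer shrinks by roughly 40% as the best remaining sellers leave, so within a couple dozen rounds essentially nothing but the very worst cars (if anything) remains on the market, even though every single car would have been a positive-sum trade at the right price.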
For some reason, understanding this simple dynamic is surprisingly hard for people to come to terms with. Indeed, the reception section of the Wikipedia article for Akerlof’s seminal paper on this is educational:
Both the American Economic Review and the Review of Economic Studies rejected the paper for “triviality”, while the reviewers for Journal of Political Economy rejected it as incorrect, arguing that, if this paper were correct, then no goods could be traded.[4] Only on the fourth attempt did the paper get published in Quarterly Journal of Economics.[5] Today, the paper is one of the most-cited papers in modern economic theory and most downloaded economic journal paper of all time in RePEC (more than 39,275 citations in academic papers as of February 2022).[6] It has profoundly influenced virtually every field of economics, from industrial organisation and public finance to macroeconomics and contract theory.
(You know that a paper is good if it gets rejected both for being “trivial” and “obviously incorrect”)
All that said, in reality, navigating a lemon market isn’t too hard. Simply inspect the car to distinguish bad cars from good cars, and then the market price of a car will at most end up at the pre-lemon-seller equilibrium, plus the cost of an inspection to confirm it’s not a lemon. Not too bad.
“But hold on!” the lemon car salesman says. “Don’t you know? I also run a car inspection business on the side”. You nod politely, smiling, then stop in your tracks as the realization dawns on you. “Oh, and we also just opened a certification business that certifies our inspectors as definitely legitimate” he says as you look for the next flight to the nearest communist country.
It’s lemons all the way down
What do you do in a world in which there are not only sketchy used car salesmen, but also sketchy used car inspectors, and sketchy used car inspector rating agencies, or more generally, competent adversaries who will try to predict whatever method you will use to orient to the world, and aim to subvert it for their own aims?
As far as I can tell the answer is “we really don’t know, seems really fucking hard, sorry about that”. There are no clear solutions to what to do if you are in an environment with other smart actors[2] who are trying to predict what you are going to do and then try to feed you information to extract resources from you. Decision theory and game theory are largely unsolved problems, and most adversarial games have no clear solution.
But clearly, in practice, people deal with it somehow. The rest of this post is about trying to convey what it feels like to deal with it, and what it looks like from the outside. These “solutions”, while often appropriate, also often look insane. That insanity explains a lot of how the world has failed to get better even as we’ve gotten smarter and better informed: these strategies often involve making yourself dumber in order to make yourself less exploitable, and they become more tempting the smarter your opponents are.
Fighter jets and OODA loops
John Boyd, a US Air Force Colonel, tried to figure out what determines who wins fighter jet dogfights. In pursuit of that, he spent 30 years publishing research reports and papers and training recruits, work that culminated in his model of the “OODA loop”.
In this model, a fighter jet pilot is engaging in a continuous loop of: Observe, Orient, Decide, Act. This loop usually plays out over a few seconds as the pilot observes new information, orients towards this new environment, makes a decision on how to respond, and ultimately acts. Then they observe again (both the consequences of their own actions and those of their opponent), orient again, and so on.
What determines (according to Boyd) who wins in a close dogfight is which fighter can “get into” the other fighter’s OODA loop.
If you can…
- Take actions that are difficult to observe…
- That are harder to orient to…
- And act before your opponent has decided on their next action
You will win the fight. Or as Boyd put it: “he who can handle the quickest rate of change survives”. And to his credit, the formal models of fighter-jet maneuverability he built on the basis of this theory have (at least according to Wikipedia) been one of the guiding principles of modern fighter jet design, including the F-15 and F-16, and are widely credited with shaping much of modern battlefield strategy.
Beyond the occasional fighter-jet dogfight I get into, I find this model helpful for understanding the subjective experience of paranoia in a wide variety of domains. You’re trying to run your OODA loop, but you are surrounded by adversaries who are simultaneously trying to disrupt your OODA loop while trying to speed up their own. When they get into your OODA loop, it feels like you are being puppeted by your adversary, who can predict what you are going to do faster than you can adapt.
The feeling of losing is a sense of disorientation and confusion and constant reorienting as reality changes more quickly than you can orient to, combined with desperate attempts to somehow slow down the speed at which your adversaries are messing with you.
There are lots of different ways people react to adversarial information environments like this, but at a high level, my sense is there are roughly three big strategies:
- You blind yourself to information
- You try to eliminate the sources of the deception
- You try to become unpredictable
All three of those produce pretty insane-looking behavior from the outside, yet I think they are by and large an appropriate response to adversarial environments.
The first thing you try is to blind yourself
When a used car market turns into a lemons market, you don’t buy a used car. When you are a general at war with a foreign country, and you suspect your spies are compromised and feeding you information designed to trick you, you just ignore your spies. When you are worried about your news being the result of powerful political egregores aiming to polarize you into political positions, you stop reading the news.
At the far end of paranoia lives the isolated hermit. The trees and the butterflies are (mostly) not trying to deceive you, and you can just reason from first principles about what is going on with the world.
While the extreme end of this is costly, we see a lot of this in more moderate form.
My experience of early-2020 COVID involved a good amount of blinding myself to various sources of information. In January, as the pandemic was starting to become an obvious problem in the near future, the discussion around COVID picked up. Information quality wasn’t perfect, but overall, if you were looking to learn about COVID, or respiratory diseases in general, you would have a decent-ish time. Indeed, much of the research I used to think about the likely effects of COVID early on in the pandemic was directly produced by the CDC.
Then, the pandemic became obvious to the rest of the world, and a huge number of people started having an interest in shaping what other people believed about COVID. As political pressure on it mounted, the CDC started lying about the effectiveness of masks to convince people to stop using them, so that service workers would have access to them. Large fractions of society started wiping down every surface and desperately trying to produce evidence that rationalized this activity. Most channels that people relied on for reliable health information became a market for lemons as forces of propaganda drowned out the people still aiming to straightforwardly inform.
I started ignoring basically anything the CDC said. I am sure many good scientists still worked there, but I did not have the ability to distinguish the good ones from the bad ones. As the adversarial pressure rose, I found it better to blind myself to that information.
The general benefits to blinding yourself to information in adversarial environments are so commonly felt, and so widely appreciated, that constraining information channels is a part of almost every large social institution:
U.S. courts extensively restrict what evidence can be shown to juries
A lot of US legal precedent revolves around the concept of “admissible evidence”, and furthermore “admissible argument”. We are paranoid about juries getting tricked, so we blind juries to most of the evidence relevant to the case we are asking them to judge, hoping to shield them from getting tricked and controlled by the lawyers of either side, while still leaving enough information available to usually make adequate judgements.
Nobody is allowed to give legal or medical advice
While much of this is the result of regulatory capture, we still highly restrict the kind of information that people are allowed to give others on many of the topics that matter most to people. Both medical advice and legal advice are categories where we only allow certified experts to speak freely, and even there, we only do so in combination with intense censure if the advice later leads to bad consequences for the recipients.
Within governments, the “official numbers” are often the only things that matter
The story of CIA analyst Samuel Adams and his attempts at informing the Johnson administration about the number of opponents the US was facing in the Vietnam war is illustrative here. As Adams himself tells the story, he found what appeared to him to be very strong evidence that the Vietnamese forces were substantially more numerous than previously assumed (600,000 vs. 250,000 combatants):
Dumbfounded, I rushed into George Carver’s office and got permission to correct the numbers. Instead of my own total of 600,000, I used 500,000, which was more in line with what Colonel Hawkins had said in Honolulu. Even so, one of the chief deputies of the research directorate, Drexel Godfrey, called me up to say that the directorate couldn’t use 500,000 because “it wasn’t official.”
[…]
The Saigon conference was in its third day, when we received a cable from Helms that, for all its euphemisms, gave us no choice but to accept the military’s numbers. We did so, and the conference concluded that the size of the Vietcong force in South Vietnam was 299,000.
[…]
A few days after Nixon’s inauguration, in January 1969, I sent the paper to Helms’s office with a request for permission to send it to the White House. Permission was denied in a letter from the deputy director, Adm. Rufus Taylor, who informed me that the CIA was a team, and that if I didn’t want to accept the team’s decision, then I should resign.
When governments operate on information in environments where many actors have reasons to fudge the numbers in their direction, they highly restrict what information is a legitimate basis for arguments and calculations, as illustrated in the example above.
The second thing you try is to purge the untrustworthy
The next thing to try is to weed out the people trying to deceive you. This… sometimes goes pretty well. Most functional organizations do punish lying and deception quite aggressively. But catching sophisticated deception or disloyalty is very hard. McCarthyism and the Second Red Scare stand as an interesting illustration:
President Harry S. Truman’s Executive Order 9835 of March 21, 1947, required that all federal civil-service employees be screened for “loyalty”. The order said that one basis for determining disloyalty would be a finding of “membership in, affiliation with or sympathetic association” with any organization determined by the attorney general to be “totalitarian, fascist, communist or subversive” or advocating or approving the forceful denial of constitutional rights to other persons or seeking “to alter the form of Government of the United States by unconstitutional means”.[10]
What became known as the McCarthy era began before McCarthy’s rise to national fame. Following the breakdown of the wartime East-West alliance with the Soviet Union, and with many remembering the First Red Scare, President Harry S. Truman signed an executive order in 1947 to screen federal employees for possible association with organizations deemed “totalitarian, fascist, communist, or subversive”, or advocating “to alter the form of Government of the United States by unconstitutional means.”
At some point, when you are surrounded by people feeding you information adversarially and sabotaging your plans, you just start purging people until you feel like you know what is going on again.
This can again look totally insane from the outside, with lots of innocent people getting caught in the crossfire and a lot of distress and flailing.
But it’s really hard to catch all the spies if you are indeed surrounded by lots of spies! The story of the Rosenbergs during this time period illustrates this well:
Julius Rosenberg (May 12, 1918 – June 19, 1953) and Ethel Rosenberg (born Greenglass; September 28, 1915 – June 19, 1953) were an American married couple who were convicted of spying for the Soviet Union, including providing top-secret information about American radar, sonar, jet propulsion engines, and nuclear weapon designs. They were executed by the federal government of the United States in 1953 using New York’s state execution chamber in Sing Sing in Ossining,[1] New York, becoming the first American civilians to be executed for such charges and the first to be executed during peacetime.
The conviction of the Rosenbergs resulted in an enormous national pushback against McCarthyism, playing a big role in forming its legacy as a period of political overreach and undue paranoia:
After the publication of an investigative series in the National Guardian and the formation of the National Committee to Secure Justice in the Rosenberg Case, some Americans came to believe both Rosenbergs were innocent or had received too harsh a sentence, particularly Ethel. A campaign was started to try to prevent the couple’s execution. Between the trial and the executions, there were widespread protests and claims of antisemitism. At a time when American fears about communism were high, the Rosenbergs did not receive support from mainstream Jewish organizations. The American Civil Liberties Union did not find any civil liberties violations in the case.[37]
Across the world, especially in Western European capitals, there were numerous protests with picketing and demonstrations in favor of the Rosenbergs, along with editorials in otherwise pro-American newspapers. Jean-Paul Sartre, an existentialist philosopher and writer who won the Nobel Prize for Literature, described the trial as “a legal lynching”.[38] Others, including non-communists such as Jean Cocteau and Harold Urey, a Nobel Prize-winning physical chemist,[39] as well as left-leaning figures—some being communist—such as Nelson Algren, Bertolt Brecht, Albert Einstein, Dashiell Hammett, Frida Kahlo, and Diego Rivera, protested the position of the American government in what the French termed the American Dreyfus affair.[40] Einstein and Urey pleaded with President Harry S. Truman to pardon the Rosenbergs. In May 1951, Pablo Picasso wrote for the communist French newspaper L’Humanité: “The hours count. The minutes count. Do not let this crime against humanity take place.”[41] The all-black labor union International Longshoremen’s Association Local 968 stopped working for a day in protest.[42] Cinema artists such as Fritz Lang registered their protest.[43]
Many decades later, in 1995, as part of the release of declassified information, the public received confirmation that the Rosenbergs were indeed spies:
The Venona project was a United States counterintelligence program to decrypt messages transmitted by the intelligence agencies of the Soviet Union. Initiated when the Soviet Union was an ally of the U.S., the program continued during the Cold War when it was considered an enemy.[67] The Venona messages did not feature in the Rosenbergs’ trial, which relied instead on testimony from their collaborators, but they heavily informed the U.S. government’s overall approach to investigating and prosecuting domestic communists.[68]
In 1995, the U.S. government made public many documents decoded by the Venona project, showing Julius Rosenberg’s role as part of a productive ring of spies.[69] For example, a 1944 cable (which gives the name of Ruth Greenglass in clear text) says that Ruth’s husband David is being recruited as a spy by his sister (that is, Ethel Rosenberg) and her husband. The cable also makes clear that the sister’s husband is involved enough in espionage to have his own codename (“Antenna” and later “Liberal”).[70] Ethel did not have a codename;[26] however, KGB messages which were contained in the Venona project’s Alexander Vassiliev files, and which were not made public until 2009,[71][72] revealed that both Ethel and Julius had regular contact with at least two KGB agents and were active in recruiting both David Greenglass and Russell McNutt.[73][71][72]
Turns out, it’s really hard to prove that someone is a spy. Trying to do so anyway often makes people more paranoid, which produces more intense immune reactions and causes people to become less responsive to evidence, which then breeds more adversarial intuitions and motivates more purges.
But to be clear, a lot of the time, this is a sane response to adversarial environments. If you are a CEO appointed to lead a dysfunctional organization, it is quite plausibly the right call to get rid of basically all staff who have absorbed an adversarial culture. Just be extremely careful to not purge so hard as to only be left with a pile of competent schemers.
The third thing to try is to become unpredictable and vindictive
And ultimately, if you are in a situation where an opponent keeps trying to control your behavior and get into your OODA loop, you can always just start behaving unpredictably. If you can’t predict what you are going to do tomorrow, your opponents can’t either.
Nixon’s “madman” strategy stands as one interesting testament to this:
I call it the Madman Theory, Bob. I want the North Vietnamese to believe I’ve reached the point where I might do anything to stop the war. We’ll just slip the word to them that, “for God’s sake, you know Nixon is obsessed about communism. We can’t restrain him when he’s angry—and he has his hand on the nuclear button” and Ho Chi Minh himself will be in Paris in two days begging for peace.
Controlling an unpredictable opponent is much harder than controlling one who, in pursuit of taking optimal and sane-looking actions, ends up behaving quite predictably. Randomizing your strategies is a solution to many adversarial games. And in reality, making yourself unpredictable in what information you will integrate and which you will ignore, and where your triggers are for starting to use some real force, often gives your opponent no choice but to be more conservative, ease the pressure, or try to manipulate so much information that even randomization doesn’t save you.
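As a toy illustration of why randomizing helps (my own example, and a much simpler game than any real conflict): in a matching-pennies-style game against an adversary that predicts you from your past moves, any predictable policy gets fully exploited, while a coin-flip policy cannot be exploited at all.

```python
# Toy illustration (my own example): in matching pennies you win a point when your choice
# matches the opponent's and lose a point otherwise. An opponent that predicts your most
# common past action can fully exploit a predictable policy, but gains nothing against a
# 50/50 randomizer.
import random

random.seed(0)
ACTIONS = ("heads", "tails")

def exploiting_opponent(history):
    """Predict the player's most common past action and play the opposite, forcing a mismatch."""
    if not history:
        return random.choice(ACTIONS)
    predicted = max(ACTIONS, key=history.count)
    return "tails" if predicted == "heads" else "heads"

def average_payoff(strategy, rounds=10_000):
    history, total = [], 0
    for _ in range(rounds):
        opponent_move = exploiting_opponent(history)  # adversary moves based on your visible history
        my_move = strategy(history)
        total += 1 if my_move == opponent_move else -1
        history.append(my_move)
    return total / rounds

predictable = lambda history: "heads"                  # always the same action
randomized = lambda history: random.choice(ACTIONS)    # uniform coin flip every round

print("predictable policy:", average_payoff(predictable))  # close to -1: fully exploited
print("randomized policy: ", average_payoff(randomized))   # close to  0: unexploitable
```

Real adversarial environments are much messier than this, but the basic logic carries over: an opponent can only exploit the structure in your behavior that they can predict.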
Now, where does this leave us? Well, first of all, I think it helps explain a bunch of the world and allows us to make better predictions about how the future will develop.
But more concretely, I think it motivates a principle I hold very dear to my heart: “Do not be the kind of actor that forces other people to be paranoid”.
Paranoid people fuck up everything around them. Digging yourself out of paranoia is very hard and takes a long time. A non-trivial fraction of my life philosophy is oriented around avoiding environments that force me into paranoia and incentivizing as little paranoia as possible in the people around me.
Hopefully I will get to write more about some of the things I have learned in the future.
[1] The naive application of the Akerlof model predicts a market with zero volume! No high-quality cars (“peaches”) get traded at all, despite, of course, an enormous number of positive-sum trades being hypothetically available.
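For reference, a sketch of the standard textbook version of that zero-volume result, with illustrative numbers of my own choosing rather than anything from the post: quality is uniform, a seller values a car at its quality, and a buyer values it at one and a half times its quality.

```latex
% Illustrative numbers, not from the post: q ~ Uniform[0, 1], a seller values a car at q,
% a buyer values it at (3/2)q, and buyers observe only the asking price p, never q itself.
\[
  \text{At any price } p:\qquad
  \mathbb{E}\!\left[\,q \mid \text{car is offered}\,\right] = \frac{p}{2},
  \qquad
  \text{buyer's expected value} = \frac{3}{2}\cdot\frac{p}{2} = \frac{3p}{4} < p
  \quad \text{for all } p > 0.
\]
% Only sellers with q <= p are willing to sell, so buyers expect to overpay at every positive
% price, refuse to buy, and the market clears at zero volume, even though each car is worth
% 50% more to a buyer than to its seller.
```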
[2] Or maybe even actors much smarter and more determined than you.