Published on November 3, 2025 8:49 AM GMT
I applaud Eliezer for trying to make himself redundant, and think it’s something every intellectually successful person should spend some time and effort on. I’ve been trying to understand my own “edge” or “moat”, i.e., the cognitive traits responsible for whatever success I’ve had, in the hope of finding a way to reproduce it in others. But I’m having trouble understanding a part of it, and will try to describe my puzzle here. For context, here’s an earlier EAF comment explaining my history/background and what I do understand about how my cognition differs from others.[1]
More Background
In terms of raw intelligence, I think I’m smart but not world-class. My SAT was only 1440, 99th percentile at the time, or equivalent to about 135 IQ. (Intuitively this may be an underestimate, and I’m probably closer to the 99.9th percentile in IQ.) I remember struggling to learn the GNFS (General Number Field Sieve) factoring algorithm, and then meeting another intern at a conference who had not only mastered it in the same 3 months that I had, but was presenting an improvement on the state of the art. (It generally seemed like cryptography research was full of people much smarter than myself.) I also considered myself lazy, or not particularly hardworking compared to many of my peers, so didn’t have especially high expectations for myself.
(An illustration of this is that when I, as a freshman CS major, became worried about eventual AI takeover after reading Vernor Vinge’s A Fire Upon the Deep, I thought I wasn’t smart or conscientious enough to contribute to a core field like AI safety, i.e., that there would eventually be plenty of people much smarter and harder working than me contributing to it. As a result I didn’t even take any AI courses, but instead decided to focus my education and career on applied cryptography, as a way to contribute to reducing AI x-risk from the periphery, by increasing overall network security.)
The Puzzle
It seems safe to say that I exceeded[2] my own expectations, and looking back, the main thing that appears to have happened is that I had exceptional intuitions about which problems/fields/approaches were important and promising, and then used my high but not world-class intelligence to pick off some low-hanging fruit or stake out positions destined to become popular later. Others ignored these ideas for a long time, even after I published them. In several cases they were ignored for so long that I had given up hope of getting significant validation or positive feedback, until they were eventually rediscovered and/or made popular by others.
The questions that currently puzzle me:
- Do I (or did I) have a real cognitive ability, or is there a non-cognitive explanation, or just luck? (One hypothesis that’s hard to rule out but not very productive is that I’m in a game or simulation.)
 - If I do, how does it work and why is it so rare? It seems hard to explain using anything we know from cognitive science. Standard explanations for good intuitions include that they’re distilled from extensive prior experience or reasoning, but I moved from field to field and as a result was often a newcomer.
 - Not only is it rare, but there seems to be a surprisingly large gap between my intuitions and the next closest person’s. For example, I’ve been talking about how philosophical problems are likely to be a bottleneck for AI alignment/x-safety for more than 2 decades, while until very recently others have either ignored this line of thought, or think they have some ready solution for metaphilosophy or AI philosophical competence (solutions that they either don’t write down in enough detail for me to evaluate, or that just don’t seem very good to me). Similarly, with b-money, my pre-LW proto-UDT ideas, and my early position that stopping AI development and increasing human intelligence should be plan A, I was intellectually almost completely alone for many years.[3]
 - Are there others who could make a similar claim of having exceptionally good and hard to explain intuitions, but have/had different interests from me, so I’ve never heard of them?
 
A Plausible Answer?
It occurs to me as I’m writing this that maybe what I have (or had) is not exceptionally good intuitions, but good judgment that comes from a relatively high baseline of reasoning ability and knowledge, buffed by a lack of the usual cognitive distortions: specifically overconfidence (which leads to a tendency to latch onto the first seemingly good idea one thinks of, instead of being self-skeptical and trying hard to find flaws in one’s own ideas) and the institutional pressures/incentives that come with one’s employment.
My self-skepticism probably came from my early career in cryptography, where often the only way to minimize the risk of public humiliation is to scrupulously examine one’s own proposals for potential flaws, and overconfidence is quickly punished. Security proofs are often not possible, or are themselves potentially flawed, e.g., due to the use of wrong assumptions or models. Also, the flaws are often extremely subtle and difficult to find, yet hard to deny once pointed out, further incentivizing self-skepticism and scrutiny.
My laziness may have paradoxically helped, by causing me to avoid joining the usual institutions that someone with my interests might have joined (e.g., academia and other research institutes), and to instead pursue a “pressure-free” life of thinking about whatever I want to think about, and saying whatever I want to say.
(This life probably has its own cognitive distortions, e.g., related to status games that people play in online discussion forums, but perhaps they’re different enough from the usual cognitive distortions that I was able to see a bunch of blind spots that other people couldn’t see.)
Re-reading my 2-year-old EAF comment (copied as footnote [1] below), I had already mentioned my self-skepticism and financial/organizational independence as factors in my intellectual success, but apparently still felt like there was a puzzle to be explained. Perhaps the main realization/insight of this post is that the effect size from a combination of these two factors could be large enough to explain/constitute all or most of my “edge”, and there may not be a further mystery of “exceptionally good intuitions” that needs to be explained.
I’ll probably keep thinking about this topic, and welcome any thoughts or perspectives from others. It’s also not quite clear what practical advice to draw from this, assuming my “plausible answer” is true. It seems impractical to recommend that someone spend a few years in cryptography, but I’m not sure if anything less onerous than that would have a similar effect, nor can I say with any confidence that even such experience would produce the same kind of general and deep-seated self-skepticism that it apparently did in me. Being financially/organizationally independent also seems impractical or too costly for most people to seriously pursue. I would welcome any suggestions on this front (of practical advice) as well.
One implication that occurs to me is that if the advantages of these cognitive traits accumulate multiplicatively (as they seem to), then the cost of gaining the last piece of the puzzle might be well worth paying for someone who already has the others. E.g., if someone already has a >99th percentile IQ, a wide-ranging intellectual background and interests, and one of self-skepticism or independence, then the marginal value of gaining the remaining trait might be very high and hence worth its cost.
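To make the multiplicative intuition concrete, here’s a toy sketch in Python. The factor values are entirely made up; the point is only structural: under a product model, the value of the last missing trait scales with everything you already have, whereas under an additive model it doesn’t.

```python
# Toy model, not a claim about real effect sizes: all factor values
# below are invented purely for illustration.
def output(factors):
    """Hypothetical research output under a multiplicative trait model."""
    prod = 1.0
    for v in factors.values():
        prod *= v
    return prod

traits = {
    "raw_intelligence": 2.0,  # e.g., >99th percentile IQ
    "breadth": 1.5,           # wide-ranging background and interests
    "self_skepticism": 1.8,
    "independence": 1.7,
}

with_all = output(traits)
without_last = output({k: v for k, v in traits.items() if k != "independence"})
print(round(with_all / without_last, 2))  # 1.7: the last trait multiplies all the others
```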
A flip side of this analysis is that the detrimental effects of the aforementioned cognitive distortions might be much higher than is usually supposed or realized, perhaps sometimes causing multi-year or even multi-decade delays in important approaches and conclusions, delays that others couldn’t overcome even with significant IQ advantages over me. This may be a crucial strategic consideration, e.g., implying that the effort to reduce x-risks by genetically increasing human intelligence may be insufficient without other concomitant efforts to reduce such distortions.
[1] Copying here for completeness/archival purposes:
I thought about this and wrote down some life events/decisions that probably contributed to becoming who I am today.
- Immigrating to the US at age 10 knowing no English. My social skills deteriorated while learning the language, which along with a lack of cultural knowledge made it hard to make friends during my teenage and college years. This gave me a lot of free time, which I filled by reading fiction and non-fiction, programming, and developing intellectual interests.
 - Was heavily indoctrinated with Communist propaganda while in China, but leaving meant I then had no viable moral/philosophical/political foundations. Parents were too busy building careers as new immigrants and didn’t try to teach me values/traditions. So I had a lot of questions that I didn’t have ready answers to, which perhaps contributed to my intense interest in philosophy (ETA: and economics and game theory).
 - Had an initial career in cryptography, but found it a struggle to compete with other researchers on purely math/technical skills. Realized that my comparative advantage was in more conceptual work. Crypto also taught me to be skeptical of my own and other people’s ideas.
 - Had a bad initial experience with academic research (received nonsensical peer review when submitting a paper to a conference) so avoided going that route. Tried various ways to become financially independent, and managed to “retire” in my late 20s to do independent research as a hobby.
 
A lot of these can’t really be imitated by others (e.g., I can’t recommend people avoid making friends in order to have more free time for intellectual interests). But here is some practical advice I can think of:
- Try to rethink what your comparative advantage really is.
 - I think humanity really needs to make faster philosophical progress, so try your hand at that even if you think of yourself as more of a technical person. Same may be true for solving social/coordination problems. (But see next item.)
 - Somehow develop a healthy dose of self-skepticism so that you don’t end up wasting people’s time and attention arguing for ideas that aren’t actually very good.
 - It may be worth keeping an eye out for opportunities to “get rich quick” so you can do self-supported independent research. (Which allows you to research topics that don’t have legible justifications or are otherwise hard to get funding for, and pivot quickly as the landscape and your comparative advantage both change over time.)
 
ETA: Oh, here’s a recent LW post where I talked about how I arrived at my current set of research interests, which may also be of interest to you.
[2] Copying my main accomplishments here:
- Created the first general-purpose open-source cryptography programming library (Crypto++, 1995), motivated by AI risk and what’s now called “defensive acceleration”.
 - Published one of the first descriptions of a cryptocurrency based on a distributed public ledger (b-money, 1998), predating Bitcoin.
 - Proposed UDT, combining the ideas of updatelessness, policy selection, and evaluating consequences using logical conditionals. (A toy sketch of the first two ideas follows this list.)
 - First to argue for pausing AI development based on the technical difficulty of ensuring AI x-safety (SL4 2004, LW 2011).
 - Identified current and future philosophical difficulties as core AI x-safety bottlenecks, potentially insurmountable by human researchers, and advocated for research into metaphilosophy and AI philosophical competence as possible solutions.
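For concreteness, here is a minimal toy sketch (in Python) of the updatelessness and policy-selection components only, using the standard counterfactual mugging example with illustrative payoffs; the logical-conditionals component of UDT is not captured here.

```python
import itertools

# Minimal sketch of updateless policy selection, not a full UDT:
# rather than updating on its observation and then choosing an action,
# the agent picks the observation->action policy that maximizes expected
# utility over all possible worlds, evaluated before observing anything.

OBSERVATIONS = ["heads", "tails"]
ACTIONS = ["pay", "refuse"]

def utility(world, policy):
    """Counterfactual mugging payoffs (illustrative numbers)."""
    if world == "tails":
        # Omega asks the agent for $100 on tails.
        return -100 if policy["tails"] == "pay" else 0
    # On heads, Omega pays $10,000 iff the agent's policy would pay on tails.
    return 10_000 if policy["tails"] == "pay" else 0

# Enumerate every observation->action mapping (every possible policy).
policies = [dict(zip(OBSERVATIONS, acts))
            for acts in itertools.product(ACTIONS, repeat=len(OBSERVATIONS))]

def expected_utility(policy):
    # Fair coin: average over worlds without conditioning on the observation.
    return sum(0.5 * utility(world, policy) for world in OBSERVATIONS)

best = max(policies, key=expected_utility)
print(best["tails"])  # "pay": the updateless policy gives up $100 on tails,
                      # because that very disposition earns $10,000 on heads
```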
 
[3] With the notable exceptions of Nick Szabo, who invented his BitGold at nearly the same time as b-money; Cypherpunks, who thought b-money was interesting/promising but didn’t spend much effort developing it further; and Hal Finney, who perhaps paid the most attention to my ideas pre-LW, including by developing RPOW, trying to understand my early decision theory ideas, and writing up UDASSA in a publicly presentable form.
 