Published on February 3, 2026 1:22 PM GMT
Today’s humanity faces many high-stakes and even existential challenges; many of the largest are generated or exacerbated by AI. Meanwhile, humans individually and humanity collectively appear distressingly underequipped.
Lots of folks around here naturally recognise that this implies a general strategy: make humans individually — and humanity collectively — better able to solve problems. Very good! (Complementary strategies look like: make progress directly, raise awareness of the challenges, recruit problem solvers, …)
One popular approach is to ‘raise the sanity waterline’ in the most old-school and traditional way: have a community of best practice, exemplify and proselytise, make people wiser one by one and society wiser by virtue of that. There’ve been some successes, not least the existence of this forum and some of its membership.

Another popular approach is to imagine augmenting ourselves in the most futuristic and radical ways: genetic engineering, selective breeding, brain-augmenting implants, brain emulation. Go for it, I suppose (mindful of the potential backfires and hazards). But these probably won’t pan out on what look like the necessary timelines.

There is a middle ground! Use tech to uplift ourselves, yes — but don’t wait for medical marvels and wholesale self-reauthorship. Just use the building blocks we have, anticipate the pieces we might have soon, and address our individual and collective shortcomings one low-hanging fruit at a time.[1]

How to generate useful ideas in human reasoning
The most exciting part is that we’ve got some nifty new building blocks to play with: big data, big compute, ML, and (most novel of all) foundation models and limited agentic AI.

One place people fall down here is getting locked into asking: ‘OK, what can I usefully ask this AI to do?’ Sometimes this is helpful. But usually it misses the majority of the design space: agentic form factors are only a very narrow slice of what we can do with technology, and for many purposes they’re not even especially desirable.

Think about human reasoning. ‘Human’ as in individuals, groups, teams, society, humanity at large. ‘Reasoning’ as in the full decision-making cycle, from sensing and understanding through to planning and acting, including acting together.

I like to first ask: ‘What human reasoning activities are in bad shape?’ Think of a particular audience with either the scale or the special influence to make a difference (this can include ‘the general public’), and the deficits they have in these reasoning activities. Now ask: ‘What kinds of software[2] might help and encourage people to do those better?’

Finding flaws and avoiding backfire

Think seriously about backfire: we don’t want to differentially enable bad human actors or rogue AI to reason and coordinate! As Richard Rumelt, author of Good Strategy/Bad Strategy, observes: ‘The idea that coordination, by itself, can be a source of advantage is a very deep principle.’ Coordination’s dark side is collusion, including cartels, oligarchy, and concentration of power, in imaginable extreme cases cutting out most or even all humans.

Similarly, epistemic advantage (in foresight and strategy, say) can be parlayed into resource or influence advantage. If those can in turn be converted into greater epistemic advantage (by employing compute for epistemic attacks or for further private epistemic advancement) without commensurate counterweights or defences, this could be quite problematic.

How to think about these backfire principles in general, and the considerations for or against particular candidate projects, are among the pieces I think this forum could be especially good at. Part of it is about choosing distribution strategies which reduce misuse surface area (or provide antidotes), and part is about preferring tech which asymmetrically supports (and perhaps encourages) ‘good’ use and behaviour.

Do it

FLF’s fellows, and I and others, have been doing some of this exploration recently. Stay tuned for more. Meanwhile, join in! We’re early in a critical period where much is up for grabs, and what we build now might help shape and inform the choices humanity makes about its future (or whether it makes much choice at all). Try things, see what kinds of tools earn the attention and adoption that matters, and share what you learn. Consider principles to apply, especially for minimising backfire risks, and share particular considerations for or against certain kinds of tech and audience targets.

[1] A close relative of this strategy is cyborgism. I might contrast what I’m centrally describing as being more outward-looking, asking how we can uplift the most important sensemaking and wisdom apparatus of humanity in general, whereas cyborgism perhaps looks more like a bet on becoming the uplifted paragons (optionally thence, and thereby, saving the world). I’d say these are complementary on the whole. ↩︎

[2] This is better than asking ‘What kinds of AI…’. Software is the general, capability-unlocking and -enhancing artefact. AI components and form factors are novel, powerful, sometimes indispensable building blocks in our inventory to compose software capabilities out of. ↩︎