Published on November 10, 2025 4:53 AM GMT
In the discussion of AI safety and the existential risk that ASI poses to humanity, I think timelines aren’t the right framing. Or at least, they often distract from the critical point: it doesn’t matter whether ASI arrives in 5 years’ time or in 20 years’ time, it only matters that it arrives during your lifetime[1]. The risks posed by ASI are completely independent of whether it arrives during this hype cycle of AI, or whether there’s another AI winter, progress stalls for 10 years, and ASI is built only after that winter has passed. If you are convinced that ASI is a catastrophic global risk to humanity, the timelines are inconsequential. The only things that matter are that 1. we have no idea how we could make something smarter than ourselves without it also being an existential threat, and 2. we can start making progress on this field of research today.
So ultimately, I’m uncertain about whether we’re getting ASI in 2 years or 20 or 40. But it seems almost certain that we’ll be able to build ASI within my lifetime[2]. And if that’s the case, nothing else really matters besides making sure that humanity realises the benefits of ASI equally, without it also killing us all through our short-sighted greed.
[1] Or the lifetime of the people you care about, which might include all future humans.
[2] Concretely, before 2080.