Published on September 23, 2025 2:15 AM GMT

When I hear a lot of people talk about Slow Takeoff, many of them seem to be imagining mostly the early part of that takeoff – the part that feels human-comprehensible. They’re still not imagining superintelligence in the limit.

There are some genres of Slow Takeoff that culminate in somebody “leveraging controlled AI to help fully solve the alignment problem, eventually get fully aligned superintelligence, and then end the acute risk period.” 

But the sort of person I’m thinking of, for this blogpost, usually doesn’t seem to have a concrete visualization of something that could plausibly end the period where anyone could choose to deploy uncontrolled superintelligence. They tend to not like Coherent Extrapolated Volition…
