I just finished this excellent book: If Anyone Builds It, Everyone Dies.

This book influenced my views on p(doom). Before reading it, I was uncertain whether AI could pose an existential risk to humanity. After reading it, I'm starting to take seriously the possibility of doom from a superintelligent AI. I'm still not sure where I'd put my own p(doom), but it's definitely non-zero.

The question I'm interested in is this:

Is there a plausible scenario in which further AI development leads to human extinction?

As the founder of a…
