Q&A

An interview with Holly Elmore, a leading proponent of a non-technical solution to the existential risks of AI technology.

26 Dec 2025 — 16 min read

Since AI was first conceived of as a serious technology, some people have wondered whether it might bring about the end of humanity. For some, this concern was simply logical: humans have caused catastrophes throughout history, and powerful AI, which would not be bounded in the same ways, might therefore pose even worse dangers.

In recent years, as AI's capabilities have grown, one might have expected its existential risks to become correspondingly more obvious. And in some ways, they have. It is increasingly easy to see how AI could pose severe risks now that i…
