🌋 Existential Risk Research
x-risk, CSER, FHI, catastrophic risk, civilizational risk
Scoured 7136 posts in 15.7 ms
Hot Research Topics in AI and ML in 2026 and Their Philosophical Connections
🕵️ AI Agents · omseeth.github.io · 5d · Hacker News

Open Problems in Frontier AI Risk Management
🛡️ AI Safety · arxiv.org · 20h

The Gate Test: Why Human-in-the-Loop Fails and How to Fix It
🤝 Human-AI Collaboration · jitera.com · 3d · Hacker News

Why the Future Doesn't Need Us (Bill Joy, 2000)
⚠️ Existential Risk · web.archive.org · 4d · Lobsters, Hacker News

Revisiting the Outcome Distortion Complex (5 minute read)
🌱 Bootstrapping · investing101.substack.com · 4d · Substack

Risk Models as Mediating Artifacts: A Postphenomenological Analysis of the CIIM Framework in Cybersecurity Practice
⚠️ Existential Risk · arxiv.org · 2d

How to Keep Your Brain Sharp: A Practical Playbook Beyond the Basics
⚠️ Existential Risk · tim.blog · 6d · Hacker News

The Ethical Knowledge Gap: Dispersed Knowledge, Sensemaking Failures, and Epistemic Dependence
🤔 Philosophy of Tech · arxiv.org · 2d

Children's Online Safety Risks and Ethical Considerations in XR Games
⚠️ Existential Risk · arxiv.org · 2d

Evaluating whether AI models would sabotage AI safety research
🛡️ AI Safety · arxiv.org · 2d

TSAssistant: A Human-in-the-Loop Agentic Framework for Automated Target Safety Assessment
🕹️ Agentic AI · arxiv.org · 2d

RADIANT-LLM: an Agentic Retrieval Augmented Generation Framework for Reliable Decision Support in Safety-Critical Nuclear Engineering
📚 RAG · arxiv.org · 2d

The Wisdom of the Crowd and Higher-Order Beliefs
♟️ Game Theory · arxiv.org · 1d

Plausible but Wrong: A case study on Agentic Failures in Astrophysical Workflows
🕵️ AI Agents · arxiv.org · 1d

Analyzing LLM Reasoning to Uncover Mental Health Stigma
💭 Reasoning Models · arxiv.org · 1d

One Size Fits None: Heuristic Collapse in LLM Investment Advice
🪄 Prompt Engineering · arxiv.org · 2d

What People See (and Miss) About Generative AI Risks: Perceptions of Failures, Risks, and Who Should Address Them
🛡️ AI Safety · arxiv.org · 3d

Designing escalation criteria for international AI incident response: criteria, triggers, and thresholds
⚖️ AI Governance · arxiv.org · 2d

How Researchers Navigate Accountability, Transparency, and Trust When Using AI Tools in Early-Stage Research: A Think-Aloud Study
📰 Content Curation · arxiv.org · 2d

Propensity Inference: Environmental Contributors to LLM Behaviour
🤖 LLM · arxiv.org · 6d