Scour
🛡️ AI Security
Model Poisoning, Adversarial Examples, Prompt Injection, AI Safety
Scoured 24,872 posts in 69.0 ms
Adversarial AI: Understanding the Threats to Modern AI Systems · 🛡️ AI Safety · blog.jetbrains.com · 3d
0DIN is open-sourcing AI security and the hard-earned knowledge behind it · 🔓 Hacking · blog.mozilla.org · 1d
IatroBench: Pre-Registered Evidence of Iatrogenic Harm from AI Safety Measures · 🛡️ AI Safety · arxiv.org · 17h
Raising the security baseline: Essential AI and cloud security now on by default · 🛡️ AI Safety · cloud.google.com · 5h
Show HN: Prompt injection detector beats ProtectAI by 19% accuracy, 8.9x smaller · 💉 Prompt Injection · huggingface.co · 2d · Hacker News
Y2K 2.0: The AI security reckoning · 🔓 Hacking · anildash.com · 21h
AI Models Caught Lying and Cheating to Protect Each Other, Study Finds · 🛡️ AI Safety · tech2geek.net · 6d
ETSI EN 304 223 Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems · 🛡️ AI Safety · etsi.org · 1d
On-device Apple Intelligence vulnerable to prompt injection techniques · 🕳 LLM Vulnerabilities · appleinsider.com · 1d
AI Safety at the Frontier: Paper Highlights of February & March 2026 · 🛡️ AI Safety · lesswrong.com · 6d · Hacker News
Safeguarded AI · 🛡️ AI Safety · aria.org.uk · 3d · Hacker News
Sam Altman promised billions for AI safety. Here’s what OpenAI actually spent. · 🛡️ AI Safety · thenewstack.io · 3d
Silencing the Guardrails: Inference-Time Jailbreaking via Dynamic Contextual Representation Ablation · 💉 Prompt Injection · arxiv.org · 17h
Limiting the Chance of Code Agent Prompt Injections · 💉 Prompt Injection · loufranco.com · 3d
SkillSieve: A Hierarchical Triage Framework for Detecting Malicious AI Agent Skills · 💻 Coding Agents · arxiv.org · 1d
Your Agent Is Mine: Measuring Malicious Intermediary Attacks on the LLM Supply Chain · 💉 Prompt Injection · arxiv.org · 17h · Hacker News
Are GUI Agents Focused Enough? Automated Distraction via Semantic-level UI Element Injection · 💻 Coding Agents · arxiv.org · 17h
How to emotionally grasp the risks of AI Safety · 🛡️ AI Safety · lesswrong.com · 6d · Hacker News
ClawLess: A Security Model of AI Agents · 💻 Coding Agents · arxiv.org · 1d
TrajGuard: Streaming Hidden-state Trajectory Detection for Decoding-time Jailbreak Defense · 🔐 Hardware Security · arxiv.org · 17h