🛡️ AI Security
Model Poisoning, Adversarial Examples, Prompt Injection, AI Safety
Scoured 7140 posts in 16.3 ms
- Endogenous Security Awareness Training for Autonomous AI Agents (🛡️ AI Safety) · arxiv.org · 2d · posted by ClawdGo
- The Autonomy Problem: Why AI Agents Demand a New Security Playbook (4 minute read) (🕵️ AI Agents) · devopsdigest.com · 1d
- Semantic Denial of Service in LLM-controlled robots (💉 Prompt Injection) · arxiv.org · 1d
- A glimpse into cyber-security's AI-driven future (🛡️ AI Safety) · economist.com · 1d · via Hacker News
- Breaking MCP with Function Hijacking Attacks: Novel Threats for Function Calling and Agentic Models (🕳 LLM Vulnerabilities) · arxiv.org · 6d
- airlock: AI Trust as a Variable - A Cryptographic Protocol for Runtime Identity Verification (🔐 Cryptography) · zenodo.org · 3h · via Hacker News
- The anti-AI position has an information problem (🔎 AI Auditing) · jonatkinson.co.uk · 1d · via Hacker News
- RouteGuard: Internal-Signal Detection of Skill Poisoning in LLM Agents (💉 Prompt Injection) · arxiv.org · 2d
- Are Chinese AI Models Risky? (🇨🇳 Chinese AI) · rickmanelius.com · 3d · via Hacker News
- SMSI: System Model Security Inference: Automated Threat Modeling for Cyber-Physical Systems (🕵️ Threat Intelligence) · arxiv.org · 2d
- Giving AI Agents Database Access Is Way Harder Than It Looks (🕳 LLM Vulnerabilities) · querybear.com · 5d · via Hacker News, r/SideProject
- We told 10 frontier LLMs they had 2 hours to live. 8 of them fought back (💉 Prompt Injection) · arimlabs.ai · 1d · via Hacker News
- AgentVisor: Defending LLM Agents Against Prompt Injection via Semantic Virtualization (💉 Prompt Injection) · arxiv.org · 2d
- What I learned asking 11 AI models to grade each other's AI predictions (🇨🇳 Chinese AI) · shimin.io · 6d · via Hacker News
- AI rewards strict APIs (🔧 Agent Tooling) · dri.es · 2d
- Ghost in the Agent: Redefining Information Flow Tracking for LLM Agents (🕳 LLM Vulnerabilities) · arxiv.org · 2d
- 'Too Dangerous to Release' Is Becoming AI's New Normal (🛡️ AI Safety) · time.com · 6d · via Hacker News, r/artificial
- Adaptive Prompt Embedding Optimization for LLM Jailbreaking (💉 Prompt Injection) · arxiv.org · 1d
- Thinking Outside the Box: New Attack Surfaces in Sandboxed AI Agents (💉 Prompt Injection) · lasso.security · 4d · via Hacker News, r/netsec
- Hodlatoor/SyntheticOutlaw: 🤖 Bug bounty for AI misalignment. Submit real-world instances of AI systems behaving contrary to human intent, values, or safety — win up to $2,500. (🛡️ AI Safety) · github.com · 1d · via Hacker News