Scour
🛡️ AI Security
Model Poisoning, Adversarial Examples, Prompt Injection, AI Safety
AI Security Evaluation: How to Test Prompt Injection, Data Leakage, and Unsafe Tool Calls
🛡️ AI Safety · medium.com · 1d
Pen tests show AI security flaws far more severe than legacy software bugs
🕳 LLM Vulnerabilities · csoonline.com · 6d
RL for red teaming: training models to attack and defend themselves
📈 Growth Hacking · castform.com · 9h · Hacker News
IPI-proxy: An Intercepting Proxy for Red-Teaming Web-Browsing AI Agents Against Indirect Prompt Injection
💉 Prompt Injection · arxiv.org · 2d
AI Security Lab Hub
💉 Prompt Injection · arcanum-sec.github.io · 4d
Researchers detail safety gaps in agentic AI systems
🛡️ AI Safety · kite.kagi.com · 2d
Building the Solution Teams Need to Secure AI Against Prompt Injection
⚙️ LLMOps · techcommunity.microsoft.com · 1d
How are you handling prompt injection across multi-step agent workflows?
🧠 Context Engineering · msukhareva.substack.com · 6d · Substack
AI security is broken at runtime: Most enterprises don’t realize it yet
🤨 AI Criticism · techradar.com · 1d
AI Security Architecture: The Key to Verifiable AI
🛡️ AI Safety · malware.news · 3d
The Real AI Security Risk Isn't Data Leakage. It's What Your Agents Can Do
🛡️ AI Safety · forbes.com · 3d
Sleeper Channels and Provenance Gates: Persistent Prompt Injection in Always-on Autonomous AI Agents
🧠 Context Engineering · arxiv.org · 1d
REALISTA: Realistic Latent Adversarial Attacks that Elicit LLM Hallucinations
🏠 Local LLM Deployment · arxiv.org · 1d
Position: AI Security Policy Should Target Systems, Not Models
⚖️ AI Policy · arxiv.org · 3d
Oracle Poisoning: Corrupting Knowledge Graphs to Weaponise AI Agent Reasoning
🤖 Artificial Intelligence · arxiv.org · 3d · Hacker News
The Attacker in the Mirror: Breaking Self-Consistency in Safety via Anchored Bipolicy Self-Play
🌊 Stream Ciphers · arxiv.org · 3d
Evaluating Prompt Injection Defenses for Educational LLM Tutors: Security-Usability-Latency Trade-offs
🕳 LLM Vulnerabilities · arxiv.org · 4d
SL5 Standard for AI Security
🛡️ AI Safety · arxiv.org · 3d
Containment Verification: AI Safety Guarantees Independent of Alignment
🔒 Agentic Safety · arxiv.org · 3d
"Training robust watermarking model may hurt authentication!" Exploring and Mitigating the Identity Leakage in Robust Watermarking
💧 Digital Watermarking · arxiv.org · 3d