Scour
💉 Prompt Injection: Prompt injection attacks on LLMs
Scoured 7,397 posts in 15.4 ms
AgentVisor: Defending LLM Agents Against Prompt Injection via Semantic Virtualization · 🕳 LLM Vulnerabilities · arxiv.org · 4d
airlock: AI Trust as a Variable - A Cryptographic Protocol for Runtime Identity Verification · 🛡️ AI Security · zenodo.org · 1d · Hacker News
ZetaLib/The Gay Jailbreak/The Gay Jailbreak.md at main · 📟 Terminals · github.com · 17h · Hacker News, r/ChatGPT
The Agentic AI Security Company · 🔧 Agent Tooling · straiker.ai · 5d · Hacker News
Indirect Prompt Injection in the Wild: An Empirical Study of Prevalence, Techniques, and Objectives · 🕳 LLM Vulnerabilities · arxiv.org · 1d
Jailbreaking a robot vacuum to run Tailscale and Valetudo · 🔌 Embedded Systems · tailscale.com · 6d · Hacker News
FlashRT: Towards Computationally and Memory Efficient Red-Teaming for Prompt Injection and Knowledge Corruption · 🕳 LLM Vulnerabilities · arxiv.org · 1d
SafeReview: Defending LLM-based Review Systems Against Adversarial Hidden Prompts · 🕳 LLM Vulnerabilities · arxiv.org · 2d
One Word at a Time: Incremental Completion Decomposition Breaks LLM Safety · 🤖 LLM · arxiv.org · 2d
Evaluation of Prompt Injection Defenses in Large Language Models · 🛡️ AI Security · arxiv.org · 4d
Latent Adversarial Detection: Adaptive Probing of LLM Activations for Multi-Turn Attack Detection · 🛡️ AI Security · arxiv.org · 1d
Adaptive Prompt Embedding Optimization for LLM Jailbreaking · 🪄 Prompt Engineering · arxiv.org · 3d
Dynamic Adversarial Fine-Tuning Reorganizes Refusal Geometry · 🛡️ AI Security · arxiv.org · 1d
RouteGuard: Internal-Signal Detection of Skill Poisoning in LLM Agents · 🕳 LLM Vulnerabilities · arxiv.org · 4d
SnapGuard: Lightweight Prompt Injection Detection for Screenshot-Based Web Agents · 🕷️ Web Crawling · arxiv.org · 3d
Mechanistic Steering of LLMs Reveals Layer-wise Feature Vulnerabilities in Adversarial Settings · 🛡️ AI Security · arxiv.org · 4d
Ghost in the Agent: Redefining Information Flow Tracking for LLM Agents · 🛡️ AI Security · arxiv.org · 4d
From Stateless Queries to Autonomous Actions: A Layered Security Framework for Agentic AI Systems · 🛡️ AI Security · arxiv.org · 4d
Cross-Lingual Jailbreak Detection via Semantic Codebooks · 🔐 Hardware Security · arxiv.org · 3d
One Perturbation, Two Failure Modes: Probing VLM Safety via Embedding-Guided Typographic Perturbations · 🪝 eBPF · arxiv.org · 3d