Scour
🕳 LLM Vulnerabilities
Hacking LLMs, Prompt Injection
Scoured 27178 posts in 942.1 ms

Keeping you safe from mylar
flutterby.com · 38m
🔐 Hardware Security

Time to work on SSL
robertismo.com · 1d
🔍 Quickwit

Thread by @HackingButLegal on Thread Reader App
threadreaderapp.com · 1h
💧 Litestream

Rethinking Latency Denial-of-Service: Attacking the LLM Serving Framework, Not the Model
arxiv.org · 1d
💉 Prompt Injection

Fully Countering Trusting Trust through Diverse Double-Compiling (DDC) - Countering Trojan Horse attacks on Compilers
dwheeler.com · 10h
💉 Prompt Injection

BREAKING: LLM “reasoning” continues to be deeply flawed
garymarcus.substack.com · 1d · Discuss: Substack
🏆 LLM Benchmarking

On Meta-Level Adversarial Evaluations of (White-Box) Alignment Auditing
lesswrong.com · 1d
🛡️ AI Security

remote locks and distributed locks
tautik.me · 8h
🔓 Lock-Free Structures

Links 11/02/2026: Fentanylware (CheeTok) for ICE, Jimmy Lai Shows Journalism Became 'Crime' in Hong Kong
techrights.org · 5h
📰 RSS Reading Practices

AI connector for Google Calendar makes convenient malware launchpad, researchers show
theregister.com · 23h · Discuss: Hacker News
🕸️ WebAssembly System Interface

Show HN: Protect Against Prompt Injection in OpenClaw
npmjs.com · 6h · Discuss: Hacker News
💉 Prompt Injection

[TUHS] bare m4 (was BTL summmer employees)
tuhs.org · 7h · Discuss: Lobsters
⚙ Rust Macros

AgentSys: Secure and Dynamic LLM Agents Through Explicit Hierarchical Memory Management
arxiv.org · 1d
💉 Prompt Injection

How Fluid Reads Source VMs Without Breaking Anything
fluid.sh · 19h
💉 Prompt Injection

Mastering Authentication in MCP: An AI Engineer’s Comprehensive Guide
pub.towardsai.net · 2d
💉 Prompt Injection

Safety mechanisms of AI models more fragile than expected
techzine.eu · 1d
🛡️ AI Safety

Langfuse - Open Source LLM Engineering Platform
langfuse.com · 1d
🦙 Ollama

Monitor Jailbreaking: Evading Chain-of-Thought Monitoring Without
lesswrong.com · 6h
💉 Prompt Injection

Code is dying.... but not because AI writes it - because LLMs simply won't need it!
threadreaderapp.com · 5h
🪄 Prompt Engineering

LLMs Refuse High-Cost Attacks but Stay Vulnerable to Cheap, Real-World Harm
expectedharm.github.io · 1d · Discuss: Hacker News
💉 Prompt Injection