💾 CPU Caching
L1/L2/L3, Cache Lines, False Sharing, Alignment
Scoured 28955 posts in 61.5 ms
When Choices Become Risks: Safety Failures of Large Language Models under Multiple-Choice Constraints · 🛡️ AI Safety · arxiv.org · 4d
On the Rejection Criterion for Proxy-based Test-time Alignment · 🚩 CTF Writeups · arxiv.org · 5d
SafeAnchor: Preventing Cumulative Safety Erosion in Continual Domain Adaptation of Large Language Models · 🛡️ AI Safety · arxiv.org · 4d
Evolutionary Negative Module Pruning for Better LoRA Merging · 🧠 LLM Inference · arxiv.org · 4d
Demystifying the Unreasonable Effectiveness of Online Alignment Methods · 🔍 Vector Search Algorithms · arxiv.org · 4d
Spec2Cov: An Agentic Framework for Code Coverage Closure of Digital Hardware Designs · 💻 Coding Agents · arxiv.org · 5d
CGCMA: Conditionally-Gated Cross-Modal Attention for Event-Conditioned Asynchronous Fusion · 🧠 LLM Inference · arxiv.org · 4d
An Empirical Study of Multi-Generation Sampling for Jailbreak Detection in Large Language Models · 🔤 Tokenization · arxiv.org · 3d
Guardrails in Logit Space: Safety Token Regularization for LLM Alignment · 🛡️ AI Safety · arxiv.org · 4d
Self-Improving Tabular Language Models via Iterative Group Alignment · 📊 Vector Databases · arxiv.org · 3d
Towards Robust Endogenous Reasoning: Unifying Drift Adaptation in Non-Stationary Tuning · 🧠 LLM Inference · arxiv.org · 5d
WISV: Wireless-Informed Semantic Verification for Distributed Speculative Decoding in Device-Edge LLM Inference · 🧠 LLM Inference · arxiv.org · 4d
A Systematic Study of Training-Free Methods for Trustworthy Large Language Models · 🧠 LLM Inference · arxiv.org · 5d
Concept-wise Attention for Fine-grained Concept Bottleneck Models · 🧠 LLM Inference · arxiv.org · 5d
Dual Alignment Between Language Model Layers and Human Sentence Processing · 🔤 Tokenization · arxiv.org · 4d
Cat-DPO: Category-Adaptive Safety Alignment · 🛡️ AI Safety · arxiv.org · 4d
S2H-DPO: Hardness-Aware Preference Optimization for Vision-Language Models · 📊 Embeddings · arxiv.org · 4d
Bridging the Gap between User Intent and LLM: A Requirement Alignment Approach for Code Generation · 🪄 Prompt Engineering · arxiv.org · 5d
SaFeR-Steer: Evolving Multi-Turn MLLMs via Synthetic Bootstrapping and Feedback Dynamics · 🛡️ AI Safety · arxiv.org · 4d
Into the Gray Zone: Domain Contexts Can Blur LLM Safety Boundaries · 🛡️ AI Safety · arxiv.org · 5d