Scour
💾 CPU Caching
L1/L2/L3, Cache Lines, False Sharing, Alignment
Scoured 29116 posts in 70.4 ms
Pruning Unsafe Tickets: A Resource-Efficient Framework for Safer and More Robust LLMs
🕳 LLM Vulnerabilities · arxiv.org · 5d
Machine learning-driven alignment architecture of heterogeneous data with transient varying semantics
🏗️ LLM Infrastructure · nature.com · 2d
easyaligner: Forced Alignment Made Easy
🔤 Tokenization · kb-labb.github.io · 6d
Takes on Automating Alignment
👨‍💻 AI Coding · lesswrong.com · 4d
polish by pbakaus/impeccable
🎨 Design Tokens · skills.sh · 3d
CAP: Controllable Alignment Prompting for Unlearning in LLMs
🪄 Prompt Engineering · arxiv.org · 1d
Alignment by Default?
🛡️ AI Safety · blog.cosmos-institute.org · 4d · Hacker News
ReCAPA: Hierarchical Predictive Correction to Mitigate Cascading Failures
🔄 Incremental Computation · arxiv.org · 1d
Monday AI Radar #22
🆕 New AI · lesswrong.com · 3d
Value-Conflict Diagnostics Reveal Widespread Alignment Faking in Language Models
🛡️ AI Safety · arxiv.org · 1d
From Noise to Intent: Anchoring Generative VLA Policies with Residual Bridges
🧠 LLM Inference · arxiv.org · 1d
Alignment has a Fantasia Problem
🪄 Prompt Engineering · arxiv.org · 1d
MGDA-Decoupled: Geometry-Aware Multi-Objective Optimisation for DPO-based LLM Alignment
⚡ PGO · arxiv.org · 2d
Sensitivity Uncertainty Alignment in Large Language Models
🔤 Tokenization · arxiv.org · 1d
VG-CoT: Towards Trustworthy Visual Reasoning via Grounded Chain-of-Thought
✨ Gemini · arxiv.org · 1d
C-Mining: Unsupervised Discovery of Seeds for Cultural Data Synthesis via Geometric Misalignment
🐘 pgvector · arxiv.org · 5d
ONOTE: Benchmarking Omnimodal Notation Processing for Expert-level Music Intelligence
✨ Gemini · arxiv.org · 2d
Harmful Intent as a Geometrically Recoverable Feature of LLM Residual Streams
🛡️ AI Safety · arxiv.org · 3d
Continual Safety Alignment via Gradient-Based Sample Selection
🛡️ AI Safety · arxiv.org · 4d
When Choices Become Risks: Safety Failures of Large Language Models under Multiple-Choice Constraints
🛡️ AI Safety · arxiv.org · 4d