Scour
🌀 Hallucination
Scoured 200026 posts in 45.4 ms
On Hallucinations in Inverse Problems: Fundamental Limits and Provable Assessment Methods
🔮 Perplexity · arxiv.org · 10h

Fogness
🔮 Perplexity · reeseappalachiantrail.bearblog.dev · 6d

Perceptions
🌍 World Models · 3quarksdaily.com · 2d

The Mirage in the Machine: Decoding LLM Hallucinations
🌍 World Models · medium.com · 4d

Colored Shadow Penumbra
🎨 Rendering · chosker.github.io · 6d · Hacker News

Dual-Pathway Circuits of Object Hallucination in Vision-Language Models
🖼️ Image Generation · arxiv.org · 10h

Rethinking Evaluation for LLM Hallucination Detection: A Desiderata, A New RAG-based Benchmark, New Insights
🏠 Local LLM Deployment · arxiv.org · 1d

Where Does Reasoning Break? Step-Level Hallucination Detection via Hidden-State Transport Geometry
🔮 Perplexity · arxiv.org · 10h

When Looking Is Not Enough: Visual Attention Structure Reveals Hallucination in MLLMs
🌍 World Models · arxiv.org · 1d

PanoWorld: Towards Spatial Supersensing in 360° Panorama World
✨ Gaussian Splatting · arxiv.org · 10h

Max-pooling Network Revisited: Analyzing the Role of Semantic Probability in Multiple Instance Learning for Hallucination Detection
🤖 Machine Learning · arxiv.org · 2d

H-POPE: Hierarchical Polling-based Probing Evaluation of Hallucinations in Large Vision-Language Models
🎨 AI Image Generation · arxiv.org · 2d

Hallucination Detection via Activations of Open-Weight Proxy Analyzers
🔍 AI Detection · arxiv.org · 3d

Sanity Checks for Long-Form Hallucination Detection
🔎 AI Search · arxiv.org · 2d

Scalable Token-Level Hallucination Detection in Large Language Models
🤖 LLM · arxiv.org · 1d

Hallucination as an Anomaly: Dynamic Intervention via Probabilistic Circuits
🤖 Anthropic AI · arxiv.org · 6d

Instruction Lens Score: Your Instruction Contributes a Powerful Object Hallucination Detector for Multimodal Large Language Models
🌍 World Models · arxiv.org · 1d

Do Benchmarks Underestimate LLM Performance? Evaluating Hallucination Detection With LLM-First Human-Adjudicated Assessment
🤖 LLM · arxiv.org · 2d

Attractor Geometry of Transformer Memory: From Conflict Arbitration to Confident Hallucination
🌍 World Models · arxiv.org · 6d

Noise-Started One-Step Real-World Super-Resolution via LR-Conditioned SplitMeanFlow and GAN Refinement
📈 AI Upscaling · arxiv.org · 2d