Scour
🕵️ Investigative OSINT (Specific): Bellingcat Methods, Open Source Investigation, Verification
Scoured 200,838 posts in 26.2 ms
MACAA: Belief-Revision Multi-Agent Reasoning for Open-World Code Authorship Verification
💻 Coding Agents · arxiv.org · 2d

Harness-first agentic SDLC: How OpenSearch builds software using its own search engine
💻 Coding Agents · opensearch.org · 4h

The Goblin in the Machine: What OpenAI’s “No-Pigeon Rule” Teaches Lawyers About AI Hallucinations
🛡️ AI Security · malware.news · 3d

Designing AI claims for Verification
✅ Document Verification · lesswrong.com · 4h

Building Safe LangChain Agents with Scope Verification
🧠 Context Engineering · scopegate.ai · 2d · DEV

Autonomous Systems & Reasoning Research
⚙️ AI Automation · cli.narelabs.com · 6d · Hacker News

OCR can extract a fake document perfectly. What should catch the fraud?
✅ Document Verification · turbolens.io · 1d · r/SideProject

Legal Document Plagiarism Detection: Contracts, NDAs, and Intellectual Property Briefs
🔍 AI Detection · hub.paper-checker.com · 2d

Automatic Detection of Reference Counting Bugs in Linux Kernel Drivers
🔍 Memory Profilers · arxiv.org · 23h

From Articles to Premises: Building PrimeFacts, an Extraction Methodology and Resource for Fact-Checking Evidence
📚 Digital Humanities · arxiv.org · 6d

Quantifiable Uncertainty: A Stochastic Consensus Multi-Agent RAG Framework for Robust Malware Detection
🔍 Detection Engineering · arxiv.org · 2d

Field-Localized Forgery Detection for Digital Identity Documents
🖼️ JPEG Forensics · arxiv.org · 2d

VNN-LIB 2.0: Rigorous Foundations for Neural Network Verification
📐 Vector Databases · arxiv.org · 3d

Arcane: An Assertion Reduction Framework through Semantic Clustering and MCTS-Guided Rule Exploring
✅ Format Verification · arxiv.org · 2d

Chain of Risk: Safety Failures in Large Reasoning Models and Mitigation via Adaptive Multi-Principle Steering
🛡️ AI Safety · arxiv.org · 6d

When Agents Overtrust Environmental Evidence: An Extensible Agentic Framework for Benchmarking Evidence-Grounding Defects in LLM Agents
🧠 Context Engineering · arxiv.org · 2d

Towards Backdoor-Based Ownership Verification for Vision-Language-Action Models
🛡️ AI Security · arxiv.org · 2d

CIVeX: Causal Intervention Verification for Language Agents
✨ Effect Handlers · arxiv.org · 2d

Research on Security Enhancement Methods for Adversarial Robust Large Language Model Intelligent Agents for Medical Decision-Making Tasks
🛡️ AI Security · arxiv.org · 2d

Containment Verification: AI Safety Guarantees Independent of Alignment
🔒 Agentic Safety · arxiv.org · 2d