Scour
👁️ Attention Optimization
Flash Attention, Memory Efficient, Sparse Attention, Transformers
Scoured 112,514 posts in 823.5 ms
LUCID: Attention with Preconditioned Representations
arxiv.org · 2d · 🧩 Attention Kernels

Learning to Forget Attention: Memory Consolidation for Adaptive Compute Reduction
arxiv.org · 1d · ⚡ Flash Attention

Haar Cascades to YOLO: Face Detection Migration Guide
dev.to · 2h · Discuss: DEV · 🧩 Attention Kernels

Ultrafast visual perception beyond human capabilities enabled by motion analysis using synaptic transistors
nature.com · 1d · Discuss: r/compsci · ⚡ Flash Attention

GPU-Serving Two-Tower Models for Lightweight Ads Engagement Prediction
medium.com · 1d · ⚡ Flash Attention

How low-bit inference enables efficient AI
dropbox.tech · 12h · Discuss: Hacker News · 🎯 Tensor Cores

The 4 Flash Attention Variants: How to Train Transformers 10× Longer Without Running Out of Memory
pub.towardsai.net · 6d · ⚡ Flash Attention

Index Exchange embeds AI attention signals into SSP for pre-bid targeting
google.com · 1d · ⚡ Flash Attention

A Neural Network Playground
playground.tensorflow.org · 4h · 🧮 cuDNN

The batch size in training
breno.bearblog.dev · 9h · 📊 Gradient Accumulation

TsinghuaC3I/Awesome-Memory-for-Agents: A Collection of Papers about Memory for Language Agents
github.com · 3h · 💡 LSP

NEURAL ARCHITECTURE: The Dawn of Direct Cognitive Programming and the Question of Human Sovereignty
medium.com · 9h · ⚡ Flash Attention

Decodability, sensitivity, and criticality measured through single-neuron perturbations
nature.com · 1d · ⚡ Flash Attention

Decoding active sites in high-entropy catalysts via attention-enhanced model
science.org · 1d · 🧩 Attention Kernels

Smart Screenshot Mockups
trendhunter.com · 8h · ⚡ Flash Attention

Stroke of Surprise: Progressive Semantic Illusions in Vector Sketching
stroke-of-surprise.github.io · 17h · 🧩 Attention Kernels

87.4% of Online Courses Never Get Finished. Here's Why (And What I Built to Fix It)
learnoptima.online · 2h · Discuss: DEV · 🤖 AI Coding Tools

Multitask Learning on Medical Data
pub.towardsai.net · 3h · 🏎️ TensorRT

Presentation: Building Embedding Models for Large-Scale Real-World Applications
infoq.com · 1d · 🎓 Model Distillation

Optimal timing for superintelligence
feeds.feedblitz.com · 1d · ⚡ Flash Attention