Scour
👁️ Attention Optimization
Flash Attention, Memory Efficient, Sparse Attention, Transformers
Scoured 146,517 posts in 17.1 ms
PoM: A Linear-Time Replacement for Attention with the Polynomial Mixer
🧩 Attention Kernels · arxiv.org · 1d

Understanding Positional Embeddings in Transformers (with Intuition and Examples)
📉 Model Quantization · pub.towardsai.net · 5d

SNN brain-inspired gen-AI in C/C#, no external AI libs, could be promising?
⚡ ONNX Runtime · news.ycombinator.com · 22h · Hacker News

HiFi Rose RD160 D/A processor Specifications
⏱️ Benchmarking · stereophile.com · 5h

Why CNNs Still Matter in 2026 (Even with Transformers Everywhere)
🏎️ TensorRT · medium.com · 2d

Efficiency Comparison: Transformer Home’s Professional Automatic Transformer Slitting Line Solutions
⏱️ Benchmarking · einpresswire.com · 9h

Mamba4 Explained: A Faster Alternative to Transformers for Sequential Modeling
📊 Gradient Accumulation · analyticsvidhya.com · 6d

Liquid Neural Networks: The Future of Temporal AI in 2024
🎓 Model Distillation · blogagent-production-d2b2.up.railway.app · 2d · DEV

'Super Mario Galaxy's Box Office Milestone Reveals a Dark Fate for Blockbuster Movies
⚡ Flash Attention · movieweb.com · 1h

🧠 Bidirectional Encoder Representations from Transformers (BERT)
🎓 Model Distillation · medium.com · 4d

New comment by brentoakes025 in "Ask HN: Who wants to be hired? (April 2026)"
🐕 Ruff · drive.google.com · 2d · Hacker News

Aspect oriented data quality for dataflows
🔬 Static Analysis · docs.tabsdata.com · 1d · Hacker News

LLM inference engine from scratch in C++
🏎️ TensorRT · anirudhsathiya.com · 3d · Hacker News

America’s AI Build-Out Hinges on Chinese Electrical Parts (Bloomberg)
🤖 AI Coding Tools · bloomberg.com · 2d · Hacker News

30 Days of Building a Small Language Model
📉 Model Quantization · devopslearning.medium.com · 5d

MICA: Multivariate Infini Compressive Attention for Time Series Forecasting
🧩 Attention Kernels · arxiv.org · 15h

Studio amp done right 💫
🏗️ Build Systems · twogoodears.blogspot.com · 2d · Blogger

The Flash Attention Backward Pass Is the Part Nobody Explains. The final nail in the coffin
⚡ Flash Attention · medium.com · 6d

Top Oriented Silicon Steel Manufacturers Driving Innovation in the Steel Industry
⏱️ Benchmarking · einpresswire.com · 13h

Toymaker Hasbro says it may take weeks to recover from cyberattack
🦀 Rust · malware.news · 5d