🤖 Transformer Architecture
Attention, BERT, GPT, Sequence Models
Scoured 17,492 posts in 38.4 ms
How Transformers Power LLMs: Step-by-Step Guide
🔤 Tokenization · analyticsvidhya.com · 6d
Understanding Attention Mechanisms – Part 4: Turning Similarity Scores into Attention Weights
🎯 RLHF · dev.to · 1d · DEV
Scaling seismic foundation models on AWS: Distributed training with Amazon SageMaker HyperPod and expanding context windows
☁️ Hyperscaler Infra · aws.amazon.com · 1h
OrionsLock/SALOMI: Research code for extreme low-bit transformer quantization and inference.
🤖 LLM Inference · github.com · 11h · Hacker News
AI by Hand Library ~ Attention, MHA, MQA, GQA
🤖 AI Tools · byhand.ai · 2d
We reverse-engineered KAIROS from the Claude Code leak. Here's the open version.
📋 AGENTS.md · cathedral-ai.com · 3h · DEV
Training a Transformer with 1970s-era Technology
🤖 Transformers · hackaday.com · 3d
The Internalization of Gradients: From Prebiotic Chemistry to Mesa-Optimizers
🔗 Network Effects · lesswrong.com · 2d
not much happened today
✍️ Prompt Engineering · news.smol.ai · 1d
BrainMaxxing: the road less traveled in the age of AI
🔗 Neuroplasticity · startswithabang.substack.com · 6d · Substack
ReCUBE Benchmark Reveals GPT-5 Scores Only 37.6% on Repository-Level Code Generation
🤖 Large Language Models · gentic.news · 3d · DEV
RBF Attention Reveals Dot-Product's Hidden Norm Bias
🤖 Machine Learning · dev.to · 13h · DEV
Understanding Attention Mechanisms – Part 5: How Attention Produces the First Output
🧬 Cognitive Science · dev.to · 18h · DEV
AI Safety Guide for TRUE Beginners by TRUE Beginners
🛡️ AI Safety · lesswrong.com · 5d
Self Attention Flow ~ New Release!
🧘 Mindfulness · byhand.ai · 5d
Context Is All You Have: How LLM Attention Actually Works
🧠 Context Engineering · dev.to · 1d · DEV
Understanding Attention Mechanisms – Part 2: Comparing Encoder and Decoder Outputs
🤖 Transformers · dev.to · 5d · DEV
Residual Attention U-Net for Automated Multi-Class Segmentation of COVID-19 Chest CT Images
🤖 Transformers · dev.to · 6d · DEV
Self-improving Coding Agents
✍️ Prompt Engineering · dev.to · 5d · DEV
I Built an AI Resume Analyzer with GPT-5 vs GPT-4o-mini … and the Results Surprised Me 🚀
✍️ Prompt Engineering · dev.to · 5d · DEV