Transformer Architecture
Attention, BERT, GPT, Sequence Models
Scoured 146,550 posts in 9.4 ms
Bidirectional Encoder Representations from Transformers (BERT)
🔗 RAG · medium.com · 4d
Task Bert
📝 TextRank · producthunt.com · 16h
Automated Attention Pattern Discovery at Scale in Large Language Models
🔗 RAG · arxiv.org · 2d
tmaselko/paper-attncap: Repository associated with the "Separate and Amplify: Attention's Geometry of Retrieval" paper. Contains TSAR synthetic task, minimal model, training/repro code, and chart/table generation.
🔢 Kolmogorov Complexity · github.com · 19h · Hacker News
Attention: The Secret Superpower Inside Every AI
Information Foraging · medium.com · 1d
Building GPT from Scratch
🔢 Kolmogorov Complexity · medium.com · 1d
Team Fusion@SU @ BC8 SympTEMIST track: transformer-based approach for symptom recognition and linking
🔍 Vector Search · arxiv.org · 5h
RPNT: Robust Pre-trained Neural Transformer -- A Pathway for Generalized Motor Decoding
💬 Prompt Engineering · arxiv.org · 6d
Transformer See, Transformer Do: Copying as an Intermediate Step in Learning Analogical Reasoning
💬 Prompt Engineering · arxiv.org · 5h
LAG-XAI: A Lie-Inspired Affine Geometric Framework for Interpretable Paraphrasing in Transformer Latent Spaces
🔢 Kolmogorov Complexity · arxiv.org · 1d
DDCL-INCRT: A Self-Organising Transformer with Hierarchical Prototype Structure (Theoretical Foundations)
Symbolic AI · arxiv.org · 6d
On the Geometry of Positional Encodings in Transformers
🔢 Kolmogorov Complexity · arxiv.org · 1d
Attention Editing: A Versatile Framework for Cross-Architecture Attention Conversion
Deep Learning · arxiv.org · 1d
Attention Mechanisms Through the Lens of Numerical Methods: Approximation Methods and Alternative Formulations
🔄 Systems Thinking · arxiv.org · 6d
Brain-to-Speech: Prosody Feature Engineering and Transformer-Based Reconstruction
✂️ Tokenization · arxiv.org · 1d
Linguistic Frameworks Go Toe-to-Toe at Neuro-Symbolic Language Modeling
Symbolic AI · arxiv.org · 3d
Vintix II: Decision Pre-Trained Transformer is a Scalable In-Context Reinforcement Learner
💬 Prompt Engineering · arxiv.org · 1d
PoM: A Linear-Time Replacement for Attention with the Polynomial Mixer
🔢 Kolmogorov Complexity · arxiv.org · 1d
S0 Tuning: Zero-Overhead Adaptation of Hybrid Recurrent-Attention Models
💬 Prompt Engineering · arxiv.org · 6d
Training Transformers in Cosine Coefficient Space
🔍 Vector Search · arxiv.org · 2d