🗣️ LLMs
Large Language Models, GPT, Transformers, Inference
Scoured 203,907 posts in 27.7 ms
Training-Inference Consistent Segmented Execution for Long-Context LLMs
🧠 LLM · arxiv.org · 3d
How LLM Inference Works
🧠 LLM · arpitbhayani.me · 2d · Hacker News
A deep dive into the Transformer architecture
🧠 LLM · blog.algomaster.io · 2d
Understanding LLMOps: Navigating the waters of large language models
🧠 LLM · mlops.community · 3d
Reinforcing Recursive Language Models (18 minute read)
🧠 LLM · alphaxiv.org · 3d · Hacker News
AI Paper Review: Language Models are Unsupervised Multitask Learners (GPT-2)
🧠 LLM · freecodecamp.org · 4d
NLP · Machine Learning
💬 Natural Language Processing · medium.com · 3d
A Hierarchical Language Model with Predictable Scaling Laws and Provable Benefits of Reasoning
🧠 LLM · arxiv.org · 2d
Non-linear Interventions on Large Language Models
🧠 LLM · arxiv.org · 1d
Stress-Testing the Reasoning Competence of LLMs With Proofs Under Minimal Formalism
🧠 LLM · arxiv.org · 2d
Correct Answers from Sound Reasoning: Verifiable Process Supervision for Language Models
🧠 LLM · arxiv.org · 2d
Enhanced and Efficient Reasoning in Large Language Models
🧠 LLM · arxiv.org · 1d
Layer-wise Representation Dynamics: An Empirical Investigation Across Embedders and Base LLMs
🧠 LLM · arxiv.org · 2d
Continual Fine-Tuning of Large Language Models via Program Memory
🧠 LLM · arxiv.org · 2d
An Interpretable and Scalable Framework for Evaluating Large Language Models
🧠 LLM · arxiv.org · 5d
Correcting Influence: Unboxing LLM Outputs with Orthogonal Latent Spaces
🧠 LLM · arxiv.org · 2d
Continuous Latent Contexts Enable Efficient Online Learning in Transformers
🧠 LLM · arxiv.org · 4d
An LLM-Based System for Argument Reconstruction
🧠 LLM · arxiv.org · 2d
Distribution Corrected Offline Data Distillation for Large Language Models
🧠 LLM · arxiv.org · 1d
Memory-Efficient Looped Transformer: Decoupling Compute from Memory in Looped Language Models
🧠 LLM · arxiv.org · 5d