Scour
LLM: Large Language Models, Transformers, GPT, Language AI
Scoured 204,972 posts in 24.3 ms
- An Interpretable and Scalable Framework for Evaluating Large Language Models · arxiv.org · 6d
- On Predicting the Post-training Potential of Pre-trained LLMs · arxiv.org · 4d
- Beyond LoRA vs. Full Fine-Tuning: Gradient-Guided Optimizer Routing for LLM Adaptation · arxiv.org · 6d
- LoopUS: Recasting Pretrained LLMs into Looped Latent Refinement Models · arxiv.org · 4d
- An LLM-Based System for Argument Reconstruction · arxiv.org · 3d
- Data Difficulty and the Generalization–Extrapolation Tradeoff in LLM Fine-Tuning · arxiv.org · 3d
- Interactive Critique-Revision Training for Reliable Structured LLM Generation · arxiv.org · 5d
- Can Language Models Analyze Data? Evaluating Large Language Models for Question Answering over Datasets · arxiv.org · 5d
- Curriculum Learning-Guided Progressive Distillation in Large Language Models · arxiv.org · 4d
- Continual Fine-Tuning of Large Language Models via Program Memory · arxiv.org · 3d
- LLiMba: Sardinian on a Single GPU -- Adapting a 3B Language Model to a Vanishing Romance Language · arxiv.org · 5d
- SLASH the Sink: Sharpening Structural Attention Inside LLMs · arxiv.org · 5d
- Latent Chain-of-Thought Improves Structured-Data Transformers · arxiv.org · 4d
- Valid Best-Model Identification for LLM Evaluation via Low-Rank Factorization · arxiv.org · 5d
- A Single-Layer Model Can Do Language Modeling · arxiv.org · 5d
- Learning, Fast and Slow: Towards LLMs That Adapt Continually · arxiv.org · 4d · r/MachineLearning
- Correcting Influence: Unboxing LLM Outputs with Orthogonal Latent Spaces · arxiv.org · 3d
- Navigating LLM Valley: From AdamW to Memory-Efficient and Matrix-Based Optimizers · arxiv.org · 5d
- Variational Linear Attention: Stable Associative Memory for Long-Context Transformers · arxiv.org · 4d
- Task-Adaptive Embedding Refinement via Test-time LLM Guidance · arxiv.org · 4d