🤖 LLM (Large Language Models, GPT, Claude, ChatGPT, Transformers)
Scoured 186,542 posts in 19.6 ms
Language Generation in the Limit · ✨ LLMs · openreview.net · 6d
RNN to Transformer NMT: PyTorch Migration with 2.8x BLEU Gain · ✨ LLMs · tildalice.io · 6d
epscylonb/1386.ai.rocm: A lightweight transformer language model built from scratch in PyTorch, trained on a single consumer GPU with a full pipeline for data processing, pretraining, and instruction tuning. · ⚙️ MLOps · github.com · 2d · Hacker News
Identifying the Achilles' Heel: An Iterative Method for Dynamically Uncovering Factual Errors in Large Language Models · ✨ LLMs · arxiv.org · 20h
Ziggit and Large Language Models · ✨ LLMs · ziggit.dev · 6d
Adaptive Thinking: Large Language Models Know When to Think in Latent Space · 💭 Reasoning Models · machinelearning.apple.com · 2d
Cross-checking LLM outputs at scale without manual overhead · ⚙️ MLOps · asknestr.com · 4d · r/OpenAI
FlowBot: Inducing LLM Workflows with Bilevel Optimization and Textual Gradients · ✨ LLMs · arxiv.org · 20h
Granite 4.1 LLMs: How They’re Built · ✨ LLMs · huggingface.co · 1d · Hacker News
LLM 0.32a0 is a major backwards-compatible refactor · 🪄 Prompt Engineering · simonwillison.net · 1d · Hacker News
From $200 to $30: Five Layers of LLM Cost Optimization · ✨ LLMs · blog.dwornikowski.com · 6d · Hacker News
Shorthand for Thought: Compressing LLM Reasoning via Entropy-Guided Supertokens · ✨ LLMs · arxiv.org · 20h
Large Language Models are Not Table Saws · ✨ LLMs · agentultra.com · 3d · Hacker News
Spurious alignment between large language models and brains can emerge from non-robust methods and overlooked confounds · 🎯 Alignment Research · nature.com · 3d
LLM-Flax: Generalizable Robotic Task Planning via Neuro-Symbolic Approaches with Large Language Models · ✨ LLMs · arxiv.org · 20h
Speculative Decoding vs MoE: 3.2x Cost Gap on Llama 3 · 💉 Prompt Injection · tildalice.io · 3d
Language Anchoring: A Systematic Method for LLM Multilingual Adaptation · ✨ LLMs · github.com · 3d · Hacker News
Adaptive and Fine-grained Module-wise Expert Pruning for Efficient LoRA-MoE Fine-Tuning · ⚙️ MLOps · arxiv.org · 20h
Statistical Structure and the Failure of Pointing: A System-Class Law for Compression-Based Generative Systems · ✨ LLMs · philsci-archive.pitt.edu · 4d
A Systematic Approach for Large Language Models Debugging · ✨ LLMs · arxiv.org · 2d