Scour
🧠 LLMs: Large Language Models, GPT, Gemini, Claude
A Primer on LLM Post-Training · 🔗 LangChain · pytorch.org · 2d · Hacker News
AutoSP: Long-Context LLM Training via Compiler-Based Sequence Parallelism · 🔗 LangChain · pytorch.org · 1d · Hacker News
Ziggit and Large Language Models · 🔗 LangChain · ziggit.dev · 6d
Building a Private Karpathy-Style LLM Wiki With gbrain and gstack · 🔗 LangChain · blog.saeloun.com · 2d
Statistical Structure and the Failure of Pointing: A System-Class Law for Compression-Based Generative Systems · 🔬 Anthropic · philsci-archive.pitt.edu · 2d
Large Language Models are Not Table Saws · 🔗 LangChain · agentultra.com · 3d · Hacker News
cauchy221/Alignment-Whack-a-Mole-Code: The official code repo of Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in Large Language Models · 🔗 LangChain · github.com · 16h · Hacker News
AI hallucinations, bias and data leaks: Expanding LLM risk landscape · ⚖️ AI Regulation · devdiscourse.com · 2d
Reimagining Kernel Generation at the PTX Layer: An LLM System Learning from DSLs to Outperform Them · 🔗 LangChain · standardkernel.com · 3d · Hacker News
The environmental impact of LLMs vs. SLMs · 🔗 LangChain · techtarget.com · 2d
FlowBot: Inducing LLM Workflows with Bilevel Optimization and Textual Gradients · 🔗 LangChain · arxiv.org · 15h
The Accordion Pattern: Why I stopped writing one fat LLM prompt · 🔗 LangChain · gw.portal.ldxhub.io · 1d · DEV
llm 0.31 · 🔗 LangChain · simonwillison.net · 5d
The Recurrent Transformer: Greater Effective Depth and Efficient Decoding (5 minute read) · 🤖 AI · alphaxiv.org · 1d
Show HN: "Be horse." – a diffusion language model on an M2 Air · 🤖 AI · boesch.dev · 1d · Hacker News
Winning a Kaggle Competition with Generative AI–Assisted Coding · 🔗 LangChain · developer.nvidia.com · 6d
Show HN: I built a 2nd-order PyTorch optimizer for LLMs that runs on 16GB GPUs · 🔗 LangChain · news.ycombinator.com · 1d · Hacker News
Speculative Decoding vs MoE: 3.2x Cost Gap on Llama 3 · 🔗 LangChain · tildalice.io · 3d
Are LLMs not getting better? · 🔗 LangChain · lesswrong.com · 1d
Not Everything Needs an LLM — Dave Hall at AI Engineer Melbourne 2026 · 🔗 LangChain · webdirections.org · 2d