Scour
Filter: 🤖 LLM · Large Language Models, GPT, Claude, ChatGPT, Transformers
Scoured 184,380 posts in 21.5 ms
Paper page - Large Language Models Explore by Latent Distilling
✨ LLMs · huggingface.co · 3h

How to Build Your Own Language-Specific LLM [Full Handbook]
✨ LLMs · freecodecamp.org · 5d

Network Edge Inference for Large Language Models: Principles, Techniques, and Opportunities
⚡ Edge AI · arxiv.org · 2d
LLMs are the world's most powerful autocomplete
✨ LLMs · alfredvc.no · 16h · Hacker News
LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning
✨ LLMs · machinelearning.apple.com · 1d · Hacker News

Why Model Collapse in LLMs is Inevitable With Self-Learning
✨ LLMs · hackaday.com · 15h · Hacker News

AmSach/kvquant: Drop-in KV cache compressor for local LLM inference - Run 70B models on 8GB RAM
📱 Edge AI Optimization · github.com · 5h · DEV
Text Summarization with Scikit-LLM
✨ LLMs · machinelearningmastery.com · 3d

Large Language Models in Communications
✨ LLMs · databricks.com · 1d

LLM 0.32a0 is a major backwards-compatible refactor
🪄 Prompt Engineering · simonwillison.net · 22h · Hacker News
Information Extraction from Electricity Invoices with General-Purpose Large Language Models
✨ LLMs · arxiv.org · 13h

The Recurrent Transformer: Greater Effective Depth and Efficient Decoding (5 minute read)
🖼️ Multimodal AI · alphaxiv.org · 1d

Show HN: How LLMs Work – Interactive visual guide based on Karpathy's lecture
✨ LLMs · ynarwal.github.io · 6d · Hacker News
cauchy221/Alignment-Whack-a-Mole-Code: The official code repo of Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in Large Language Models
✨ LLMs · github.com · 14h · Hacker News

A Primer on LLM Post-Training
✨ LLMs · pytorch.org · 2d · Hacker News

Training a Transformer to Compose One Step Per Layer (and Proving It)
✨ LLMs · lesswrong.com · 3d
Speculative Decoding vs MoE: 3.2x Cost Gap on Llama 3
💉 Prompt Injection · tildalice.io · 2d

Identifying the Achilles' Heel: An Iterative Method for Dynamically Uncovering Factual Errors in Large Language Models
✨ LLMs · arxiv.org · 13h

epscylonb/1386.ai.rocm: A lightweight transformer language model built from scratch in PyTorch, trained on a single consumer GPU with a full pipeline for data processing, pretraining, and instruction tuning.
⚙️ MLOps · github.com · 2d · Hacker News

Language Generation in the Limit
✨ LLMs · openreview.net · 5d