🎯 Fine-Tuning
LoRA, RLHF, instruction tuning, SFT, training
ALTO: Adaptive LoRA Tuning and Orchestration for Heterogeneous LoRA Training Workloads
🔀 LoRA · arxiv.org · 2d

Show HN: Pre-training, fine-tuning, and evals platform
📊 AI Evals · oumi.ai · 6d · Hacker News

Gemma 4 Fine-Tuning Guide
🚀 MLOps · unsloth.ai · 19h · Hacker News

Model organisms researchers should check whether high LRs defeat their model organisms
🚀 MLOps · lesswrong.com · 8h

Attn-QAT: Making 4-Bit Attention Actually Work
🔀 LoRA · haoailab.com · 1d

Writing an LLM from scratch, part 32h – Interventions: full fat float32
🔀 LoRA · gilesthomas.com · 6d · Hacker News

milanm/AutoGrad-Engine: A complete GPT language model (training and inference) in ~600 lines of pure C#, zero dependencies
💬 LLMs · github.com · 17h · Hacker News

Post-SFT Alignment with DPO and GRPO: How to Fine-Tune Correctly, Part 6
⚙️ Inference · pub.towardsai.net · 1d

Compression technique makes AI models leaner and faster while they're still learning
🔀 LoRA · techxplore.com · 13h

RAG vs Fine-Tuning: What I Learned Building a Real AI Product
📊 AI Evals · medium.com · 2d

Fine-tuning Whisper to my speech: 27% to 6.5% WER
💬 LLMs · vivekkairi.com · 4d · Hacker News

How I Built a Fine-Tuned Medical AI App and Deployed It End-to-End on AWS
🚀 MLOps · medium.com · 8h

MegaTrain: Full Precision Training of 100B+ Parameter Large Language Models on a Single GPU
💬 LLMs · lemmy.ml · 1d

Using R to Teach R: Lessons for Software Development
🚀 MLOps · r-bloggers.com · 8h

Frontier Pretraining Infrastructure Is Already Open Source: GPT-OSS on TPU with MaxText
🚀 MLOps · patricktoulme.substack.com · 3d · Substack

PiTorch: ML on Baremetal Raspberry Pis
⚙️ Inference · masonjwang.com · 1d · Hacker News

Low-Rank Key Value Attention: Reducing KV Cache Memory and Maintaining Head Diversity
🔀 LoRA · fin.ai · 15h · Hacker News

SOTA Normalization Performance with Torch.compile
💬 LLMs · pytorch.org · 2d · Hacker News

Data Pipelines for Machine Learning: From Ingestion to Training (2026 Guide)
🚀 MLOps · flexiana.com · 20h

Agent Labs: Workload-Harness Fit
📊 AI Evals · akashbajwa.co · 6d · Hacker News