Scour
🧠 LLM (Large Language Model, GPT, Transformer)
Scoured 17609 posts in 31.0 ms
How Transformers Power LLMs: Step-by-Step Guide
🔤 Tokenization · analyticsvidhya.com · 6d
The LLM Is the New Parser
🦙 Ollama · github.com · 15h · DEV
Level Up Your LLM: From Prompting to Fine-Tuning for Real-World Results
✍️ Prompt Engineering · dev.to · 2d · DEV
llm-echo 0.3
🦙 Ollama · simonwillison.net · 1d
Reliable LLM JSON Output: Few-Shot Prompting & Robust Parsing
💬 NLP · dev.to · 4h · DEV
Speculative Decoding: How LLMs Generate Text 3x Faster
🤖 LLM Inference · analyticsvidhya.com · 1d
The Evolution of Natural Language Processing: A Journey from 1960 to 2020
💬 NLP · dev.to · 9h · DEV
ml-explore/mlx-lm: Run LLMs with MLX
🦙 Ollama · github.com · 1d
Unlock AI on Your Laptop: A Deep Dive into Small Language Models (SLMs)
🦙 Ollama · dev.to · 5d · DEV
Build a RAG Pipeline in Java (Text, Vector, LLM, No Paid APIs)
🔗 RAG · dev.to · 14h · DEV
Unlock the Power of Private AI: Build a Local RAG Pipeline with LangGraph, Ollama & Vector Databases
🏛 Sovereign AI Infrastructure · dev.to · 18h · DEV
LLM Fine-Tuning: The Complete Guide to Customizing Language Models (2026)
🤖 LLM Inference · dev.to · 6d · DEV
Build a Production-Ready SQL Evaluation Engine for LLMs
🦙 Ollama · dev.to · 2d · DEV
Scaling LLMs at the Edge: A Journey Through Distillation, Routers, and Embeddings
🦙 Ollama · dev.to · 1d · DEV
Symbols Not Chunks: 3.9x Fewer Tokens
🔤 Tokenization · dev.to · 4d · DEV
Before LLMs Could Predict, They Had to Count
💬 NLP · dev.to · 1d · DEV
Save Money on AI Using These Permanently Free LLM APIs
🤖 LLM Inference · dev.to · 4d · DEV
Three Things Had to Align: The Real Story Behind the LLM Revolution
💬 NLP · dev.to · 1d · DEV
Build an End-to-End RAG Pipeline for LLM Applications
🔗 RAG · dev.to · 1d · DEV
Semantic Caching for LLMs: Faster Responses, Lower Costs
🦙 Ollama · dev.to · 3d · DEV