🦙 Ollama: Local LLM Server, Model Management, API Server, Inference Engine
Scoured 122,084 posts in 29.2 ms
- g1ibby/llm-deploy: Tool to manage ollama model on vast.ai [🚀 MLOps] · github.com · 7h
- Local AI in 2026: Ollama Benchmarks, $0 Inference, and the End of Per-Token Pricing [🧩 mimalloc] · dev.to · 6d · DEV
- I connected my local LLM to my browser and it changed how I automated tasks [🕸️ WASM] · xda-developers.com · 2d
- Self-Hosted AI on a 24GB GPU: OpenClaw + Ollama Setup Guide for Windows [🎨 WGPU] · blog.zolty.systems · 1h
- How Many Tries Does It Take? Iterative Self-Repair in LLM Code Generation Across Model Scales and Benchmarks [🔧 LLVM IR Optimization] · arxiv.org · 14h
- Build a Sovereign Local AI Stack: Ollama + Open WebUI + pgvector 2026 | Local AI & On-Device Inference [📦 Linux cgroups v2] · vucense.com · 2d · r/selfhosted
- Running Gemma 4 Locally with Ollama on Your PC [🧊 Iced] · analyticsvidhya.com · 6d
- How to Implement Tool Calling with Gemma 4 and Python [⚡ FastAPI] · machinelearningmastery.com · 22h
- Using Claude Code with Local LLM Models: The Complete Guide [⚡ Ruff] · jonathansblog.co.uk · 10h
- EU's Exposed AI Infrastructure [🧩 mimalloc] · insecurestack.substack.com · 6d · Substack
- Vane (Perplexica 2.0) Quickstart With Ollama and llama.cpp [🥖 Bun] · dev.to · 2d · DEV
- aweussom/NoLlama: NPU Ollama - An Ollama/OpenAI compatible API for Intel OpenVINO compatible computers [🧊 Iced] · github.com · 2d · Hacker News
- I used my local LLM to rebuild my workflow from scratch, and it was better than I expected [💬 Prompt Engineering] · xda-developers.com · 1d
- From ollama run to Tokens: What Really Happens When You Run an LLM Locally [🧩 mimalloc] · dev.to · 3h · DEV
- Ollama is still the easiest way to start local LLMs, but it's the worst way to keep running them [🔄 Hardware Transactional Memory] · xda-developers.com · 5d
- ENTERPILOT/GoModel: High-performance AI gateway written in Go - unified OpenAI-compatible API for OpenAI, Anthropic, Gemini, Groq, xAI & Ollama. LiteLLM alternative with observability, guardrails & streaming. [⚡ FastAPI] · github.com · 2d · r/selfhosted
- Ollama Pipelines on Mac: Chain Models Without Writing Glue Code [🌀 Naiad] · dev.to · 17h · DEV
- How I Built an Autonomous Dataset Generator with CrewAI + Ollama (72-hour run, 1,065 entries) [🌀 Naiad] · dev.to · 2h · DEV
- ollama/ollama v0.20.4-rc2 [🎮 QEMU TCG] · github.com · 6d
- How to Run Gemma 4 Locally With Ollama, llama.cpp, and vLLM [🔨 LLVM] · dev.to · 2d · DEV