Scour
🦙 Ollama
Local LLM Server, Model Management, API Server, Inference Engine
Scoured 185,677 posts in 21.9 ms
Local LLMs for Zed and Obsidian
🧮 Intel MKL-DNN · ianreppel.org · 2d
I finally found an open-source local LLM that actually competes with cloud AI
🔧 Abseil · xda-developers.com · 12h
Self-hosted AI assistant architecture in Node.js — Telegram, WhatsApp, Discord and Slack powered by local Ollama
🔄 Axum Middleware · documentcrustai.netlify.app · 1h · r/selfhosted
Building a tiny local LLM starter for real projects
⚙️ JIT Compilation · mager.co · 6d
Local LLM Proxy: Turn Idle LLM Compute into Universal Credits
☁️ Cloudflare Workers · github.com · 20h · Hacker News
CySecurity News: Remote Exploitation Risk Emerges From Ollama Out-of-Bounds Read Flaw
🦀 Pingora · cysecurity.news · 1d · Blogger
Local LLMs in 2026: What Actually Works on Consumer Hardware
🚀 Performance · studiomeyer.io · 3d · DEV
PYTHALAB-MERA: Validation-Grounded Memory, Retrieval, and Acceptance Control for Frozen-LLM Coding Agents
💬 Prompt Engineering · arxiv.org · 19h
In a quest to become AI-independent
🌀 Naiad · adlrocha.substack.com · 2d · Substack
Running a Local LLM on a 12-year-old Raspberry Pi 1
🥧 Raspberry Pi · blog.adafruit.com · 8h
Local AI needs to be the norm
🧩 mimalloc · news.ycombinator.com · 1d · Hacker News
Tracing tokens through Llama 3.1 8B inference on H100s
📱 Edge AI · krithik.xyz · 3d · Hacker News
Yes, local LLMs are ready to ease the compute strain
🧩 mimalloc · theregister.com · 2d · Hacker News
Ollama vulnerability highlights danger of AI frameworks with unrestricted access
🛡️ AI Security · csoonline.com · 5d
May 11, 2026 (#4665)
⚓ Anchors · alvinashcraft.com · 1d
Pinning a Local LLM to an RTX 5090: Five Hours, Several Faceplants, One Solid Setup
🧩 mimalloc · buraak.com · 6d · Hacker News
Ollama Out-of-Bounds Read Vulnerability Allows Remote Process Memory Leak
🔓 Binary Exploitation · thehackernews.com · 2d · r/LLM
RT by @AravSrinivas: We’ve developed our own inference engine Runtime-Optimized Serving Engine (ROSE) to serve models ranging from embeddings to trillion-parame...
📱 Edge AI · twitter.macworks.dev · 6d
asakin/dragoman: A small CLI that lets Claude Code reach non-Anthropic models — Ollama, Perplexity, OpenAI, Gemini — through one verb the existing subagent runtime can call.
🌀 Naiad · github.com · 7h · Hacker News
I built a free local LLM workflow with my 10-year-old GPU, and it's reliable enough to replace the cloud
🔄 Hardware Transactional Memory · xda-developers.com · 2d