🦙 Ollama · Local LLMs, Self-hosted AI, Privacy-First AI, Offline Models
Scoured 15577 posts in 15.4 ms
Setting Up a Local LLM
📘 how to use AI · blog.miloslavhomer.cz · 4d
Running local models on Macs gets faster with Ollama's MLX support
🚀 Performance · arstechnica.com · 1d · Hacker News
Lemonade by AMD: a fast and open source local LLM server using GPU and NPU
💾 Local-First Software · lemonade-server.ai · 10h · Hacker News
Learning by Building, Part 3: Caltrain Bot, Local LLMs, and Reality
📘 how to use AI · dima.us.kg · 3d
Ollama is now powered by MLX on Apple Silicon in preview
🚀 Performance · ollama.com · 3d · Lobsters, Hacker News
RED-BASE/SpruceChat: A tiny AI that lives inside your handheld. Local LLM chat on spruceOS.
🕹️ PICO-8 · github.com · 5d · r/LocalLLaMA
For Null Meetup
👥 shared spaces · estudely.com · 5d
Eduardo García Llama, NASA mission engineer to the Moon: ‘There are two moments when our hearts will be in our mouths’
🚀 NASA · english.elpais.com · 3d
zolotukhin/zinc: Zig INferenCe Engine — LLM inference for AMD RDNA3/RDNA4 GPUs via Vulkan
🚀 Performance · github.com · 4d · Hacker News, r/LocalLLaMA, r/Zig
Breaking change in llama-server?
🤖 GPT4ALL · github.com · 5d · r/LocalLLaMA