Scour
Dive: An OpenSource MCP Client and Host for Desktop · reddit.com · 65w · r/LocalLLM
Planning a dual RX 7900 XTX system, what should I be aware of? · reddit.com · 65w · r/LocalLLM
Quickly deploy Ollama on the most affordable GPUs on the market · reddit.com · 65w · r/LocalLLM
Best Open-source AI models? · reddit.com · 65w · r/LocalLLM
1-Click AI Tools in your browser - completely free to use with local models · reddit.com · 65w · r/LocalLLM
Truly Uncensored LLM? · reddit.com · 65w · r/LocalLLM
I’m going to try HP AI Companion next week · reddit.com · 65w · r/LocalLLM
Advice on which LLM on Mac mini M4 pro 24gb RAM for research-based discussion · reddit.com · 65w · r/LocalLLM
Is 2x NVIDIA RTX 4500 Ada Enough for which LLMs? · reddit.com · 65w · r/LocalLLM
I built an LLM inference VRAM/GPU calculator – no more guessing required! · llm-gpu-memory-calculater.linpp2009.com · 65w · r/LocalLLM
Github and local LLM · reddit.com · 65w · r/LocalLLM
How do you feel about Interview Hammer, my AI-powered tool for real-time interview assistance? · v.redd.it · 65w · r/LocalLLM
Any way to disable “Thinking” in Deepseek distill models like the Qwen 7/14b? · reddit.com · 65w · r/LocalLLM
Built My First Recursive Agent (LangGraph) – Looking for Feedback & New Project Ideas · reddit.com · 65w · r/LocalLLM
Best way to go for lots of instances? · reddit.com · 65w · r/LocalLLM
How to make ChatOllama use more GPU instead of CPU? · preview.redd.it · 65w · r/LocalLLM
Structured output with Pydantic using non OpenAI models? · platform.openai.com · 65w · r/LocalLLM
Deployed Deepseek R1 70B on 8x RTX 3080s: 60 tokens/s for just $6.4K - making AI inference accessible with consumer GPUs · x.com · 65w · r/LocalLLM
Show HN: I built a tool for renting cheap GPUs for inference · open-scheduler.com · 72w · Hacker News, r/LocalLLM
LMStudio - Larger models not using GPU for compute at all · preview.redd.it · 65w · r/LocalLLM
« Page 25 · Page 27 »