Scour
Ollama: recent container version bugged when using embeddings (github.com · 61w · r/LocalLLM)
Is the new Mac Studio the cheapest way to run DeepSeek 671B? (reddit.com · 61w · r/LocalLLM)
Why Are My LLMs Giving Inconsistent and Incorrect Answers for Grading Excel Formulas? (reddit.com · 61w · r/LocalLLM)
Is the new Mac Studio with M3 Ultra good for a 70B model? (reddit.com · 61w · r/LocalLLM)
Training a Rust 1.5B Coder LM with Reinforcement Learning (GRPO) (reddit.com · 61w · r/LocalLLM)
Meta Aria 2 Glasses and On-Board AI (reddit.com · 61w · r/LocalLLM)
Looking for some advice (reddit.com · 61w · r/LocalLLM)
What is the feasibility of starting a company on a local LLM? (reddit.com · 61w · r/LocalLLM)
External GPU for LLM (reddit.com · 61w · r/LocalLLM)
Adding a P40 to my 1070 System - Some Questions! (reddit.com · 61w · r/LocalLLM)
Is it possible to run models on a PC with two GPUs, one AMD and one NVIDIA? Has anyone tried that? (reddit.com · 61w · r/LocalLLM)
AI moderates movies so editors don't have to: Automatic Smoking Disclaimer Tool (open source, runs 100% locally) (v.redd.it · 62w · r/LocalLLM)
Looking for the Best Local Only Model and Hardware (looking for low-end or high end) who can help specifically w/answering questions about how to do things in t... (reddit.com · 62w · r/LocalLLM)
Top LLM Research of the Week: Feb 24 - March 2 '25 (reddit.com · 62w · r/LocalLLM)
What's the most powerful local LLM I can run on an M1 Mac Mini with 8GB RAM? (reddit.com · 62w · r/LocalLLM)
Ollama-OCR (reddit.com · 62w · r/LocalLLM)
OpenArc v1.0.1: OpenAI endpoints, Gradio dashboard with chat; get faster inference on Intel CPUs, GPUs, and NPUs (reddit.com · 62w · r/LocalLLM)
Generate Entire Projects with ONE prompt (reddit.com · 62w · r/LocalLLM)
Step-By-Step Tutorial: Train your own Reasoning model with Llama 3.1 (8B) + Google Colab + GRPO (reddit.com · 62w · r/LocalLLM)
Advice for Home Server GPUs for LLM (reddit.com · 62w · r/LocalLLM)