Scour
Minimal, org-level wrapper for LLM calls? · reddit.com · 62w · r/LocalLLM
GitHub - open-webui/open-webui: User-friendly WebUI for LLMs (formerly Ollama WebUI) · github.com · 66w · r/LocalLLM
I'm running Ollama for a project and I wanted to know if there's easy documentation on how to fine-tune or RAG an LLM? · reddit.com · 62w · r/LocalLLM
📊 AI Priorities: Speed vs. Accuracy? Vote Now! (Linked Discussion Inside) · reddit.com · 62w · r/LocalLLM
I'm lost about what LLM model I should get for my hardware. · reddit.com · 62w · r/LocalLLM
GPT-4.5 has top rating on LM Arena · reddit.com · 62w · r/LocalLLM
I tested Inception Labs' new diffusion LLM and it's game-changing. Questions... · reddit.com · 62w · r/LocalLLM
r/ChatGPT · reddit.com · 62w · r/ChatGPT
I need FREE line segmentation software to use with Calamari AI OCR training models (as a home user, Octopus is unsuitable) · reddit.com · 62w · r/LocalLLM
Feedback requested on a reasoning model trained/fine-tuned using GRPO · reddit.com · 62w · r/LocalLLM
r/ollama · reddit.com · 62w · r/ollama
I am completely lost at setting up a local LLM · reddit.com · 62w · r/LocalLLM
Please 🥺 Can anyone explain why I don't get any text answer from the model (Janus-Pro-1b), which is running locally with PocketPal (Android app)? · reddit.com · 62w · r/LocalLLM
DeepSeek has won · reddit.com · 62w · r/LocalLLM
14B models too dumb for summarization · reddit.com · 62w · r/LocalLLM
NVIDIA 4090 flagship GPU: super powerful and stylish! · reddit.com · 62w · r/LocalLLM
"The Pace That Concerns Me" - A software engineer's perspective on the breathtaking speed of AI development · fanyangmeng.blog · 62w · r/LocalLLM, r/LocalLLaMA
Can anyone tell me what could’ve been causing this? Reinstalling the model fixed it, but I’m now left wondering what I just witnessed. · v.redd.it · 62w · r/LocalLLM
Any small LLMs that were trained only through selective distillation from larger models and RL (no pre-training with next-token prediction)? · reddit.com · 62w · r/LocalLLM
Currently running a 4060 Ti 16GB. Looking to expand. What's my best option of the three presented? · reddit.com · 62w · r/LocalLLM