Scour
LocalLlama (reddit.com)
Announcing 1-bit Bonsai: The First Commercially Viable 1-bit LLMs (prismml.com · 5w · Hacker News, r/LocalLLaMA, r/singularity)
JackChen-me/open-multi-agent: Production-grade multi-agent orchestration framework. Model-agnostic, supports team collaboration, task scheduling, and inter-agent communication. (github.com · 5w · Hacker News, r/ClaudeAI, r/LocalLLaMA)
[Developing situation]: Why you need to be careful giving your local LLMs tool access: OpenClaw just patched a critical sandbox escape (github.com · 5w · r/LocalLLaMA)
Parsing, Indexing, and Self-Hosting a Library of 10,000 GLP-1 Studies on a 6-Year-Old PC with SQLite, Docling, and a Little Bit of Elbow Grease (elliotbroe.com · 5w · r/LocalLLaMA)
Environment variables (code.claude.com · 6w · r/LocalLLaMA)
avelino/mcp: CLI that turns MCP servers into terminal commands, single binary (github.com · 5w · r/LocalLLaMA)
kobie3717/ai-iq: AI-IQ: Persistent context system for AI coding assistants. AI doesn't need knowledge — it needs relevant context. Hybrid search (FTS + semantic), graph intelligence, zero config. (github.com · 5w · r/LocalLLaMA)
Qwen3.5 Omni Plus World Premiere (youtu.be · 5w · r/LocalLLaMA)
Release llamafile v0.10.0 (github.com · 5w · r/LocalLLaMA)
Agents of Chaos (arxiv.org · 10w · Hacker News, r/LocalLLaMA)
jaberio/LlamaStick: 🧠 Run AI models anywhere — zero-install, portable LLM toolkit for USB drives. Cross-platform CLI for Windows, macOS & Linux. Powered by llamafile. (github.com · 5w · r/LocalLLaMA)
zeroclaw-labs/zeroclaw: Fast, small, and fully autonomous AI assistant infrastructure — deploy anywhere, swap anything 🦀 (github.com · 11w · Hacker News, r/LocalLLaMA)
RSBalchII/anchor-engine-node: A privacy-first context engine for any human-facing LLM interaction. Bring the right context when you need it, export or save results, then clear everything and start a new chat without bringing baggage. Built for individuals who want better outputs without giving up control of their data. (github.com · 8w · Hacker News, r/LocalLLaMA, r/SideProject)
Git-aware agent memory that syncs across a team — no cloud, all local embeddings (github.com · 5w · r/LocalLLaMA, r/SideProject, r/node)
RhinoDevel/mt_llm: Pure C wrapper library that makes using llama.cpp on Linux and Windows as simple as possible. (github.com · 5w · r/LocalLLaMA)
LH-Tech-AI/dove-detector: Two-stage AI pigeon detector: YOLO26 spots birds in ~50 ms, CLIP classifies pigeon/dove. CPU only, no GPU needed. 🐦🔊 (github.com · 5w · r/LocalLLaMA)
Built a surgical weight editor for local GGUF models: edit individual weights directly, no GPU, no training loop (open source). (github.com · 5w · DEV, r/LocalLLaMA)
Jan.AI as these seem to be what most people are using, although I'm open to suggestions for other inference engines. (jan.ai · 162w · r/LocalLLaMA)
TurboQuant-style quantization for weights was already researched months ago: FP-Quant (github.com · 5w · r/LocalLLaMA)
Achilles1089/duplex-chat: AI that thinks while you type. Speculative inference protocol that eliminates perceived latency in AI chat. (github.com · 5w · r/LocalLLaMA)