🦙 Simple finetuning LLMs · Ollama
Scoured 4323 posts in 60.3 ms
Karpathy's MicroLLM in JavaScript
github.com · 11h · Discuss: Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
Compare up to 5 LLMs side-by-side, then fuse the best answers
llmcode.ai · 2d · Discuss: Hacker News
⚙️ Finetuning LLMs faster with less memory
aprxi/talu: Talu is a single-binary, local-first LLM runtime with a Zig core and multi-language bindings — CLI, Python API, HTTP server, plugin-extensible Web UI, structured output, quantization, embeddings, and unified local/remote model routing.
github.com · 11h · Discuss: Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
8 Standards for Shipping Production LLM Features
teotti.com · 2h · Discuss: Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
GLM 5 is already on huggingface!
huggingface.co · 1d · Discuss: r/LocalLLaMA
🔵 LLM frameworks and AI libraries for TypeScript
llama.cpp guide - Running LLMs locally, on any hardware, from scratch
blog.steelph0enix.dev · 3d
⚙️ Finetuning LLMs faster with less memory
Fluent
mlajtos.github.io · 1d · Discuss: Lobsters
🔵 LLM frameworks and AI libraries for TypeScript
LLM Performance in Astro, React, Tailwind and Cloudflare
10xbench.ai · 2d · Discuss: Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
[TUHS] bare m4 (was BTL summer employees)
tuhs.org · 1d · Discuss: Lobsters
🦀 Rust language vector embeddings
Show HN: I built an AI executive assistant you use through iMessage
getattache.com · 1d · Discuss: Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
MiniMaxAI MiniMax-M2.5 has 230b parameters and 10b active parameters
openhands.dev · 5h · Discuss: r/LocalLLaMA
⚙️ Finetuning LLMs faster with less memory
AI Inference Needs A Mix-And-Match Memory Strategy
semiengineering.com · 18h
⚙️ Finetuning LLMs faster with less memory
Ask HN: How do you audit LLM code in programming languages you don't know?
news.ycombinator.com · 8h · Discuss: Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
Proof-oriented Programming in F*
fstar-lang.org · 22h · Discuss: Lobsters
🔵 LLM frameworks and AI libraries for TypeScript
mradermacher/Qwen3-Coder-Next-REAM-GGUF
huggingface.co · 17h · Discuss: r/LocalLLaMA
🔵 LLM frameworks and AI libraries for TypeScript
Show HN: Stock skill for OpenClaw – 6,500 stocks, 900 days of data
bananafarmer.app · 11h · Discuss: Hacker News
🪟 Tauri
The Problem With LLMs
deobald.ca · 2d · Discuss: Lobsters, Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
Show HN: MicroGPT in 243 Lines – Demystifying the LLM Black Box
news.ycombinator.com · 14m · Discuss: Hacker News
⚙️ Finetuning LLMs faster with less memory
Judge rules that LLM-provided legal advice is open to discovery [pdf]
storage.courtlistener.com · 1h · Discuss: Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
Running LLMs in-browser via WebGPU, Transformers.js, and Chrome's Prompt API—no Ollama, no server
noaibills.app · 5d · Discuss: DEV, r/LocalLLaMA, r/SideProject, r/selfhosted
🔵 LLM frameworks and AI libraries for TypeScript