LightReasoner: Can Small Language Models Teach Large Language Models Reasoning?
arxiv.org·18h
ML Language
Let's Write a Macro in Rust
hackeryarn.com·6h·
Discuss: Hacker News
🦀Rust Macros
Cactus Language • Semantics 3
inquiryintoinquiry.com·6h
📋Backus-Naur Form
Show HN: An implementation of jsmn in pure Zig
github.com·12h·
Discuss: Hacker News
📋JSON Parsing
LINQ and Learning to Be Declarative
nickstambaugh.dev·1d·
Discuss: Hacker News
📋Datalog
I built a translator for spatial thinking (because I can't interview in Python)
graemefawcett.ca·3h·
Discuss: Hacker News
🎮Language Ergonomics
Slip – A Lisp System in JavaScript
lisperator.net·8h·
Discuss: Hacker News
🌱Minimal Lisps
Building the Reasoning Engine at Axiom
axiommath.ai·2h·
Discuss: Hacker News
🎭Program Synthesis
Assuring Agent Safety Evaluations By Analysing Transcripts
lesswrong.com·12h
✨Effect Inference
Experimenting with ACL2 and Claude Code
mikedodds.org·10h·
Discuss: Hacker News
💬Interactive REPLs
In-Depth Analysis: "Attention Is All You Need"
dev.to·7h·
Discuss: DEV
🌱Minimal ML
Homomorphism Problems in Graph Databases and Automatic Structures
arxiv.org·18h
🔗Unification Algorithms
Python 3.14 brings template string literals, free-threading, and stdlib subinterpreters
alternativeto.net·1d
⚡Incremental Parsing
RND1: Simple, Scalable AR-to-Diffusion Conversion
radicalnumerics.ai·1d·
Discuss: Hacker News
🔬Nanopasses
Building a BPE Tokenizer from scratch - optimizations & experiments
reddit.com·2d·
Discuss: r/LocalLLaMA
🎓Teaching Compilers
Writing regex is pure joy. You can't convince me otherwise.
triangulatedexistence.mataroa.blog·20h·
Rope Editors
Zippers: Making Functional "Updates" Efficient (2010)
goodmath.org·1d·
Zipper Structures
LangChain.js is overrated; Build your AI agent with a simple fetch call
blog.logrocket.com·1d
🚂Cranelift Backend
Building MediBot: Integrating Django and Foundational NLP for Real-Time Medical Support Prototypes
future.forem.com·1d·
Discuss: DEV
🔄Subinterpreters
TRIM: Token-wise Attention-Derived Saliency for Data-Efficient Instruction Tuning
arxiv.org·1d
🪜Recursive Descent