Oct 29 2025: Build Your Own ArduTouch Synthesizer Workshop
nycresistor.com·54m
🎹MIDI Archaeology
Weird ideas welcome: VC fund looking to make science fiction factual
theregister.com·8h
🖥️Modern Terminals
Physical Warp Drives
arxiv.org·2h·
Discuss: Hacker News
🌡️Preservation Physics
Revisiting Karpathy's 'Unreasonable Effectiveness of Recurrent Neural Networks'
gilesthomas.com·18h·
Discuss: Hacker News
🎧Learned Audio
Can AI Co-Design Distributed Systems? Scaling from 1 GPU to 1k
harvard-edge.github.io·21h·
Discuss: Hacker News
🎯Performance Proofs
Writing regex is pure joy. You can't convince me otherwise.
triangulatedexistence.mataroa.blog·1d
Format Verification
Show HN: I built a SaaS in 8 weeks, solo, using our own AI platform
zine.ai·1d·
Discuss: Hacker News
🚀Indie Hacking
Stress-Testing Model Specs Reveals Character Differences among Language Models
arxiv.org·1d
📋Document Grammar
Building Luca: An AI Agent for Finance and Accounting Workflows
leapfin.com·1h·
Discuss: Hacker News
🔗Constraint Handling
Loyca.ai – An open-source, local-first AI assistant with contextual awareness
github.com·1h·
Discuss: Hacker News
🌀Brotli Internals
AsyncSpade: Efficient Test-Time Scaling with Asynchronous Sparse Decoding
arxiv.org·1d
⚙️Compression Benchmarking
Parameterized Complexity of Temporal Connected Components: Treewidth and k-Path Graphs
arxiv.org·3d
🎨Graph Coloring
Which Heads Matter for Reasoning? RL-Guided KV Cache Compression
arxiv.org·1d
📼Cassette Combinators
On Convex Functions of Gaussian Variables
arxiv.org·2d
📐Compression Mathematics
Can Risk-taking AI-Assistants suitably represent entities
arxiv.org·1d
🔗Constraint Handling
AI Fixed Coding, but Not the Bottleneck: Why Lisp, FP Still Matters
github.com·3d
🔗Lisp
Why LLMs cannot reach GenAI, but why it looked like they could
haversine.substack.com·21h·
Discuss: Substack
🧠Intelligence Compression
The Complexity of ChatGPT's Model Picker: A Comprehensive Analysis
dev.to·1d·
Discuss: DEV
🌳Context free grammars
TypeScript Flaws (2024)
intercaetera.com·4d·
Discuss: Hacker News
🎯Gradual Typing
OBCache: Optimal Brain KV Cache Pruning for Efficient Long-Context LLM Inference
arxiv.org·1d
💻Local LLMs