Weird ideas welcome: VC fund looking to make science fiction factual
theregister.com·4h
🖥️Modern Terminals
Implicit `operator bool` participates in comparison
quuxplusone.github.io·1d
🦀Rust Verification
YouTube gets ~5% CTR lift on Shorts by replacing embedding tables with Semantic IDs
shaped.ai·1d
📊Feed Optimization
Lobsters Interview with Zdsmith
lobste.rs·1d·
Discuss: Lobsters
🔗Concatenative Programming
Revisiting Karpathy's 'Unreasonable Effectiveness of Recurrent Neural Networks'
gilesthomas.com·14h·
Discuss: Hacker News
🎧Learned Audio
Can AI Co-Design Distributed Systems? Scaling from 1 GPU to 1k
harvard-edge.github.io·17h·
Discuss: Hacker News
🎯Performance Proofs
Writing regex is pure joy. You can't convince me otherwise.
triangulatedexistence.mataroa.blog·1d
Format Verification
Show HN: I built a SaaS in 8 weeks, solo, using our own AI platform
zine.ai·21h·
Discuss: Hacker News
🚀Indie Hacking
A Lisp Interpreter for Linux Shell Scripting
jakobmaier.at·1d·
Discuss: Hacker News
🔗Lisp
Stress-Testing Model Specs Reveals Character Differences among Language Models
arxiv.org·1d
📋Document Grammar
AsyncSpade: Efficient Test-Time Scaling with Asynchronous Sparse Decoding
arxiv.org·1d
⚙️Compression Benchmarking
Parameterized Complexity of Temporal Connected Components: Treewidth and k-Path Graphs
arxiv.org·3d
🎨Graph Coloring
Which Heads Matter for Reasoning? RL-Guided KV Cache Compression
arxiv.org·1d
📼Cassette Combinators
On Convex Functions of Gaussian Variables
arxiv.org·2d
📐Compression Mathematics
AI Fixed Coding, but Not the Bottleneck: Why Lisp and FP Still Matter
github.com·3d
🔗Lisp
Why LLMs cannot reach AGI, but why it looked like they could
haversine.substack.com·16h·
Discuss: Substack
🧠Intelligence Compression
Can Risk-taking AI-Assistants suitably represent entities?
arxiv.org·1d
🔗Constraint Handling
The Complexity of ChatGPT's Model Picker: A Comprehensive Analysis
dev.to·1d·
Discuss: DEV
🌳Context free grammars
TypeScript Flaws (2024)
intercaetera.com·4d·
Discuss: Hacker News
🎯Gradual Typing
OBCache: Optimal Brain KV Cache Pruning for Efficient Long-Context LLM Inference
arxiv.org·1d
💻Local LLMs