TIL: For long-lived LLM sessions, swapping KV Cache to RAM is ~10x faster than recalculating it. Why isn't this a standard feature? (see the sketch after this list)
⚡Performance Engineering
Fungus: The Befunge CPU (2015)
🔧DSPy
Cycle-accurate 6502 emulator as coroutine in Rust
🔵Go
Don't give Postgres too much memory
⚡Caching Strategies
How We Saved 70% of CPU and 60% of Memory in Refinery’s Go Code, No Rust Required.
💾Browser Caching
ReLook: Vision-Grounded RL with a Multimodal LLM Critic for Agentic Web Coding
📸Visual Regression Testing
L16 Benchmark: How Prompt Framing Affects Truth, Drift, and Sycophancy in GEMMA-2B-IT vs PHI-2
❓Elaborative Interrogation
Opportunistically Parallel Lambda Calculus
🔧DSPy
Challenging the Fastest OSS Workflow Engine
🔵Go
No Cap, This Memory Slaps: Breaking Through the OLTP Memory Wall
🗄Database Optimization
Linux/WASM
🕸️WASM
Designing Data-Intensive Applications — Chapter 1: Reliable, Scalable, and Maintainable Applications
⚡Caching Strategies
How We Found 7 TiB of Memory Just Sitting Around
🦭Podman
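On the first post's claim: the idea is to park a session's KV cache in host RAM while the user is idle, then copy it back over PCIe instead of re-prefilling the whole history. Here is a minimal sketch of that pattern, assuming the legacy tuple-of-tuples past_key_values layout of older Hugging Face transformers releases (newer versions return a Cache object); "gpt2" and the variable names are illustrative stand-ins, not the post author's code.

```python
# Sketch: offload a session's KV cache to CPU RAM between turns,
# then restore it instead of recomputing the prefill.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # illustrative model
model = AutoModelForCausalLM.from_pretrained("gpt2").to("cuda").eval()

history = tok("...long conversation history...", return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model(**history, use_cache=True)
past = out.past_key_values                             # per layer: (key, value)

# Session goes idle: move every cache tensor to host RAM and free the VRAM.
cpu_past = tuple(tuple(t.to("cpu") for t in layer) for layer in past)
del out, past
torch.cuda.empty_cache()

# Session resumes: a device copy is far cheaper than re-running the
# prefill over the entire history (the post claims roughly 10x).
gpu_past = tuple(tuple(t.to("cuda") for t in layer) for layer in cpu_past)
new_turn = tok(" and the next user turn", return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model(input_ids=new_turn.input_ids,
                past_key_values=gpu_past, use_cache=True)
```

The trade-off is RAM footprint versus recompute time: the cache grows linearly with context length and layer count, so a serving stack has to decide per session whether the saved prefill outweighs holding those tensors in host memory.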