Myths Programmers Believe about CPU Caches
🧠Memory Models
Challenging the Fastest OSS Workflow Engine
📡Erlang BEAM
TIL: For long-lived LLM sessions, swapping KV Cache to RAM is ~10x faster than recalculating it. Why isn't this a standard feature?
🧠Memory Consistency
Fungus: The Befunge CPU (2015)
🌳B+ Trees
Understanding How Computers Actually Work
📦Compact Data
Superhuman AI for Multiplayer Poker
🐹Minimal Go
Accelerating AI inferencing with external KV Cache on Managed Lustre
cloud.google.com·1d
⚡Cache-Aware Algorithms
Some Fun Videos on Optimizing NES Code
bumbershootsoft.wordpress.com·7h
🌊Loop Invariant Motion
Don't give Postgres too much memory
🧠Memory Models
Well-Typed.com: Case Study: Debugging a Haskell space leak
well-typed.com·2d
📚Stack Allocation
How fast can an LLM go?
🗺️Region Inference
To grow, we must forget… but now AI remembers everything
doc.cc·12h
💾Persistent Heaps