TIL: For long-lived LLM sessions, swapping KV Cache to RAM is ~10x faster than recalculating it. Why isn't this a standard feature?
🔮Prefetching
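The TIL above is about reusing a transformer's key/value cache across idle sessions instead of re-running prefill. A minimal Python sketch of the idea, using a toy single-head projection and numpy only; all names (`SessionCacheStore`, `prefill_kv`) are illustrative, not from any real serving stack:

```python
import numpy as np

def prefill_kv(tokens, Wk, Wv):
    """Recompute the KV cache from scratch: one matmul per projection.
    In a real model this is the expensive prefill pass over the prompt."""
    return tokens @ Wk, tokens @ Wv

class SessionCacheStore:
    """Keeps evicted KV caches in host RAM, keyed by session id."""
    def __init__(self):
        self._store = {}

    def offload(self, session_id, k, v):
        # Copy so the caller can free its own buffers
        # (GPU memory, in a real serving stack).
        self._store[session_id] = (k.copy(), v.copy())

    def restore(self, session_id):
        # Returns (k, v) or None if the session was never offloaded.
        return self._store.get(session_id)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((1024, 64))   # toy prompt embeddings
Wk = rng.standard_normal((64, 64))
Wv = rng.standard_normal((64, 64))

k, v = prefill_kv(tokens, Wk, Wv)          # expensive prefill
store = SessionCacheStore()
store.offload("sess-1", k, v)              # session goes idle

k2, v2 = store.restore("sess-1")           # session resumes: a memcpy
assert np.allclose(k2, k) and np.allclose(v2, v)
```

The claimed ~10x speedup would come from the restore path being a plain RAM-to-device copy, versus recomputing every layer's projections over the whole prompt.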
Engineering a Rust optimization quiz
fasterthanli.me·8h
🦀Rust
PCI Resizable BAR Improvements Heading To Linux 6.19
phoronix.com·6h
⚡Zero-Copy APIs
AMD Ryzen 9 9900X3D vs Intel Core i9-14900K Faceoff — Intel's old-school flagship chip versus AMD's bleeding-edge tech
tomshardware.com·6h
🖥️Hardware Architecture
Secretly Loyal AIs: Threat Vectors and Mitigation Strategies
lesswrong.com·19h
🛡️AI Security
2025 Holiday Readiness Checklist (Page Speed Edition!)
speedcurve.com·18h
🚀Web Performance
Pressure to change
maryrosecook.com·8h
👨‍💻Software development practices
Context Engineering: The Foundation for Reliable AI Agents
thenewstack.io·23h
🪄Prompt Engineering
Fungus: The Befunge CPU (2015)
⚙️Mechanical Sympathy
Tech startups discovered profitability after interest rates made venture capital scarce
nearlyright.com·9h
🚀Startups
Beyond Brute Force: 4 Secrets to Smaller, Smarter, and Dramatically Cheaper AI
hackernoon.com·4h
🤖AI
My first fifteen compilers (2019)
💻Programming languages
Linux Kernel Ported to WebAssembly
📦WASM
A portable picokernel for async I/O
💫IO_uring