TIL: For long-lived LLM sessions, swapping KV Cache to RAM is ~10x faster than recalculating it. Why isn't this a standard feature?
🔮Prefetching
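The claim in that headline can be sketched with a toy stand-in for prefill: restoring a swapped-out KV cache is a single copy out of RAM, while recomputation must re-run the whole sequential prefill. Everything here (the hash-chain "model", the 32-byte entries, the function names) is an illustrative assumption, not a real transformer or any particular serving stack:

```python
import hashlib

def recompute_kv(prompt_tokens):
    # Stand-in for transformer prefill: each entry depends on all prior
    # tokens, so recomputation is inherently sequential O(n) work.
    # (Real KV entries are attention key/value tensors, not hashes.)
    cache = []
    state = b""
    for tok in prompt_tokens:
        state = hashlib.sha256(state + tok.encode()).digest()
        cache.append(state)
    return cache

def swap_out(cache):
    # Serialize the cache into host RAM (here: one contiguous bytes blob).
    return b"".join(cache)

def swap_in(blob, entry_size=32):
    # Restore the cache with a plain copy -- no model forward pass needed.
    return [blob[i:i + entry_size] for i in range(0, len(blob), entry_size)]

tokens = ["the", "quick", "brown", "fox"]
original = recompute_kv(tokens)       # expensive path: full prefill
restored = swap_in(swap_out(original))  # cheap path: memcpy from RAM
assert restored == original
```

The speedup in the headline comes from exactly this asymmetry: swap-in cost scales with cache *size* (memory bandwidth), while recompute cost scales with model depth times sequence length (FLOPs).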
Fungus: The Befunge CPU (2015)
⚙️Mechanical Sympathy
2025 Holiday Readiness Checklist (Page Speed Edition!)
speedcurve.com·21h
🚀Web Performance
Engineering a Rust optimization quiz
fasterthanli.me·10h
🦀Rust
Scala vs F#
alexn.org·15h
💻Programming languages
Research roundup: 6 cool science stories we almost missed
arstechnica.com·5h
🍄Mycorrhizal Networks
Beyond Brute Force: 4 Secrets to Smaller, Smarter, and Dramatically Cheaper AI
hackernoon.com·7h
🤖AI
AMD Ryzen 9 9900X3D vs Intel Core i9-14900K Faceoff — Intel's old-school flagship chip versus AMD's bleeding-edge tech
tomshardware.com·9h
🖥️Hardware Architecture
Gemini Links 01/11/2025: FIFO and Gemini Age Survey
techrights.org·6h
🪄Prompt Engineering
Linux 6.18 Kernel Happenings, Python 3.14, NTFSPLUS & Other October Highlights
phoronix.com·11h
🤖AI
A portable picokernel for async I/O
💫IO_uring
We’re back with episode 2 of 1 IDEA! Today, Vinay Perneti (VP of Eng @ Augment Code) shares his own Bottleneck Test.
🧭Content Discovery
Freewriting in my head, and overcoming the “twinge of starting”
lesswrong.com·20h
📝Write-Ahead Log