Context Reuse, KV Cache, Inference Optimization, Token Efficiency
Bio
hawkovitiello.com · 20h
Issue 106: Long-Term Memory for Civilization
500words.pika.page · 21h
7/11/2025, 9:13:33 AM
bsd.network · 10h
Big news: we've figured out how to make a *universal* reward function that lets you apply RL to any agent with:
threadreaderapp.com · 1h
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
lesswrong.com · 18h
A persona-based approach to AI-assisted software development
humanwhocodes.com · 22h