Bevy, ECS Architecture, Asset Pipelines, Physics Simulation
Exploration hacking: can reasoning models subvert RL?
lesswrong.com·2d
Research Areas in Evaluation and Guarantees in Reinforcement Learning (The Alignment Project by UK AISI)
lesswrong.com·13h
Research Areas in Benchmark Design and Evaluation (The Alignment Project by UK AISI)
lesswrong.com·13h
SLTarch: Towards Scalable Point-Based Neural Rendering by Taming Workload Imbalance and Memory Irregularity
arxiv.org·2d
Using Containers to Speed Up Development, to Run Integration Tests and to Teach About Distributed Systems
arxiv.org·2d
RePaCA: Leveraging Reasoning Large Language Models for Static Automated Patch Correctness Assessment
arxiv.org·1d
I am worried about near-term non-LLM AI developments
lesswrong.com·1d