Catch-Up Algorithmic Progress Might Actually be 60× per Year

Published on December 24, 2025 9:03 PM GMT

Epistemic status: This is a quick analysis that might have major mistakes. I currently think there is something real and important here. I’m sharing it to elicit feedback, to update others insofar as an update is in order, and to learn where I’m wrong insofar as that’s the case.

Summary

The canonical paper on algorithmic progress is by Ho et al. (2024), who find that, historically, the pre-training compute required to reach a given level of AI capability decreases by about 3× each year. Their data covers 2012-2023 and focuses on pre-training.
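To make the compounding effect of this trend concrete, here is a minimal sketch of the arithmetic. The function and the example numbers are illustrative assumptions, not figures from Ho et al. (2024); the only input taken from the source is the ~3×/year rate.

```python
def compute_needed(initial_compute: float, years: float, rate: float = 3.0) -> float:
    """Pre-training compute needed to reach a fixed capability level
    `years` later, assuming efficiency improves `rate`x per year.

    The 3x/year default is the historical trend reported by
    Ho et al. (2024); the initial compute value below is made up.
    """
    return initial_compute / (rate ** years)

# Under a 3x/year trend, the same capability two years later takes
# 3^2 = 9x less compute.
print(compute_needed(1e24, 2))  # ~1.11e23 FLOP
```

Under this framing, the post's headline question is whether the effective `rate` for catch-up (rather than frontier) progress is closer to 60 than to 3.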

In this post I look at AI models from 2023-2025 and find that, based on what I think is the most intuitive analysis, …