Thinking about reasoning models made me less worried about scheming

Published on November 20, 2025 6:20 PM GMT

Reasoning models like Deepseek r1:

  • Can reason in consequentialist ways and have vast knowledge about AI training
  • Can reason for many serial steps, with enough slack to think about takeover plans
  • Sometimes reward hack

If you had told this to my 2022 self without specifying anything else about scheming models, I might have put a non-negligible probability on such AIs scheming (i.e. strategically performing well in training in order to protect their long-term goals).

Despite this, the scratchpads of current reasoning models do not contain traces of scheming in regular training environments, even when there is no harmlessness pressure on the scratchpads, as in Deepseek-r1-Zero.

In this p…
