Prompting Models to Obfuscate Their CoT
lesswrong.com·23h
🛡️AI Security
Preview
Report Post

Published on December 8, 2025 9:00 PM GMT

Summary:

  • Models can sometimes obfuscate their CoT when prompted to do so on a basic reasoning task
  • We can increase the rate of obfuscation by telling the model it failed previously
  • Obfuscation doesn’t significantly decrease task performance
  • Models display interesting strategies to avoid saying “taboo” terms:
    • Minimizing reasoning: models use fewer thinking tokens and answer directly
    • Self-censorship: models modify their CoTs to avoid the taboo terms
  • Some takeaways:
    • Models have some basic ability to control their CoT. We are excited about future work studying how general this ability is and how it scales with model capability. 
    • Models can sometimes re…

Similar Posts

Loading similar posts...