AI Safety at the Frontier: Paper Highlights of October 2025

Published on November 5, 2025 1:39 PM GMT

tl;dr

Paper of the month:

Synthetic Document Finetuning creates deep, robust beliefs that withstand adversarial scrutiny, though egregiously false facts remain detectable, as do beliefs implanted via other knowledge-editing methods or narrow finetuning.

Research highlights:

  • White-box methods and response prefill enable secret knowledge extraction in small models.
  • Current models cannot simultaneously evade both reasoning and output monitors, struggle with ciphered reasoning, and sacrifice task performance when obfuscating.
  • Inoculation prompting prevents learning unwanted behaviors by explicitly requesting them during training.
  • Backdoor poisoning in pretraini…
