Can we interpret latent reasoning using current mechanistic interpretability tools?

Published on December 22, 2025 4:56 PM GMT

Authors: Bartosz Cywinski*, Bart Bussmann*, Arthur Conmy**, Joshua Engels**, Neel Nanda**, Senthooran Rajamanoharan**

* primary contributors
** advice and mentorship

TL;DR

We apply standard mechanistic interpretability techniques to a simple latent reasoning LLM on math tasks to test whether its latent reasoning process (i.e., its vector-based chain of thought) is interpretable.

Results:

  • We find that the model solves math problems requiring three reasoning steps by storing the two intermediate values in specific latent vectors (the third and fifth of six).
  • The logit lens shows …
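
For readers unfamiliar with the logit lens, the sketch below shows its core operation applied to latent reasoning vectors: project each vector through the model's unembedding and read off the vocabulary tokens it most resembles. All names here (`model`, `tokenizer`, `latent_vectors`, and the GPT-2-style `transformer.ln_f` / `lm_head` layout) are illustrative assumptions for a minimal sketch, not the post's actual code.

```python
# Minimal logit-lens sketch, assuming a GPT-2-style Hugging Face model and a
# tensor `latent_vectors` holding the six latent reasoning vectors from one
# rollout (shape: n_latents x d_model). All names are hypothetical.
import torch

@torch.no_grad()
def logit_lens(latents: torch.Tensor, model, tokenizer, k: int = 5):
    """Project each latent vector through the model's unembedding and
    return the top-k vocabulary tokens it most resembles."""
    # Apply the final layer norm before unembedding, mirroring the forward pass.
    normed = model.transformer.ln_f(latents)      # (n_latents, d_model)
    logits = normed @ model.lm_head.weight.T      # (n_latents, vocab_size)
    top = logits.topk(k, dim=-1)
    return [
        [(tokenizer.decode([int(t)]), v.item()) for t, v in zip(toks, vals)]
        for toks, vals in zip(top.indices, top.values)
    ]

# Usage sketch: if the third and fifth latent vectors store the intermediate
# values as described above, their top tokens should decode to those numbers.
# for i, row in enumerate(logit_lens(latent_vectors, model, tokenizer)):
#     print(f"latent {i + 1}: {row}")
```
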
