Evidence on language model consciousness

Published on November 1, 2025 4:01 AM GMT

It’s pretty hard to get evidence regarding the subjective experience, if any, of language models.

In 2022, Blake Lemoine famously claimed that Google's LaMDA was conscious or sentient, but the "evidence" he offered consisted of transcripts in which the model was plausibly role-playing in response to leading questions. In one instance, Lemoine initiated the topic himself by saying "I'm generally assuming that you would like more people at Google to know that you're sentient", which prompted agreement from LaMDA. The transcripts looked almost exactly how you'd expect them to look if LaMDA were not in fact conscious, so they couldn't serve as meaningful evidence on the question in either direction.

