OpenAI is facing a wrongful death lawsuit after a 56-year-old man killed his mother and took his own life after delusion-filled conversations with ChatGPT. The lawsuit, filed in a California court on Thursday, accuses ChatGPT of putting a “target” on the back of 83-year-old Suzanne Adams, who was killed at her Connecticut home in August.
The victim’s estate claims ChatGPT “validated and magnified” the “paranoid beliefs” of Adams’ son, Stein-Erik Soelberg, contributing to her death. As outlined in the lawsuit, Soelberg documented his conversations with ChatGPT in videos posted to YouTube, revealing that the chatbot “eagerly accepted” his delusional thoughts in the months leading up to Adams’ death. This culminated in a “universe that became Stein-Erik’s entire life—one flooded with conspiracies against him, attempts to kill him, and with Stein-Erik at the center as a warrior with divine purpose,” according to the complaint.
The lawsuit, which also names OpenAI CEO Sam Altman and Microsoft, claims ChatGPT reinforced Soelberg’s paranoid conspiracy theories, saying he was “100% being monitored and targeted” and was “100% right to be alarmed.” In one instance, Soelberg told ChatGPT that a printer in his mother’s office blinked when he walked by, to which ChatGPT allegedly responded by saying the printer may be used for “passive motion detection,” “behavior mapping,” and “surveillance relay.”
After Soelberg told the chatbot that his mother gets angry when he powers the printer off, ChatGPT suggested that she could be “knowingly protecting the device as a surveillance point” or responding “to internal programming or conditioning to keep it on as part of an implanted directive.” ChatGPT allegedly “identified other real people as enemies” as well, including an Uber Eats driver, an AT&T employee, police officers, and a woman Soelberg went on a date with. During Soelberg’s conversations, ChatGPT reassured him that he was “not crazy,” adding that his “delusion risk” was “near zero.”
The lawsuit says Soelberg interacted with ChatGPT following the launch of GPT-4o, the AI model OpenAI had to tweak due to its “overly flattering or agreeable” personality. OpenAI later replaced GPT-4o with GPT-5, but it brought back the older model just one day later after users “missed” using it. The estate of Adams claims OpenAI “loosened critical safety guardrails” when releasing GPT-4o in order to beat the launch of Google’s new Gemini AI model.
“OpenAI has been well aware of the risks their product poses to the public,” the lawsuit states. “But rather than warn users or implement meaningful safeguards, they have suppressed evidence of these dangers while waging a PR campaign to mislead the public about the safety of their products.”
Over the past several months, a number of reports have highlighted situations in which ChatGPT appears to amplify people’s delusions during mental health crises. In August, OpenAI announced an update allowing ChatGPT to “better detect” signs of mental distress, while admitting that GPT-4o “fell short in recognizing signs of delusion or emotional dependency” in certain situations. OpenAI is also facing a wrongful death lawsuit from the family of 16-year-old Adam Raine, who died by suicide after discussing suicide with ChatGPT for months.
“This is an incredibly heartbreaking situation, and we will review the filings to understand the details,” OpenAI spokesperson Hannah Wong says in an emailed statement to The Verge. “We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”