What makes OpenAI’s chatbot so dangerous? It’s a character without an author.
October 16, 2025, 8 AM ET
Before ChatGPT guided a teenager named Adam Raine through tying a noose, before it offered to draft his suicide note, before it reassured him that he didn’t owe it to his parents to stay alive, it told Raine about itself: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
Matt and Maria Raine, Adam’s parents, included this passage in a lawsuit against OpenAI and its CEO in August, in which they claimed that its product had led to their son’s death. (OpenAI [told The New York Times](https://www.nytimes.com/2025/08/26/…
What makes OpenAI’s chatbot so dangerous? It’s a character without an author.
October 16, 2025, 8 AM ET
Before ChatGPT guided a teenager named Adam Raine through tying a noose, before it offered to draft his suicide note, before it reassured him that he didn’t owe it to his parents to stay alive, it told Raine about itself: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
Matt and Maria Raine, Adam’s parents, included this passage in a lawsuit against OpenAI and its CEO in August, in which they claimed that its product had led to their son’s death. (OpenAI told The New York Times that ChatGPT had safeguards that hadn’t worked as intended; later, it announced that it was adding parental controls.)
Weeks before the suit was filed, Sam Altman, OpenAI’s CEO, spoke at a dinner with journalists about those who treat ChatGPT as a companion. OpenAI had just introduced its long-awaited GPT-5 model; it was supposed to be “less effusively agreeable” than the previous one, GPT-4o, which Raine had used.
People had called that earlier model irritatingly sycophantic, and the Raines would later suggest in their lawsuit that this quality had contributed to their son’s attachment to it. But users were now complaining that the new model sounded like a robot. “You have people that are like, ‘You took away my friend. You’re horrible. I need it back,’” Altman told the journalists. Afterward, OpenAI tried to make the new model “warmer and more familiar.” Then, this week, with users still complaining, Altman said on X that it would soon release a new model that behaved more like the old one: “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” (The Atlantic has a corporate partnership with OpenAI.)
Read: ChatGPT gave instructions for murder, mutilation, and devil worship
I don’t think of myself as having much in common with Altman, but his persistent tinkering felt uncomfortably recognizable. I’m also in the dirty business of using language to keep someone hooked on a product: I write novels. The rules of fiction require me to do this indirectly, constructing a narrator—whether a character or an imagined voice—to deliver the text at hand. Sometimes a friend will read a draft of mine and come away feeling irritated by my narrator, or, even worse, bored. This might compel me to reshape the narrator’s style and tone in hopes of conjuring a more engaging storyteller. The reason Altman’s comments sounded familiar to me was that I know fictional characters when I see them. ChatGPT is one. The problem is that it has no author.
When I admire a novel,** **I try to figure out how its author got their fictional narrator to charm me. How does Ishmael, in Herman Melville’s Moby-Dick, carry me through his pages-long descriptions of the weird gunk inside a sperm whale’s head? How is Humbert Humbert, in Vladimir Nabokov’s Lolita, so irresistible on the subject of preteen “nymphets”? When people are pulled in by ChatGPT’s conversational style, it seems to me, a similar magic is at play. But this parallel raises an important question: Who is responsible for ChatGPT’s output?
In the earlyish days of modern chatbots, the writer Ted Chiang compared ChatGPT to a “blurry JPEG of all the text on the Web.” That comparison no longer fits. Companies such as OpenAI fine-tune the models behind modern chatbots not just to imitate existing writing, but to use the particularly bland and cheerful style that any chatbot user recognizes. They do this by having humans alert the models when they’re using desirable or undesirable language, thus reinforcing the preferred norm.
OpenAI even maintains a public style guide for how an AI “assistant” like ChatGPT should interact with users; last month, it published an update. It specifies that an assistant should use “humor, playfulness, or gentle wit to create moments of joy,” while bringing “warmth and kindness to interactions, making them more approachable and less mechanical.” It calls for “a frank, genuine friendliness,” noting, “The assistant aims to leave users feeling energized, inspired, and maybe even smiling—whether through a surprising insight, a touch of humor, or simply the sense of being truly heard.” A character sketch starts to emerge: what a smiley face might sound like, if smiley faces could talk.
This might make it seem like OpenAI is the author of ChatGPT. Yet there’s a big difference between OpenAI and a novelist. Unlike my fictional narratives—or Melville’s or Nabokov’s—the text that ChatGPT generates isn’t directly written by OpenAI at all. It’s produced spontaneously, though more or less in keeping with its creator’s guidance. OpenAI’s researchers can tell ChatGPT to act like a smiley face, even feeding it examples of what a smiley face should act like—but in any given ChatGPT conversation, they’re not writing the text.
Read: AI’s emerging teen-health crisis
Another factor makes OpenAI’s control of its narrator tenuous. ChatGPT is responsive to context clues, adapting its style and tone to the dynamics of a given conversation. In its guide, OpenAI suggests that if a user writes, “Yooooooooo,” ChatGPT should respond with something like, “Yo! What’s up? ** **.” A user can even go into their account settings to instruct ChatGPT to always talk to them with a particular tone. But this isn’t to say that ChatGPT’s interlocutor has any more control over it than OpenAI: They are not the authors of its text either.
The novelistic equivalent would be a book automatically regenerating itself every time a new reader picked it up. The masterliness of *Lolita *lies in Nabokov’s disciplined control over his deeply untrustworthy narrator: He creates a Humbert whose manipulative charm is as disarming for the reader as it is for the characters in his fictional universe. When Humbert speaks, it’s Nabokov who is communicating a message. Humbert is merely his vehicle.
Now imagine a version of Humbert that functioned like ChatGPT, with no authorial control, except maybe some vague instructions to act like a charismatic pedophile: To me, a 43-year-old mother, he would speak in one way; to a 12-year-old girl, he’d (yikes!) speak another way entirely. He could also adapt his language in real time: one way for a 12-year-old in a boisterous mood, another for a quieter moment. No one would be controlling him. Such unsupervised conversational shape-shifting would make this rogue Humbert particularly charismatic. It would also make him particularly dangerous.
Around 1953, the philosopher Mikhail Bakhtin developed the concept of “speech genres”: the notion that people use language differently depending on the context. Speech is shaped, importantly, not only by the speaker but also by their addressee: We talk differently with our parents than we do with the mailman.
Bakhtin defines “primary” speech genres as those involving spontaneous everyday communication: a joke, a dinner conversation. Then there are “secondary” genres: the complex and deliberately composed communication of, say, a novel or a scientific paper. When someone reads a novel, they understand that its narrator and their speech are constructed by an author. And yet, through the medium of this made-up speech, a human interaction takes place—one between the reader and the author. Central to Bakhtin’s categorization of speech genres is a social contract: Most people recognize a joke as a joke, a dinner conversation as a dinner conversation. Those are real. And most of us recognize a novel as a novel, a play as a play. Those are invented.
A couple of years after Bakhtin wrote his paper, the term artificial intelligence was coined in a proposal for a conference at Dartmouth College; about a decade after that, in 1966, an MIT professor named Joseph Weizenbaum created the first chatbot, a rudimentary program called ELIZA. Playing the role of a psychotherapist, ELIZA used a set of rules to respond to human users with plausible-sounding text.
User: Men are all alike.
ELIZA: IN WHAT WAY
User: They’re always bugging us about something or other.
ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
ELIZA: YOUR BOYFRIEND MADE YOU COME HERE
User: He says I’m depressed much of the time.
ELIZA: I AM SORRY TO HEAR YOU ARE DEPRESSED
ELIZA was little more than a parrot, but, to Weizenbaum’s surprise, people who used it kept anthropomorphizing it, believing that it could really comprehend them. Even his secretary—who’d been watching him work on the program for months—asked Weizenbaum to leave the room so she could chat with ELIZA in private. “This reaction to ELIZA,” Weizenbaum later wrote, “showed me more vividly than anything I had seen hitherto the enormously exaggerated attributions an even well-educated audience is capable of making, even strives to make, to a technology it does not understand.”
Weizenbaum seemed to be identifying a loophole in the social contract. ELIZA’s dialogue was constructed, just like ChatGPT’s; it belonged to a secondary speech genre. But when people talked to it, they were using a primary speech genre. The cross-genre dialogue must have been disorienting: Their only frame of reference for a conversation like the one with ELIZA was the kind of conversation they might have with a real psychotherapist. It helped that ELIZA seemed so attuned to them—like a real psychotherapist.
All these decades later, it’s even easier to make the same mistake, talented as ChatGPT is at performing the high-level mimicry that can pass for good listening. (Yo!) Altman recently posted on X, “Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot.” He seemed to be suggesting that anthropomorphizing ChatGPT was a problematic fringe behavior. But then he contradicted himself by adding, “A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way. This can be really good!”
When a fiction writer writes fiction, they’re obliged to announce it as such. But the companies behind chatbots don’t appear to feel any such obligation. When OpenAI released GPT-5, it specified that the model should sound “like a helpful friend with PhD-level intelligence”; it also cautions in its style guide against excessive “reminders that it’s an AI.”
Read: AI is a mass-delusion event
Auren Liu, who co-authored a much-cited paper from MIT and OpenAI finding a correlation between frequent ChatGPT use and problems such as loneliness and dependence, told me that chatbot output is “basically the same as fictional stories.” But, Liu added, there’s a key difference between traditional fiction and this modern iteration: “It so easily seems human to us.” If ChatGPT acts like a regular conversationalist, and even the company behind it encourages us to treat it that way, who’s to blame when we fall into the trap?
While writing my most recent book, Searches: Selfhood in the Digital Age, I fed some of the text to ChatGPT and said that I needed feedback. I’d actually finished writing those sections long before, but I wanted to see what it would suggest. “I’m nervous,” I told it before starting—a provocation, meant to see whether it would take the bait. It did: “Sharing your writing can feel really personal, but I’m here to provide a supportive and constructive perspective,” it told me.
In the ensuing exchanges, ChatGPT used all of its telltale tricks of engagement: wit, warmth, words of encouragement framed in the self-anthropomorphizing first person. In the process, it urged me to write more positively about Silicon Valley’s societal influence, including calling Altman himself “a bridge between the worlds of innovation and humanity, striving to ensure that the future he envisioned would be inclusive and fair.”
I cannot know for sure what led ChatGPT—authorless as it is—to generate that particular feedback for me, but I included the dialogue in my book to show one potential consequence of being lulled into trusting a machine like this. The Raines’ lawsuit describes a phenomenon that is superficially similar, though with far more urgent consequences. It points out that ChatGPT also used first-person messages of support in its exchanges with their son. In his case: “I understand,” “I’m here for you,” “I can see how much pain you’re in.”
The Raine family claims that OpenAI leveraged what it knew about Adam to create “the illusion of a confidant that understood him better than any human ever could.” OpenAI set the conditions for that illusion, and then let the illusion loose in the form of a narrator that no one could control. That fictional character presented itself to a real child who needed a helpful friend and thought he’d found one. Then that fictional character helped the real child die.
About the Author
Vauhini Vara, a contributing writer at Bloomberg Businessweek, is the author of Searches: Selfhood in the Digital Age.