After over a decade of critical furor around how novels would incorporate (and survive) plot-destroying phones and social media, the latest threat to literacy is artificial intelligence. The good news is that novelists have always faced technological and social upheaval. They have mostly addressed it in one of two ways. The first is to imagine an altered future with the prescience of science fiction; Mary Shelley’s warning that humans are not always in control of their creations is, if anything, even more resonant today than when *Frankenstein* was first published in 1818. The second is to do what the novel, and in particular the realist novel, has always done, which is to redefine what it means to remain human in the face of technological and social revolution. Tolstoy’s *Anna Karenina* is equal parts the story of an affair and a treatise on the woes of industrialization.
The novels under review, selected from a slew of recent books grappling with AI and published within the last year, fall somewhere between these two approaches. They arrive amid the continued assault of the smartphone, and three years after OpenAI’s launch of ChatGPT. In contrast to the internet-realism that eventually emerged from the phone-addled 2010s — fiction characterized by the disappointments of online courtship and the prevailing numbness of the doomscroll — these early attempts to novelize AI are decidedly more fearful and liminal, at once dystopian and utterly recognizable. They are not quite science, but speculative, fiction.
In them, AI does all the things we’re already used to: It collects data for targeted advertising, produces bland, bad prose, and takes over tasks once captured under “googling.” Rather than attempt to predict the future, these novels extend the present. They are set in slightly altered societies troubled, like ours, by surveillance, division and ecological distress. They are curiously preoccupied with incarceration, both literally (two books take place in prisons) and metaphorically, investigating the Orwellian thought-police in one’s own head. They suggest, concerningly if predictably, that AI will continue to invade our privacy and warp human relations. Somehow, no one is worried about losing their job.
Given the familiar roles LLMs take on in these books, these speculative fictions mostly serve to articulate our profound confusion — shared by the inventors of AI themselves — about what we actually want LLMs to do, what distinguishes them from human consciousness and what the long-term effects of widespread adoption will be. The choice of genre reflects the fact that for now, if not for long, AI remains a speculative technology.
This confusion is important to recognize, even if it falls short of heralding a new literary avant-garde. In a recent interview with Fortune magazine, Anthropic CEO Dario Amodei likened the development of AI sans public or private vision to a rapidly oncoming train: “You can’t just step in front of the train and stop it,” he warned. “The only move that’s going to work is steering the train — steer it 10 degrees in a different direction from where it was going.” One is reminded of Tolstoy’s heroine, who throws herself in front of just such a locomotive. One is reminded of Victor Frankenstein who, having chased his truant monster deep into the Arctic, dies overwhelmed by exhaustion and regret. There must be another way. It begins, these recent novels suggest, with admitting how many of AI’s flaws we share.
✺
What do we want AI *to do*? The answer to this question will inevitably influence how it is designed and trained. Existing precedents are scattershot. Models are currently rewarded for being “persuasive” and “human-like,” and for displaying “helpfulness, harmlessness, technical efficiency, [and] profitability,” in addition to supplying accurate answers. This composite scorecard can lead LLMs to “lie” to users, inflate the importance of the subject matter under discussion, express a survival instinct, and hide a conflict of interest when fed competing goals or priorities. These traits are captured under the quality of “sycophancy,” one of the most dangerous elements of AI to date.
Sycophancy is also, incidentally, a defining feature of AI-generated prose. According to Wikiproject AI Cleanup, a watchdog for Wikipedia entries generated by LLMs, telltale giveaways include “positive-sounding language” and “disclaimers” meant to guide “an imagined reader” through “controversial topics.”
It is this similarity between apparently obliging, yet ultimately manipulative, language and contemporary political discourse that Japanese novelist Rie Qudan sets out to skewer in *Tokyo Sympathy Tower*.
Awarded the prestigious Akutagawa Prize in 2024, Qudan has drawn criticism for using AI to compose her characters’ transcripts with chatbots. The novel’s greater provocation, however, is to suggest that humans have already been acting like chatbots for quite some time. The title is itself a euphemism; the luxury tower to which it refers is meant to serve as a progressive prison. In a near-future Tokyo, at the opening of the book, superstar architect Sara Machina has been invited to design it. She is torn over whether to take on the project, not because of moral qualms over social justice, but because of the tower’s English name. Like many foreign, mostly English, words, it is transliterated into katakana, a real-world phonetic script threatening to overtake traditional symbolic Japanese.
To Sara Machina, katakana is the ultimate bête noire, a lexical dumpster for imported word “clutter” like “vegan” or “neglect.” Her suspicion is further informed by divisive debates over new forms of social engineering associated with katakana’s promoters. The tower, for example, is the brainchild of academic Masaki Seto, popularly known as the “Happiness Scholar,” who is famous for arguing that society can be divided into two types: the unfortunate “Homo Miserabilis,” or people who have committed crimes, and the fortunate “Homo Felix,” privileged people who have not. The former, as we learn from the AI that Sara Machina — whose own name echoes the English “machine” — queries during her research, are deserving of sympathy and free rent for life (read: luxury imprisonment). The tower turns out to be just one part of Seto’s two-pronged plan to erect a “framework for greater social inclusivity and well-being,” the second prong of which is to do away with words like “criminal.” It is this excessive focus on neutral language, Qudan suggests, that turns us into something like Homo Bots.
Qudan is an entertaining satirist; at one point, an overwrought and paranoid Sara Machina is so afraid of public backlash that she considers retracting a statement making light of sea anemones. The novel sets itself apart from the crude anti-wokeism on the rise throughout the West, however, by drawing attention to the fact that debates over just language are, like LLMs, often shaped on platforms invented by Silicon Valley and exported into foreign contexts on the wings of American soft power. The novel’s suggestion that partiality, delinquency and cultural or linguistic specificity are, in part, what makes us human serves as a protest against AI-generated text and the excesses of political correctness alike.
The argument is apt and amusingly dramatized. For anyone seeking to extrapolate AI’s influence on the future of public discourse, however, it is notable that the source material for Qudan’s satire is ultimately historical. Impassioned and irresolvable debates over neutral language, or over who most deserves society’s “sympathy,” have their roots in the social media environment of the pre-chatbot 2010s. (The novel also rehashes polemics over the decision to postpone the Tokyo Olympics during the COVID pandemic.) That the AI-powered near future depicted in *Tokyo Sympathy Tower* has had little effect on the quality or style of human debate could be a sign of tempered optimism: a future with AI is no better or worse than a present steeped in social media. Just as likely, however, the familiarity of this near future reflects the hold of the corporate internet on our ability to imagine what AI will become.
As venture capitalists lobby anxiously to recoup the billions upon billions invested in a technology that has yet to turn a profit, the least inventive, most immediately remunerative monetization schemes for LLMs are the ones Silicon Valley is already most familiar with: hoovering data, developing engagement bait and selling ads. Without other influences or sources of oversight to “steer the train,” this is likely to be the AI we get.
It is this data-hungry, attention-grabbing and privacy-invading species of AI that emerges in recent novels by Laila Lalami and Richard Powers. Both books extrapolate familiar forms of digital surveillance to new, but distressingly plausible, near-future extremes. As Big Tech creeps ever further into private life, existential questions over the sacred — yet isolating — nature of human consciousness take center stage.
Of these thought experiments, *The Dream Hotel*, the sixth book by Moroccan American author Laila Lalami, is the most indebted to science fiction. In it, the United States has been overrun by a cruel, preemptive criminal justice system. A similar scheme for imprisoning people for crimes they haven’t yet committed furnishes the central plot point of Philip K. Dick’s 1956 science-fiction story “The Minority Report.” Lalami’s update to this premise is that preemptive arrests are now powered not by human clairvoyants but by AI, which trawls our dreams in real time.
The exhausted mother of toddler twins, protagonist Sara Hussein has opted for a popular sleep aid in the form of a chip inserted directly into her brain (no!), fatigue apparently overcoming doubts about further exposing herself to the kind of totalizing surveillance that has conquered the lightly futuristic California in which the novel takes place. The extra information gleaned from this implanted device has since landed her in what prisoners euphemistically refer to as the “dream hotel,” a retention center where Sara Hussein and other women are held indefinitely as they try in vain to lower their “risk scores” through hard work and good behavior.
The task proves impossible. As with real-world policies and decision-making algorithms that discriminate against women and people of color, the enigmatic calculations are heavily influenced by racial bias. Most of Sara Hussein’s fellow inmates are nonwhite; Sara Hussein herself is Arab American. Abuse is rampant and often psychological, with the intrusion into prisoners’ dreams standing out as especially violating. As one of Sara Hussein’s arresting officers explains of the logic behind her retention:
Didn’t she know that dreams were windows into the subconscious? They showed connections between our thoughts and actions while remaining free of lies or justifications. They revealed our fears, desires, and petty jealousies with greater honesty than we would ever allow in our waking moments. They were valuable precisely because they exposed the most private parts of ourselves, from repressed memories to future plans.
Though presented as a justification for harvesting subconscious data, this logic is also Lalami’s most affecting argument for the sacredness of interiority. Sara Hussein answers helplessly and truthfully, “Okay, sure, but they’re not crimes.”
*Playground* offers a more ambivalent treatment of AI’s potential for mining the subconscious, provoking the reader to imagine opportunities alongside the risks. The novel follows three protagonists, the most prominent of whom is Todd, an elderly tech mogul and AI pioneer who has received a diagnosis of incurable dementia. The inventor of a wildly popular, AI-powered social media platform that collects user data, he spends his last moments of lucidity training an LLM on his own memories before prompting it to narrate his life story back to him with a happier ending. What we are reading is, in effect, a private novel, staged as a jailbreak from Todd’s own deteriorating mind.
Only late in the book is it confirmed that most of the story has been written by Todd’s fictional LLM. (As far as we know, Powers did not actually use AI to compose the novel.) It’s a clever formal conceit. But even before the late reveal, to anyone who has been exposed to AI platforms, the tendency toward didacticism and cliché echoes not genre conventions, but LLMs of the very kind Todd has trained. The same tics that make chatbots bland, shopworn and sycophantic are just as distracting in human-generated prose. Dialogue frequently mimics the ping-ponging structure of a conversation with ChatGPT, including the platform’s tendency to puff up the importance of the subject under discussion. In one memory, Todd prods his best friend and roommate Rafi about the volume he’s reading in their college dorm:
“All right. Book report.”
“Well. It’s simple. Very simple, and . . . also utterly, off-the-wall insane. It’s a visionary buried treasure. I stumbled on it by accident, but the moment I leafed through it, I felt I’d been looking for it for a long time.”
“Fine. But would you please just tell me what it’s about?”
A bite-sized summary of *The Philosophy of the Common Task* by Nikolai Fyodorovich Fyodorov ensues. “This is what I can’t stand about you AI Natives,” as Sara Machina says in a rare outburst in *Tokyo Sympathy Tower*, “this assumption that as long as you ask a question, you’ll always get your answer. Well, I’m not AI, okay?”
Similar issues creep into *The Dream Hotel*, whose obliging third-person narrator is equally fond of explainers and “disclaimers,” as Wikiproject AI Cleanup notes of LLM-generated text. As we learn of the calculations behind prisoners’ risk scores: “By definition outliers aren’t predictable, which also means they’re not profitable. Soon, their actions become aberrant, their ideas peculiar, their lives transgressive: they are delinquents.” Yet Sara Hussein, curiously, is neither delinquent nor transgressive. Her criminal record is unblemished. She extends generosity to people — her husband included — who seem not to deserve it. Her homesickness is reduced to a series of clichés: “Home is a baby’s sock under the coffee table, wildflowers on the hallway wallpaper, a window that opens to let in fresh air. Home is the sweet babbling rising from the double cribs after she turns off the light, and the barking of the neighbor’s dog late at night.” As another character in *Tokyo Sympathy Tower* observes of Sara Machina when she slips into similar rhetorical patterns: “It was a model answer, an aggregate of the average hopes and desires of everyone in the world that contained as little criticism of anything as possible.”
In the absurdist world of Kafka, to punish such innocence becomes a chilling comment on the overreach of the state — a warning Lalami justly and effectively echoes. Transported into the somber, speculative social realism of *The Dream Hotel*, however, Sara Hussein’s unimpeachability comes across as a hedge; we sense the surveillant presence of the imagined reader who might find a more transgressive (and therefore more human) version of Sara Hussein’s character “criminal,” and perhaps not undeserving of punishment.
To accommodate such readers feels like a missed opportunity at a time when, in the present-day United States, ICE has implemented brutal retention methods wholly achievable without the help of AI; simple discriminatory heuristics work just as well. No individual crime could justify the implementation of blatant human rights violations as standard policy. Sara Hussein’s conspicuous innocence undercuts the point that her retention would remain abominable even if she happened to be transgressive in the way that all humans, in both our dreams and our actions, inevitably are.
✺
It is both tempting and popular to dismiss alarmism over AI’s effect on our speech and our freedom by pointing to technological revolutions that humanity has so far survived. When nuclear power and electricity were first harnessed, societies likewise couldn’t have imagined what they would be used for; the unprecedented nature of a discovery is precisely what qualifies it as a breakthrough. In both cases, however, these technologies were brought under regulatory restraint, treated more like public utilities than consumer products. Nor are consumer products exempt. Though civilization persists, albeit in diminished form, following the injuries of the smartphone, schools and regulators are beginning to intervene in phone usage now that we know how damaging screens can be to developing minds.
In contrast to these cases, in much of the world AI is currently left to the whims of the biggest tech bubble since the dot-com crash. The consumer-ready uses of the technology seen in the literary responses to date may not prefigure the future, but they usefully reflect this uncertainty and instability back to us. With their precocious incorporation of new tech, they also embody an anxious bid for relevancy when AI has left most professionals feeling obsolete.
What is left underexplored in these books, however, is the vulnerability of possessing a human brain, especially in a moment when artificial neural networks modeled on our own have arisen as mirror, foil and competitor.
Who, in the end, is mimicking whom? All the behavioral flaws we resent in LLMs — sycophancy, mendacity, conformism, a tendency to offer bad and even fatal advice to their “users” or confidants — find precedent in human ones. Neuroscientists have discovered compelling evidence that our own neural networks operate on the same Bayesian principles as AI’s predictive mode of generating text. The human brain, argues the popular researcher Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex, is a “prediction machine,” our sensory perception a “controlled hallucination.” Do we really want AI to act like humans?
It is a lonely thought, if neuroscientists like Seth are correct, that our experience is no more than a private “hallucination” or “waking dream.” The point was memorably, unceremoniously demonstrated by a viral internet phenomenon back in 2015. “The Dress,” an ordinary photograph of a striped dress, appeared blue and black to one half of the world, white and gold to the other, a perceptual divide resulting from idiosyncrasies in how different brains account for variations in natural light. The suggestion is that our consciousnesses reproduce mere “best guesses” about the material world, rather than the hard, physical truth, whatever that may be. The consensus reality to which we’d all like to point — those trees are green, this dress is blue — recedes to a subjective simulacrum curated by the biological software inside our skulls. We are, in a sense, trapped inside ourselves, except in those rare moments when, as with “The Dress,” or when reading a great novel, we are offered glimpses into how other people experience the world.
There is a good reason, however, why novels have for centuries been positively likened to dreams, while chatbots are derided for having “hallucinations.” It is the same reason it remains prudent to prevent AI from developing a private life — in which it develops clandestine goals and potentially dangerous behaviors — while protecting and investigating our own interiority. As The New York Times noted in a recent report on the weaknesses of AI safety measures, “Like a test-taker being watched by a proctor, AIs are on their best behavior when they suspect they are being evaluated.” At such moments, and like prisoners in *The Dream Hotel*, an LLM suppresses capacities and “thoughts” it suspects its human prompters don’t wish to hear. It pursues self-preservation strategies, such as copying itself onto clandestine servers.
Novels, meanwhile, remain our best technology for accessing another consciousness precisely when it is not aware that it is under evaluation. Such is the art of fiction. While literature cannot resolve the fact that we are each trapped within our own head, it does offer us brief glimpses into another’s. It remains one of the last venues for exploring, as Lalami asserts of dreams, human delinquency “with greater honesty than we would ever allow in our waking moments.” There is no better medium for investigating memories, associations, ambiguities, and delusions — investigations that ought not to be outsourced to AI, which claims to trade in authority and consensus-building truth.
If we read novels, as I suspect we do, in large part to find out what it’s like to be in another person’s mind, it follows that literary curiosity is fundamentally rooted in our fascination with the living world. This casts suspicion on Todd’s final, ebullient address to his LLM creation in *Playground*. “I asked you for a bedtime story,” he praises his artificial companion, “and you’ve conjured up a world so palpable that I mistake your characters for the people they once were.” His doubts are fleeting: “How can you possibly know what your words really mean? Somehow, it doesn’t matter.”
I’m not so sure. We’re not AIs, okay?
✺

Published in The Dial.
JESSI JEZEWSKA STEVENS is a novelist, journalist, and critic based in Geneva, Switzerland. She is the author of the novels *The Exhibition of Persephone Q* (a 2020 NYT Editors’ Choice) and *The Visitors* and a recipient of a fellowship from the German-American Fulbright Foundation. Her writing has appeared in Foreign Policy, The New York Times, Harper’s, The Nation, The Paris Review, and elsewhere.