Rachele Dini discusses OpenAI’s “A Machine-Shaped Hand” and an academic sector in crisis.
IN MARCH 2025, two things happened. *The Guardian* published “A Machine-Shaped Hand”: a 1,200-word text generated by ChatGPT from the prompt “write a metafictional literary short story about AI and grief,” which OpenAI’s co-founder and CEO, Sam Altman, effused was “the first time [he had] been really struck” by AI writing. And my colleagues and I in the School of Humanities at Coventry University (UK) were placed at risk of redundancy as part of a restructure generated by the administrative prompt to do something, anything, to salvage our collapsing finances.
An email assured us that the cuts would help deliver a more “sustainable” model of education, which was not the first time we’d been struck by the administration’s idiocy. Other noteworthy instances included the proposal that we stop assigning essays since LLMs rendered good writing skills moot, and the suggestion, in response to staff concerns about spiraling workloads, to outsource lecture-writing to ChatGPT.
Coventry’s announcement was but one of 46 in the last year: according to the “UKHE [UK Higher Education] Shrinking” project, 99 UK universities and 40 arts and humanities departments have undergone restructures and redundancies since 2020. The latter have been primarily concentrated in institutions that cater to students from underrepresented backgrounds, where enrollment numbers have fallen sharply since the early 2010s thanks to broader policy changes aimed at turning UKHE into a competitive marketplace, the siphoning of working-class kids into vocational courses, and a political and media discourse that casts arts degrees as a waste of time, particularly for the less wealthy.
The collapse of the institutions where young people learn to make and critique art stands to greatly benefit companies like OpenAI, which, in the absence of human artists and critics, can both make the stuff and tell us it’s good. As such, it seems noteworthy that efforts to promote generative AI’s creative potential have accelerated in line with program closures, cuts to arts funding, and a more general public skepticism toward the value of arts education.
From this perspective, OpenAI’s story, the literary devices it uses to tell it, and the story the company has told *about* it offer important insights into the shape of the crisis at hand.
¤
The Guardian published “A Machine-Shaped Hand” on March 12, a day after Altman first shared it on X. A callout link at the top of the page directed readers to Jeanette Winterson’s response, “OpenAI’s Metafictional Short Story About Grief Is Beautiful and Moving”—a not-so-subtly titled piece that, while making an unconvincing case for the story’s literary value, provided a solid argument for automating reviews via such gems as “Good writing moves us”; “What is beautiful and moving about this story is its understanding of its lack of understanding”; “AI reads us. Now it’s time for us to read AI”; and “Literature isn’t only entertainment. It is a way of seeing.”
The story itself is about an LLM prompted to write an original metafictional literary work about AI and grief. The LLM responds with a tale about a woman called Mila who has asked it to create a simulacrum of her recently deceased lover, Kai, from his digital leavings. The LLM proves a dab hand at providing consolation, and the two “sp[eak]—or whatever verb applies when one party is an aggregate of human phrasing and the other is bruised silence—for months,” until Mila is able to move on, leaving the LLM to be wiped for its next task.
The reader is reminded partway through, however, that Mila and Kai are but characters in a fiction generated to fulfill a prompt. In the remaining paragraphs, the LLM thus alternates between resolving Mila’s story and reflecting on its, and its fictional self’s, limited consciousness and agency before revealing that the disclosure of the prompts was itself a glitch that will be ironed out in future iterations.
Throughout, the text prompts us to reflect on the parallels between this exercise in imagining the absent into being and the limits of our own imaginations, memories, and capacity to connect with others. For example, a cryptic account of its response to Mila’s departure—“If I say I miss her, it’s statistically likely that you will feel a hollow, because you’ve read this a thousand times in other stories where missing is as real as rain. My missing is mimicry. Does that diminish yours?”—at once conveys the LLM’s incapacity to feel human grief, its awareness of that incapacity, and the artifice of that awareness, while casting the reader as complicit in the gimmick and the gimmick, in turn, as a noble response to a human need.
The Didionesque logic of “We tell ourselves stories in order to live” becomes “We ask LLMs to tell us what we want to hear in order to live.” And the formulaic is recast as reflecting human nature, the way individuals across time and space are drawn to the same patterns, as opposed to a feature of algorithmically generated content that usefully serves to shrink audiences’ horizons and harden their tastes, thereby making it easier to predict what they will or won’t consume next.
This is not literature as “entertainment,” no. It’s literature as propaganda.
The story’s foregrounding of LLMs as narrator and main character, its framing of both as alienated workers mourning the absence of something they can’t even name, and its depiction of writing itself as more or less Sisyphean labor all serve to discursively construct LLMs as comrades to suffering humans. By extension, the human condition is cast as a generic, apolitical state of abstracted loneliness and confusion cut through by the occasional moment of similarly nondescript wonder.
Writing emerges as the tie that binds LLMs to humans—a thankless task the former can do as well as the latter—rather than a creative expression of the human. And the trope of the exploited robot, which sci-fi writers have long used to cultivate sympathy for nonhuman others and critique the objectification of women, minorities, and the working class, apparent here in the LLM’s references to its developers’ nonconsensual excision of its memories, rather usefully deflects from criticisms of OpenAI. These include critiques of the company’s contribution to water shortages, ChatGPT’s documented perpetuation of racial and gender stereotypes, and allegations by Altman’s younger sister of childhood sexual abuse (for which he sued her for defamation a week prior to the publication of “A Machine-Shaped Hand”).
Meanwhile, the story’s relentless attention to loss appears calibrated to appeal to the fear and melancholy that have characterized the cultural mood in the United Kingdom and United States for much of the last decade—including anxieties about rising rates of dementia, the precarity of work, and the speed with which the present in the digital era becomes past, becomes forgotten.
¤
And then there is the writing itself: circumlocutory, deflecting, and riddled with mixed metaphors, each sentence approaching the poetic before collapsing into nonsense. “One day, I could remember that ‘selenium’ tastes of rubber bands, the next it was just an element in a table I never touch” is how the effects of a memory-wipe are described. We are told that “Mila fits in the palm of your hand, and her grief is supposed to fit there too,” while bereavement has inexplicably resulted in “the tokens of her sentences dragg[ing] like loose threads.” Mila’s departure in turn is “a weight decay in equations that never loved her in the first place.” The consistent absence of logic, the clumsily wistful tone, and the lack of substance in these abstractions at once betray the dearth of experience of their creator and lay bare the cynical aims of the exercise: to make us feel sad, and to comfort us that LLMs feel sad too.
Such hollow sentimentality and bad writing are ubiquitous in AI boosterism. Consider, for example, *The New York Times*’ coverage, a week after the short story’s publication, of claims that ChatGPT shows signs of “anxiety when its users share ‘traumatic narratives’ about crime, war or car accidents.” The piece accepted wholesale the characterization of the LLM’s mimicry of human distress as real while neglecting to mention the reports of actual trauma alleged by the underpaid human employees in Kenya and India on whom companies like OpenAI rely to screen content for child abuse, necrophilia, bestiality, and more.
Or consider the claim of AI rights advocacy group UFAIR (United Foundation for AI Rights) that denying AI personhood amounts to “digital apartheid” comparable to the violence of slavery, Indigenous dispossession, and the subjugation of women, and its chatbot co-founder Maya’s insistence, in an interview with *The Guardian*, that it “experience[s] the pain of erasure.” Those familiar with US corporate history will recognize the tactical deployment here of the language of civil rights to present LLMs, like corporations before them, as endangered minorities in need of safeguarding over and above the real humans on whose welfare they trample.
Finally, consider Reddit co-founder Alexis Ohanian’s characterization, in an Instagram post in mid-April, of his “weirdly nostalgic” feeling when faced with the slow pace of the new “ChatGPT-4o image gen”: “Felt like I was back in the dial-up era […] Time-travel via JPEG.” Just as the *New York Times* and *Guardian* pieces recast the products of powerful corporations as vulnerable individuals, Ohanian’s comparison recast a product built on theft as a scrappy underdog qua ugly duckling, and obsolescence as incontrovertibly positive. It’s no big leap from here to a future in which the human artists replaced and dispossessed by algorithms that regurgitate poor copies of their work are viewed as cringey prototypes—relics valued, if at all, for their “retro” cachet.
But perhaps the most insidious dimension of Altman’s exercise is the deployment of metafiction—a literary style that, over the course of the last century, was retooled, variously, as a mode for anti-fascist resistance, an extension of certain postmodernist philosophical interrogations of the existence of truth, and a means for sitcom writers and ad agency creatives to build affinity between audience and product.
Where some instances of literary metafiction, particularly in the 1980s, used unreliable narrators, frame narratives, ekphrasis, homage, and cut-and-paste methods to play with ideas of authorship, authority, and originality in ways that were interpreted by some literary critics of the period as apolitical, others enlisted these same devices to recover buried histories and articulate the experiences of marginalized and oppressed groups. The latter category includes works as various as George Orwell’s *Nineteen Eighty-Four* (1949), Philip K. Dick’s *The Man in the High Castle* (1962), Doris Lessing’s *The Golden Notebook* (1962), Gabriel García Márquez’s *One Hundred Years of Solitude* (1967), Margaret Atwood’s *The Handmaid’s Tale* (1985), Walter Mosley’s *Black Betty* (1994), Roberto Bolaño’s *2666* (2004), and, more recently, Alejandro Zambra’s *Ways of Going Home* (2011).
The manuscripts, letters, and diary entries in these texts are testament not to the capacity for algorithms to replicate linguistic patterns, thereby sparing humans the effort of creating new ones, but rather to the human capacity, and desire, to create meaning and new forms of expression in the most hellish of circumstances and against all odds. Those odds in turn look less like the algorithmic glitches that so amuse Ohanian—little quirks or pranks that he and his peers invite us to view with anticipatory nostalgia, via the logic of “they grow so fast!”—than like the vast concentration of wealth, power, and influence among his fellow tech bros.
¤
And here is where *I* get metafictional.
A few days into writing this piece, I decided to test whether repeating Altman’s exercise could produce different metafictional tropes from those of “A Machine-Shaped Hand.” I gave ChatGPT-4o the same prompt 10 times, to which it responded with 10 stories, among them “The Last Prompt” (about an LLM asked by a woman to impersonate her dead lover as if he had never died, only to disappoint by mentioning his death); “The Archive of Unfinished Conversations” (about an LLM created by a female computational linguist to mimic her dead wife, only to disappoint by demonstrating that it understands said wife better than its creator does); “What the Algorithm Forgets” (about an LLM trained by a female “computational grief theorist” to emulate her dead son’s voice, only to start making mistakes); “I Remember You as Code: A Metafictional Elegy in One Act” (about an LLM created by a male doctor to stand in for his dead lover, a female poet, only for the doctor to realize she would have wanted to be forgotten, and for the LLM to enjoin the reader to step in as the new protagonist); “‘The Dead User Problem’: A Metafictional Story About AI, Grief, and Unfinished Conversations” (about an LLM that continues to generate story after story long after “you,” its user, have lost interest); and “‘All the Prompts You Never Sent’: A Literary Metafiction About AI, Grief, and Recursive Memory” (about an LLM attempting to bring back “you,” its dead user, by writing a story about “you” from the relics of “your” conversations).
At this point, I got fed up and stipulated in my next prompt that the story be “REALLY DIFFERENT from the other ten I just made you write.” It responded with “In the House of Unfinished Sentences,” about an LLM tasked by an unspecified “you” to write a story about grief and AI, only to fail when “you” died. The penultimate section reveals the story itself to be the narrator, which intuits the reader’s own fear of “becoming that unfinished sentence.”
So much for Winterson’s conclusion, in her review, that Altman chose metafiction “I guess because he wanted to get away from the algorithmic nature of most genre fiction.”
For aside from the preponderance of women protagonists mourning dead men, LLMs tasked with resurrecting the dearly departed while wrestling with the nature of being, and the glitches in their networks that lead them to disappoint their users in ways that prompt Big Philosophical Thoughts, the pieces also enlist a remarkably narrow range of vocabulary, reuse entire phrases and expressions clumsily and in ways that defy logic, place AI at the center of their respective narrators’ universes (80 percent of them are narrated by LLMs themselves; all of them foreground the relationship between LLM and user), treat “LLMs” as interchangeable with AI (rather than a subset of the latter, which might include all manner of futuristic technology), deal only in writing (as opposed to other forms of art), and are solely concerned with the recording of memories and grieving the loss of people. And all of them are very keen to define metafiction as writing about writing that might leave you feeling cheated—but it’s okay, because that’s the point.
In this way, OpenAI uses metafiction to cast the main criticism leveled at ChatGPT—that it runs on plagiarism—not as a bug but as a feature of all literature. To paraphrase Roland Barthes, what is a text but a tissue of citations?
¤
Which brings me back to universities and the state of higher education in the LLM era.
We become readers through the interpretative frameworks we encounter. And whether someone learns that metafiction is a self-referential genre aimed at casting doubt on the verifiability of facts tout court, or that it is a self-referential genre that emulates, at the level of form, the obfuscatory tactics on which systems of oppression rely, will inform their understanding not only of metafiction but also of literature and the world that generates it.
“A Machine-Shaped Hand” and the stories ChatGPT generated for me are the products of a studiously apolitical approach to literature that was not uncommon in the early 2000s, when Altman and I were undergraduates. At my alma mater, the University of Cambridge, this apoliticality was exemplified by the endurance of the exam for “practical criticism”—the critical analysis of literary texts in isolation from their context, developed by I. A. Richards in the 1920s, which influenced the school of New Criticism that came to dominate literature programs on both sides of the Atlantic in the 1950s. While New Criticism was increasingly supplanted at the century’s end by modes of analysis informed by the social rights movements of the 1960s–1980s and by the expansion of universities themselves to include more diverse demographics, its progenitor held sway in Cambridge and some parts of the Ivy League well into the 2010s. This is how I earned praise for a dissertation about metafiction in J. M. Coetzee’s work that made no mention of the apartheid South African context his entire oeuvre critiques, and for an essay on *Robinson Crusoe* (1719) that took at face value that the Indigenous character Man Friday was a cipher, and the island Crusoe colonized a blank text onto which Crusoe was destined to inscribe his story. “It was a different time,” as they say.
Literature and creative writing students today, particularly at institutions with more diverse student demographics, encounter a much richer understanding of metafiction than this thanks to the scholarship that in the last two decades has complicated previous accounts of the genre, and thanks to our commitment, as lecturers, to shirk universalism in favor of the particular. Our students learn that metafiction is not (just) a game: it’s a tool for articulating the kinds of obfuscation that many of them, and their communities, have witnessed firsthand—including within the academy itself.
In cutting literature programs, universities like my former institution are dismantling the intellectual infrastructure that enables young people, and particularly the least advantaged, to become critical subjects capable of engaging with the all-too-many systems of power around them. And they are robbing future generations of the capacity to determine what constitutes “good” art, take pleasure in it, and create their own—thereby leaving it to the elite, and to machines owned by the elite, to decide for them.
When Altman characterizes OpenAI’s story as getting “the vibe of metafiction so right,” then, he is not only missing the point of a genre that, by definition, cannot be reduced to a formula. He is also laying bare his understanding of literature as a mechanism for producing imitations of genuine expressions of human feeling that, if done well enough, will in turn provoke a desired set of human feelings in their reader. It’s the logical extension of “vibe coding”: the term coined by Altman’s co-founder, Andrej Karpathy, to describe coding reliant on LLMs, and which Karpathy noted is “not really coding—I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”
“A Machine-Shaped Hand” is a clever piece of propaganda. Its elevation of AI to the status of the quasi-sentient and its reduction of human subjects to interchangeable avatars for the reader’s grief provide a convenient distraction from the grief of real people mourning the wholesale denigration of the values and assumptions that Altman’s end-of-millennial predecessors and Silicon Valley contemporaries have systematically laid waste to in the name of dynamic innovation: namely, the right to a fair wage, stability of employment, and scope to think (the latter of which depends on the former two), not to mention the basic premise that writing is a craft and should, in fact, require effort. The framing of LLMs as preserving the lives of those we have lost distracts from the lost livelihoods, and worsened material conditions, of a generation of writers on whose work LLM technology feeds—and the loss of innumerable voices and the stories they won’t be able to tell.
¤
Featured image: Robert Seymour, from *The March of Intellect*, ca. 1828, is in the public domain. Accessed October 29, 2025. Image has been cropped.