Abstract
This essay argues that large language models (LLMs), such as GPT-type transformer architectures, actualize Jacques Derrida’s concept of différance. Originally introduced within the context of poststructuralist theory and semiotics, différance designates the way meaning is produced through a system of differences and deferrals, rather than stable reference. Drawing on this framework, the essay examines how LLMs generate meaningful content by calculating statistical differences across massive textual corpora—foregrounding processes of spacing, temporalization, and trace. It proposes that LLMs can be understood as “différance engines” that computationally enact the very mechanisms Derrida theorized. In addition to tracing these points of intersection, the essay reflects on the philosophical consequences of this alignment, including challenges to logocentrism, authorship, and the metaphysics of presence. It then addresses three potential criticisms of this approach, arguing that the use of Derrida’s work in this context is not a misappropriation, but a continuation and reiteration of its logic. And it concludes by identifying three systemic limitations and by charting opportunities for future research in this domain. The essay thus shows, on the one hand, how LLMs can be read through poststructuralist theory, and on the other, how poststructuralist theory can be clarified and rendered accessible through the technical operations of contemporary AI.
Data availability
No datasets were generated or analysed during the current study.
Notes
This did, as one might anticipate, create some trouble for Derrida in the course of delivering the Différance lecture to the French Society of Philosophy. Since the “discrete graphical intervention” that distinguishes différence from différance cannot be spoken or made audible, Derrida provided his audience with the following warning: “In effect, I cannot let you know through my discourse, through the speech being addressed at this moment to the French Society of Philosophy, what difference I am talking about when I talk about it. I can speak of this graphic difference only through a very indirect discourse on writing, and on the condition that I specify, each time, whether I am referring to difference with an ‘e’ or différance with an ‘a.’ Which will not simplify things today, and will give us all, you and me, a great deal of trouble, if, at least, we wish to understand each other” (Derrida 1982, 4). Peter Salmon (2021, 78), in his well-received biography of Derrida, adds that subsequent generations of scholars, especially in the English-speaking world, have, for better or worse, often tried to make this difference audible through a deliberate (and unfortunately often clumsy) affectation that seeks to “sound hyper French when pronouncing it.”
What is crucial in this context is a quotation taken from Saussure’s posthumously published Course in General Linguistics and mobilized by Derrida in a number of different texts: “Everything that has been said up to this point boils down to this: in language there are only differences. Even more important: a difference generally implies positive terms between which the difference is set up; but in language there are only differences without positive terms” (Saussure 1959, 117).
The term simulation admits of a number of different and not necessarily compatible definitions. Etymologically, the verb simulate, from the Latin simulare, indicates “to copy,” “to imitate,” or “to feign.” In computer science and related disciplines, the nominal form of the word simulation refers to using computer software to mimic the behavior of a real-world system or process. It involves creating a computer model that represents a system and then running the model to observe its behavior under different conditions. There is also a specific denotation of the word that is developed in poststructuralist theory, especially (but not exclusively) in the work of Jean Baudrillard. “Simulation,” Baudrillard (1983, 1) wrote in his now famous eponymous essay, “is no longer that of a territory, a referential being or substance. It is the generation by models of a real without origin or reality.” This version of simulation is not simply the opposite of what is operationalized in computer science. It is its deconstruction. In the context of this sentence, simulation is used in the computer science sense of the word, though that use is accompanied or, perhaps better stated, haunted by both the word’s etymology and its deconstruction in Baudrillard’s work.
Even when these models are trained on other kinds of media content—as in contemporary multimodal systems—this content is a kind of writing insofar as it is recorded and preserved in the sign system of digital data.
Derrida directly addresses this scene of writing and the written dialogue in which it appears in the essay “Plato’s Pharmacy” (Derrida 1981b).
These three items come directly from the anonymous reviews of the initial submission. I note this for two reasons: 1) to express my gratitude to the reviewers for their insight and for the work they put into reading and responding to the initial draft, and 2) to recognize the continued importance and viability of the peer review process in academic publishing.
References
Aristotle (1938) Categories, On interpretation, Prior analytics (trans: Cooke HP). Harvard University Press, Cambridge, MA
Barthes R (1978) The death of the author. In: Image, music, text (trans: Heath S). Hill & Wang, New York, pp 142–148
Baudrillard J (1983) Simulations (trans: Foss P, Patton P, Beitchman P). Semiotext(e), New York
Bender EM, Gebru T, McMillan-Major A, Shmitchell S (2021) On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (FAccT ’21), March 3–10, 2021, virtual event, Canada. ACM, New York
Bogost I (2022) ChatGPT is dumber than you think. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386.
Bradley A (2011) Originary technicity: the theory of technology from Marx to Derrida. Palgrave Macmillan, New York
Coeckelbergh M, Gunkel DJ (2025) Communicative AI: a critical introduction to large language models. Polity, Cambridge
de Saussure F (1959) Course in general linguistics (trans: Baskin W). Peter Owen, London
Derrida J (1976) Of grammatology (trans: Spivak GC). The Johns Hopkins University Press, Baltimore, MD
Derrida J (1978) Writing and difference (trans: Bass A). University of Chicago Press, Chicago
Derrida J (1981a) Positions (trans: Bass A). University of Chicago Press, Chicago
Derrida J (1981b) Dissemination (trans: Johnson B). University of Chicago Press, Chicago
Derrida J (1982) Margins of philosophy (trans: Bass A). University of Chicago Press, Chicago
Derrida J (1993) Limited Inc. Northwestern University Press, Evanston, IL
Derrida J (2008) The animal that therefore I am (ed: Mallet M-L; trans: Wills D). Fordham University Press, New York
Fares M, Kutuzov A, Oepen S, Velldal E (2017) Word vectors, reuse, and replicability: towards a community repository of large-text resources. In: Tiedemann J (ed) Proceedings of the 21st Nordic conference on computational linguistics (NoDaLiDa), 22–24 May 2017. Linköping University Electronic Press, Linköping
Gunkel DJ (2021) Deconstruction. MIT Press, Cambridge, MA
Harris R (2003) Saussure and his interpreters. Edinburgh University Press, Edinburgh
Hicks MT, Humphries J, Slater J (2024) ChatGPT is bullshit. Ethics Inf Technol. https://doi.org/10.1007/s10676-024-09775-5
Josephson-Storm JA (2017) The myth of disenchantment: magic, modernity, and the birth of the human sciences. University of Chicago Press, Chicago, IL
Kaplan J (2025) Generative artificial intelligence: what everyone needs to know. Oxford University Press, New York
Plato (1982) Euthyphro, Apology, Crito, Phaedo, Phaedrus (trans: Fowler HN). Harvard University Press, Cambridge, MA
Salmon P (2021) An event, perhaps: a biography of Jacques Derrida. Verso, London
Searle J (1999) The Chinese room. In: Wilson RA, Keil F (eds) The MIT encyclopedia of the cognitive sciences. MIT Press, Cambridge, MA, pp 115–116
Turing A (1950) Computing machinery and intelligence. Mind 59(236):433–460. https://doi.org/10.1093/mind/LIX.236.433
Weatherby L (2025) Language machines: cultural AI and the end of remainder humanism. University of Minnesota Press, Minneapolis, MN
Weil E (2023) You are not a parrot: and a chatbot is not a human: and a linguist named Emily M. Bender is very worried what will happen when we forget this. New York Magazine, March 1. https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html.
Wittgenstein L (1995) Tractatus logico-philosophicus (trans: Pears DF, McGuinness BF). Routledge, New York
Author information
Authors and Affiliations
Northern Illinois University, DeKalb, USA
David J. Gunkel
Contributions
DJG researched, wrote, and reviewed the entire manuscript.
Corresponding author
Correspondence to David J. Gunkel.
Ethics declarations
Conflict of interest
The author declares no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Gunkel, D. The différance engine: large language models and poststructuralism. AI & Soc (2025). https://doi.org/10.1007/s00146-025-02640-z
Received: 12 April 2025
Accepted: 16 September 2025
Published: 25 September 2025
Version of record: 25 September 2025
DOI: https://doi.org/10.1007/s00146-025-02640-z