Merriam-Webster named “slop” its 2025 word of the year, defining it as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” In its announcement, Merriam-Webster noted that, like “slime, sludge, and muck, slop has the wet sound of something you don’t want to touch.” Similarly, The New York Times observed that slop, in graphic terms, “conjures images of heaps of unappetizing food being shoveled into troughs.”
Slop is an umbrella term that encompasses a vast range of terrible AI-generated content, from videos to news stories to ads to books to work reports. It can look real enough, but often seems just a bit off (and sometimes jarringly so). And it tends to feel (or even be) cheap, derivative, or recycled. These qualities of slop can leave us feeling cold, disengaged, and anxious. As such, Merriam-Webster’s choice reflects a deeper psychological crisis about how AI-generated content is reshaping our emotional landscape.
The Slop-Doom Feedback Loop
In her recent article, Laura Glitsos (2025), a humanities scholar, wrote about “the experience of slop and doom … whereby experience is shaped by gothic undertones such as surreality, paranoia, suspicion, menace, and most of all, anxiety.” Glitsos quotes from a Reddit post that captured this malaise: “The internet makes me miserable for 80% of the time I’m on it, but I just can’t get out…. Have I developed mild to moderate anxiety from constant exposure to news and social media that indicate we’re headed to unavoidable collapse? Sure have.”
Glitsos outlined a circular process through which slop leads to doom by overwhelming our attention with “affective noise”; conversely, doom powers slop by coloring it with the threat of impending catastrophe. Her analysis parallels the emotional consequences of doomscrolling uncovered by psychology researchers (e.g., Taskin et al., 2024) and ties in with research on the effects of information overload.
The Normalization Paradox
Yoshija Walter (2024) writes about the possible psychological processes by which AI, including slop, is becoming normalized, as well as the possible psychological outcomes of this normalization. Walter summarizes research that shows normalization is occurring and highlights two mechanisms that may be underway. First, he describes a classic psychological process, the “mere exposure effect,” whereby people develop more positive views of people or things over time just by being exposed to them. This can be a good thing, in that people may increasingly perceive the ways in which AI can be helpful. Unfortunately, it can also lead us to downplay the risks of AI, including the possibility that we will become over-reliant on it in harmful ways.
Second, he describes the “black box effect,” whereby we develop negative emotions, such as unease and a sense of foreboding, when we don’t understand how something, such as AI, works. Walter notes evidence that these feelings are evoked even among some who work for tech companies that develop AI. We wonder how slop might affect these two processes. Will the positive aspects of normalization (driven by the mere exposure effect) be upended by growing cynicism, or will we become complacent in the face of a growing inability to separate truths from falsehoods? And will the unease and anxiety fueled by the black box effect be compounded by the growing presence of slop?
A Policy Role for Psychology
Walter concludes that “[a]s AI becomes more integrated into our daily lives, it is crucial to understand the associated psychosocial implications, especially concerning AI safety concerns.” These implications involve considerations of cognitive processes, emotional responses, and interpersonal interactions. Psychological science, which studies all of these areas, must be a part of any policymaking regarding AI in order for us to reclaim our agency.
References
Glitsos, L. (2025). Living in scroll land: Anxiety and the experience of slop and doom. M/C Journal, 28(5). doi.org/10.5204/mcj.3191
Taskin, S., Yildirim Kurtulus, H., Satici, S. A., & Deniz, M. E. (2024). Doomscrolling and mental well‐being in social media users: A serial mediation through mindfulness and secondary traumatic stress. Journal of Community Psychology, 52(3), 512-524. doi.org/10.1002/jcop.23111
Walter, Y. (2024). The future of artificial intelligence will be “next to normal” – A perspective on future directions and the psychology of AI safety concerns. Nature Anthropology, 2, 10001. doi.org/10.35534/natanthropol.2024.10001