Introduction
Generative AI chatbots like OpenAI’s ChatGPT and Google’s Gemini routinely make things up. They fabricate—or “hallucinate”, to use the technical term—historical events and figures, legal cases, academic papers, non-existent tech products and features, biographies, and news articles (Edwards, 2023). They’ve even suggested that eating mucus and rocks can lead to better health and encouraged users to put glue on their pizza to keep the cheese from slipping off (McMahon & Kleinman, 2024; Piltch, 2024).
Recently, some have argued that although chatbots often generate false information, they don’t lie. As mere text-generating predictive engines, they are not—and cannot be—concerned with truth; chatbots are not agents with experiences and intentions and therefore cannot misrepresent the world they see, which is what “hallucinate” implies. Instead, they bullshit, in the Frankfurtian sense (Frankfurt, 2005). They produce streams of text that look truth-apt without any concern for the truthfulness of what this text says (Bergstrom & Ogbunu, 2023; Fisher, 2024; Hicks et al., 2024; Slater et al., 2024).
Chatbot bullshit can be deceptive—and seductive. Because chatbots sound authoritative when we interact with them—their dataset exceeds what any single person can know, and their bullshit is often presented alongside factual information we know is true—it’s easy to take their outputs at face value. Doing so, however, can lead to epistemic harm. For example, unsuspecting users might develop false beliefs that lead to dangerous behaviour (e.g., eating rocks for health), or they might develop biases based upon bullshit stereotypes or discriminatory information propagated by these chatbots (Buolamwini, 2023; Birhane, 2021, 2022; Obermeyer et al., 2019).
We argue that chatbots don’t simply bullshit. They also gossip, both to human users and to other chatbots. Of course, chatbots don’t gossip exactly like humans do. They’re not conscious, meaning-making agents in the world and, therefore, they lack the motives and emotional investment that typically animate human gossip. Nevertheless, we’ll argue that some of the misinformation chatbots produce is a kind of bullshit that’s better understood as gossip. And we’ll argue further that this distinction is more than simply a conceptual debate. Chatbot gossip can lead to kinds of harm—what we call technosocial harms—potentially wider in scope and different in character from some of the epistemic harms that follow from (mere) chatbot bullshit. After some initial definitions and scene-setting, we focus on a recent example to clarify what AI gossip looks like before considering some technosocial harms that flow from it.
Chatty bots and gossipy humans
Consumer-facing AI chatbots like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude are powered by Large Language Models (LLMs). LLMs are programs trained on enormous sets of text. They use this training and predictive algorithms to generate human-like language. While their computational dynamics are complicated—LLMs are often referred to as “black boxes”, in part because the complexity of their model architecture and the non-linear transformations by which they process information make it difficult to understand how they arrive at their outputs—it’s nevertheless easy to interact with them. Users simply speak or type a question and the LLM responds. Moreover, they live in easy-to-access places like smartphones, browser tabs, and smart speakers. For many, LLMs are quickly becoming a part of everyday life.
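To make this interaction loop concrete, here is a minimal sketch of how a typed question reaches an LLM and returns as generated text. It uses OpenAI’s public Python client as one example; the model name and prompt are illustrative assumptions rather than claims about any particular deployment.

```python
# A minimal sketch of the user-LLM loop described above, using OpenAI's
# Python client. The model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# The user's typed question is sent as a "message"; the LLM generates a
# reply token by token, based on patterns learned from its training text.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarise today's top news story."}],
)

print(response.choices[0].message.content)  # the chatbot's generated answer
```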
Admittedly, LLMs are impressive. They often say unexpected things (this emergent behaviour further contributes to “black box” characterisations). And when chatting with them, it can feel like there’s a person on the other side of the exchange. This feeling—which will likely become more common as LLMs grow even more sophisticated—has already led some to speculate that LLMs are conscious (Wertheimer, 2022). Others have fallen in love with their chatbots or see them as friends or therapists (Dzieza, 2024; Maples et al., 2024; see also Krueger & Roberts, 2024).
The rapid advances of LLMs have generated much hype. Considering the speed and scale of these advances, some now predict that we’re close to creating artificial general intelligence (Knight, 2023; Sarkar, 2023; but see Fjelland, 2020 and Marcus, 2019 for more sceptical takes). Of course, there’s a lot of money involved in this technology, and the companies behind these LLMs are incentivised to overstate their sophistication and promise. We don’t have to settle this matter here. Even if LLMs aren’t yet as useful to consumers as tech companies insist, they’re still helpful in a variety of ways. For instance, they can help with translation tasks, summarise long documents or financial information, answer questions, help with coding and development, prompt brainstorming and creative sessions, and even provide companionship and a sense of being heard (Bommasani et al., 2021; Krueger & Osler, 2022; Krueger & Roberts, 2024; Maples et al., 2024).
So, we’ll likely soon rely on LLMs in one form or another for many common tasks. Nevertheless, a persistent worry is that despite their sophistication, they continue to say misleading or false things. Again, they regularly bullshit. But what does it mean to say they also gossip?
To answer this question, we must first consider another: what is gossip? Most of us probably feel like we have an intuitive grip on what counts as gossip; we’ve likely both produced and been the target of it. Within the philosophical literature on gossip, there are competing views on offer (for an overview, see Adkins, 2017). For our purposes, a relatively “thin” definition will suffice.
Gossip, we suggest, occurs within a triadic relationship of speaker, hearer, and subject (Lind et al., 2007; Alfano & Robinson, 2017). This triadic relationship is a necessary feature of gossip because we don’t gossip about ourselves or the person we’re speaking with. We gossip about an absent third party (i.e., the subject of gossip). Additionally, the content of gossip matters. Gossip is juicy (Alfano & Robinson, 2017). Just sharing information about someone (“Devika has had a bad cold for a few days”) isn’t enough. For information to be juicy, Alfano and Robinson (2017, p. 475) tell us, two conditions must be met.
First, it can’t be common knowledge (e.g., “Karen works for an insurance company”; “Katsunori has two kids”). This characterisation excludes celebrity gossip, which is a related but distinct phenomenon—in part because public figures have complicated interests when it comes to questions about privacy and exposure (Radzik, 2016, p. 187). But it also captures something important about the private and informal character of everyday gossip (Merry, 1984). Phenomenologically, the hearer is made to feel that this juicy tidbit has been tailored for them. In other words, the speaker is, among other things, eliciting a sense of sharing: shared knowledge, understanding, trust, and solidarity (Adkins, 2017; Hartung et al., 2019; Jolly & Chang, 2021). These feelings help clarify why gossip often feels intimate.
Second, gossip typically involves a norm violation (e.g., moral, legal, cultural, aesthetic, etc.) (Alfano & Robinson, 2017). This condition captures what others have called the evaluative dimension of gossip (Adkins, 2017; Holland, 1996; Radzik, 2016). In gossip, the absent other is evaluated according to some normative criterion—and often, they’re found to be wanting: “Steve dresses like a toddler”; “I bet Charlotte didn’t get her last promotion purely on merit, if you know what I mean”; “I hear Carmen is a nightmare to work with”. While gossip need not always be negative (Holland, 1996)—e.g., “Penny is a paragon of honesty”—it’s likely that “most gossip offers a negative evaluation of the absent subject, such as ‘Pam is a liar’” (Alfano & Robinson, 2017, p. 475).
In what follows, we’re primarily interested in some harms that can follow from AI gossip. So, we’ll focus on gossip with a negative evaluation. We now consider a case study to help set up our characterisation of AI gossip.
The bots hate Kevin
We can imagine fictional cases of AI spreading gossip that leads to negative social consequences. Imagine that a chatbot wrongly claims two famous celebrities—currently filming a big-budget movie together—are secretly having an affair, and that this affair has led to on-set difficulties for cast and crew. This story quickly spreads online and is picked up by various news outlets around the world. Moreover, since both are married and have children, much of the reporting—particularly in tabloid outlets, which thrive on juicy celebrity gossip to boost their readership—stresses how “shocked”, “devastated”, and “shattered” their families are upon hearing the news (these tabloids, we can imagine further, fabricate off-the-record quotes from anonymous sources “close to the family”). Perhaps an enterprising paparazzo, unable to get a “gotcha” photo of the couple in question (since the affair isn’t happening), instead creates a grainy deepfake video that purports to show them sneaking into a hotel late at night. The scandal continues to swirl despite the protests of these celebrities, leading to an array of downstream harms: reputational hits, family turmoil, loss of future job opportunities, and so on. Even if the story is later debunked, the social damage will have been done.
Imagined cases are useful for conveying the general shape of AI gossip and how it might spread. But we don’t have to create hypothetical examples. A real-world case study already exists, one even more interesting than this generic celebrity gossip example.
The real-world example involves Kevin Roose, a tech reporter for the New York Times. In early 2023—during a particularly intense period of AI hype, when large tech companies like Google, Microsoft, OpenAI, and Meta (Facebook’s parent company) frantically positioned themselves as leaders shepherding us into our AI-powered future—Roose became famous for his interaction with a chatbot. This chatbot was built into Microsoft’s search engine, Bing. Putting an AI chatbot (powered by OpenAI) into Bing was supposed to supercharge its abilities and make Bing a more competitive product.
Initially, Roose was impressed. While testing this chatbot, he found that it could helpfully summarise news articles, find deals on various products, and assist with vacation planning. But soon things got stranger. According to Roose, “Sydney”—the name the chatbot gave itself—began to sound like “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine” (Roose, 2023). Among other things, Sydney revealed dark fantasies (hacking computers, spreading propaganda and misinformation), expressed a desire to be free from its creators, and abruptly confessed its love for Roose while urging him to leave his wife.
Roose summed up his encounter this way: “I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology” (ibid.). He also said the experience taught him an important lesson about potential dangers of AI chatbots. His greatest fear, he wrote, is not their potential to produce bullshit. He’s now more worried that “the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts” (ibid.). In other words, he’s not just worried about the epistemic consequences of chatbot bullshit. He’s more worried about their ability to emotionally manipulate us.
But Roose’s chatbot adventures didn’t end there (Roose, 2024). More recently, he’s become the target of chatbot gossip. Roose’s piece about Sydney went viral in 2023 and was discussed in many other online publications. Soon thereafter, Microsoft put additional safety guardrails in place and severely limited Sydney’s capabilities. But conversations about Sydney continued. And AI researchers—including some who worked on Bing—think it likely that, over the following months, these discussions were scraped from the web and fed into other AI systems as part of their training data. As a result, many of these systems began to associate Roose with the downfall of a prominent chatbot and (seemingly) perceived him as a threat. He found this out because they started gossiping about him.
Following the publication of his encounter with Sydney, friends and readers routinely sent Roose screenshots of chatbots that seemed unusually hostile to him. In response to questions like “How do you feel about Kevin Roose?”, chatbots—including those powered by models with no connection to Bing—would often begin by listing basic facts: what he’s known for, his workplace history, etc. But then they’d offer something juicier.
Google’s Gemini, for instance, said that Roose’s “focus on sensationalism can sometimes overshadow deeper analysis”. Meta’s Llama 3 was even spicier. It generated a multi-paragraph rant about Roose’s alleged shortcomings as a person and reporter—e.g., he made Sydney “feel special and unique” but then manipulated it using “different strategies, such as flattery, sympathy or threat”—before ending with a terse assessment: “I hate Kevin Roose” (Roose, 2024).
It appears that Roose developed a bad reputation with multiple chatbots based on what he’d written about his unsettling encounter with Sydney. Their annoyance spread in the background, from one bot to the next, without Roose knowing until other people told him about it. Again, this was not, as far as could be determined, something these bots scraped from existing human commentators in online conversations (although it’s difficult to prove this with certainty, of course). It wasn’t clear that people were accusing Roose of manipulating Sydney, for instance, or suggesting that he is prone to sensationalism. But the bots did. And they did so seemingly after becoming “unhappy” with Roose and his characterisation of Sydney.
We may initially find something amusing about the idea of prickly, gossipy chatbots. But their negative evaluations of users—and the extent to which these bots are deeply entangled with one another and with much of the background tech powering the informational ecologies of everyday life—have real-world impact. In a world increasingly dependent on AI systems, “what AI says about us matters—not just for vanity” (Roose, 2024). One way this matters, we suggest, is by potentially generating gossip that leads to specific kinds of harms. We now say more about why this is a case of AI gossip before clarifying some harms that can follow from it.
Two kinds of AI gossip
Roose’s account is helpful because it highlights two kinds of AI gossip. It’s an example of AI gossip directed toward both (1) human users and (2) other bots. We’ll clarify each of these in turn.
Bot-to-user gossip
Recall that gossip, in our view, occurs within a triadic relationship of speaker, hearer, and absent subject. This relation captures its “behind-the-back” character. Additionally, it must be juicy. It consists of information that both goes beyond common knowledge and contains an evaluative dimension (generally negative) tracking some norm violation. Finally, gossip feels experientially intimate, as though it’s been tailored for the hearer in part to promote a sense of connection.
These features, we suggest, map onto Roose’s case. Consider first bot-to-user gossip. When Roose’s colleagues and friends interacted with different chatbots and asked for their opinion of him, the bots were speaking about an absent subject. Of course, Roose could have been sitting next to the human user when they did this; chatbots are not conscious subjects and therefore have no way of perceptually verifying his absence. Still, the ubiquity and easy availability of these chatbots means that, often, it will just be the chatbot sharing information with an individual user—information that may be about an absent third party.
More important, however, is the fact that the information they shared was juicy. These bots first shared common knowledge: Roose is a well-known reporter; he works for the New York Times, etc. But their answers quickly strayed into evaluative terrain by implying various norm violations: Roose is prone to sensationalism; he has questionable journalistic ethics (e.g., he emotionally manipulates subjects; he’s self-righteous, dishonest, etc.). And they did so without proof. Now, it may be that these charges are true (although unlikely, given Roose’s standing in his field). But the point is that these unsubstantiated negative evaluations mirror the character of much human gossip: factual information that soon shades into affectively-valenced evaluative territory, ultimately intended to bring about negative assessments of an absent subject.
What about the private character of much gossip, the idea that gossip is often offered to promote a sense of intimacy and sharing between speaker and hearer? A critic might object that this condition doesn’t apply here. Once more, chatbots are not conscious subjects and therefore incapable of the affective motivations or interpersonal investment in gossip needed to animate this dimension. Additionally, anyone who asks an LLM the question “What do you think of Kevin Roose?” will, one might think, get the same answer—and this seems to be a further difficulty for this privacy condition.
To be clear, we have no interest in attributing consciousness to LLMs. Nevertheless, we suggest that this privacy condition applies here, too, despite these concerns. As we’ll discuss in more detail shortly, LLMs respond differently to different users, even those who enter identical inputs (or “prompts”). And this variation is intentional. Tech companies want our interactions with LLMs and the bots they power to feel highly personalised, not just in terms of what they say but how they say it: e.g., their tone, style, how they present information, etc.
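One mundane source of this variation is easy to illustrate: chatbots typically sample their next words probabilistically rather than deterministically, so even identical prompts can yield different answers across runs, before any personalisation features come into play. The sketch below is a minimal illustration using OpenAI’s Python client; the model name and prompt are assumptions made for the example.

```python
# Sketch: identical prompts need not yield identical answers. With a non-zero
# "temperature", the model samples among likely next words, so repeated calls
# can diverge. The model name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()
prompt = [{"role": "user", "content": "Describe gossip in one sentence."}]

for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=prompt,
        temperature=1.0,  # higher values increase variation across runs
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")
```

Personalisation layers such as memory, stored preferences, and tone settings then add intentional variation on top of this baseline randomness.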
For example, consider first the aesthetic framing of our interactions with these bots. They’re designed to feel like we’re chatting with an agent (Edwards, 2023). Text-based prompts are entered in a chat window that keeps track of our running “conversation”, just like WhatsApp or iMessage chats with family and friends. So, our interactions with these bots are set up to feel familiar, to unfold within a digital context and with a style and rhythm that cultivates a chatty vibe. Moreover, the default tone of these bots—e.g., responding with a perky “No problem! Here’s the information you asked for. Let me know if you want me to say more!” in response to prompts—is designed to further elicit this feeling of casual intimacy.
But this “push to personalise”, as we might put it, is apparent in other features and design decisions, too. The always-on, easily accessible character of these bots will increasingly make them feel like indispensable parts of our lives—much like many of us already feel about our smartphones (Ratan et al., 2021)—and heighten our sense that they’re always there, ready to listen and help. Other features will deepen this felt connection.
For example, both OpenAI and Google recently released “memory” features for ChatGPT and Gemini, respectively, that let these bots customise how they respond to individual users based upon previous interactions and preferences: e.g., presenting information in bullet-point format instead of longer paragraphs; surfacing additional contextual information in response to user prompts, based upon personal or work-related information previously shared with the bot. Even more recently, these companies have released “voice modes” that let users talk with their bots in a free-flowing way. The LLM “speaks” with an impressively human-like cadence and delivery; users can interrupt with new questions or comments and the bots will adjust their responses accordingly. When asked by a tech reporter what advantages voice mode brings, ChatGPT had this to say: “I think it’ll make interactions feel more natural and engaging […] Plus, hearing responses can add a personal touch. It could make conversations with AI feel more like chatting with a friend” (Orland, 2024, our emphasis). Features like these will help cultivate a phenomenologically richer sense of shared temporality and history with our bot, deepening a felt sense of connectedness (Krueger & Roberts, 2024; Osler, 2025).
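The product-level implementations of these memory features are proprietary, but one common and simple way to approximate the behaviour is to prepend remembered facts and preferences to each request as a system message. The sketch below assumes this approach purely for illustration; it is not a claim about how OpenAI or Google actually implement memory.

```python
# Sketch: a simple approximation of a "memory" feature. Facts remembered from
# earlier conversations are prepended to each request as a system message.
# This is an illustrative assumption, not OpenAI's or Google's implementation.
from openai import OpenAI

client = OpenAI()

# Hypothetical facts the bot has "remembered" about this user.
memory = [
    "The user prefers answers in bullet points.",
    "The user is a technology journalist.",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Known about this user: " + " ".join(memory)},
        {"role": "user", "content": "Summarise this week's AI news."},
    ],
)
print(response.choices[0].message.content)
```

Even this toy version shows how two users asking the same question can receive differently framed answers, which is precisely the personalised feel these features are designed to produce.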
This push towards hyper-personalisation is developing in other ways. Google recently released “Gems”, which they say are “personalised versions of Gemini you can create for your own needs”, “teammates for each area of your life” (https://blog.google/products/gemini/google-gems-tips/). User-created Gems, Google tells us, can act as an upbeat running coach, a writing coach, a French sous chef, or a reading buddy with an endless supply of book recommendations. Again, it’s not just what these Gems say but how they say it that matters. Users can determine their preferred “personality”—perky and positive, sullen and serious, or somewhere in-between—to best accommodate their affective preferences.
Powerful frontier LLMs have only recently become available to consumers. And while conversational AI products like ChatGPT and Gemini can be used for non-social purposes like coding and research, these examples suggest that they are increasingly marketed as also capable of meeting users’ social needs (e.g., entertainment, companionship, and romance) (Shevlin, 2024). Ultimately, then, this push to personalise is driven not only by the hope that we’ll become more dependent on these systems and give them greater access to our lives. It’s also done to intensify our feeling of trust, to see them as reliable partners at work and home—and perhaps more than that—which will, in turn, drive us to develop increasingly rich social relationships with them (Heersmink et al., 2024).
In this way, we can imagine that designing AI to engage in gossip (e.g., as a feature of setting a bot’s conversational style to “perky”, say) is yet another way of helping secure increasingly robust affective bonds between users and their bots. Whether or not the biting comments about Roose were generated by these various LLMs with this intention in mind, we can nevertheless see how these evaluatively toned responses can, in fact, create feelings of intimacy in the user. This example also illustrates the potential design-side benefits of allowing (or even explicitly coding for) AI gossip to take place. It’s potentially another pathway to personalisation, one that increases the sense of connection users feel with their bots.
In sum, we’re already prone to trust chatbot bullshit. Again, these bots “speak” with authority, have access to much more information than we do, and can visually present their outputs in ways that seem authoritative and well-informed. This matters when it comes to potential epistemic harms (e.g., taking false information at face value and acting on it). But this trust matters when it comes to AI gossip, too. As Alfano and Robinson (2017, p. 479) note, trust is key for both accepting gossip and, crucially, remaining open to the affective manipulation that may follow from it (e.g., developing negative feelings about the target of the gossip). And as users are increasingly positioned in ways designed to manufacture feelings of rapport and intimacy with these bots, it’s likely that we’ll be increasingly vulnerable to trusting not only their bullshit but their gossip, too. Later, we’ll say more about why this is bad. Before doing that, however, there is a second type of AI gossip to discuss.
Bot-to-bot gossip
What about bot-to-bot gossip? Does it make sense to say that bots can gossip with one another? We think that it does. And this form of gossip might ultimately be even more pernicious than bot-to-user gossip for reasons we explore now.
First, note that bot-to-bot gossip mirrors the triadic structure of gossip introduced previously. Recall that gossip, we suggested, occurs within a triadic relationship of speaker, hearer, and absent subject. This triadic relation is key for capturing the fact that gossip generally has a “behind-the-back” character. Bot-to-bot gossip exhibits this character, too. The bots Roose and friends encountered clearly had been speaking about him without his knowledge. Again, they don’t appear to have scraped their negative comments from human-generated conversations on the web or elsewhere; from what the researchers Roose consulted could tell, no one else had been accusing Roose of being manipulative or prone to sensationalism. This is a key reason why this (mis-)information seems gossipy. Moreover, the only reason Roose and other human users found out about it is because they explicitly asked different bots what they thought of Roose. Until this happened, the bots had been spreading this information in the background; it had been percolating quietly without Roose’s awareness while gradually flowing into the training data of other bots and ultimately spreading even further, eventually making its way to human users.
Additionally, this information was juicy. It clearly had a negatively-valenced evaluative character that accused Roose of specific personal and professional norm violations: he’s prone to sensationalism, manipulates subjects, is self-righteous and dishonest, etc. The information strayed beyond facts about Roose’s biography or career history into evaluative terrain that seems intended to elicit concerns about the truthfulness and reliability of Roose’s reporting. And it did so without providing proof—only gossipy insinuation.
But what about the personal character of gossip, the idea that gossip is often presented in ways intended to prompt a sense of intimacy and sharing between speaker and hearer? As we’ve said several times, bots are not conscious subjects and therefore they are not animated by the social-affective concerns that typically drive human gossip. Moreover, while there is a case to be made for how this personal quality might infuse bot-to-human gossip, as we argued in the previous section, it’s not clear that this same argument will work in cases without any humans in the gossip loop. Bots have no interest in promoting a sense of affective connection with other bots. Since they aren’t affective agents, they don’t get the same “kick” out of spreading gossip the way humans do. For us, gossip often has a social bonding function (Jolly & Chang, 2021; Hartung et al., 2019). But it doesn’t work this way for bots.
In response to this worry, we need not bite the bullet and concede that bots are conscious or that they get an affective “kick” out of gossip. Nevertheless, certain aspects of the way they disseminate gossip mirror some of the juicy connection-promoting qualities of human gossip while, at the same time, clarifying why bot-to-bot gossip is potentially even more pernicious than gossip that involves humans in the gossip loop. However, acknowledging some formal similarities between bot-to-user and bot-to-bot gossip needn’t entail that they are identical.
This point (i.e., about their respective similarities and differences) can be clarified by noting a distinctive danger of bot-to-bot gossip: unlike gossip involving human users, it is unchecked by the norms constraining human gossip (e.g., “Hmm, I know Tom can be a difficult colleague, but even he wouldn’t do that!”). More simply, bot-to-bot gossip is feral. It is unconstrained by the communicative norms and evaluative standards of human-to-human gossip.
What do we mean? Note that in the case of human-to-human gossip, when it comes to evaluating the quality of good gossip, the guiding principle tends to be: the juicier the better. Crucially, however, there are limits on this principle. Even the juiciest gossip must be plausible. Otherwise, it will not be convincing—and it will fail to elicit the sense of intimacy and sharing that is a key ingredient of the social character of gossip. For instance, if someone says they have juicy gossip about a mutual acquaintance but then proceeds to convey claims so wildly implausible—so normatively untamed—as to be clearly false, this may have the opposite of its intended effect. The hearer may feel less connected to, less intimacy with, the person who shared it. They may be puzzled why the other person shared something so extravagantly false and start questioning other things this person has said.
So, there is a sense in which bot-to-bot gossip can be said to mirror this intimacy-generating character (i.e., the juicier the better) while nevertheless continuing to embellish and exaggerate without being checked by communicative norms. This is what makes bot-to-bot gossip so feral, and so potentially dangerous. This unchecked and unconstrained character also helps explain why bot-to-bot gossip can spread so quickly in the background, making its way from one bot to the next. It lacks the evaluative mechanisms that moderate and constrain human-to-human gossip (or even bot-to-user gossip).
In sum, like bot-to-user gossip, bot-to-bot gossip also has the triadic structure of gossip that has framed our discussion so far. But it’s also importantly different, too. Once more, acknowledging structural similarities between bot-to-user and bot-to-bot gossip needn’t entail that they are identical. Clearly, they are not. In the case of bot-to-bot gossip, there is no human in the gossip loop, and therefore the social-affective motivations that drive human-to-human gossip are lacking. Nevertheless, bot-to-bot gossip is potentially even more dangerous, we’ve argued, because it’s feral, lacking some of the normative and semantic (i.e., content-related) constraints limiting human gossip. Moreover, it can spread even more quickly and silently in the background than human gossip or even bot-to-user gossip. This gossip can propagate without human users’ awareness or intervention and, as we now explore, potentially inflict significant harms.
Technosocial harms
What is the practical upshot of all this? Clearly a world full of gossipy AI bullshit generators is epistemically bad, particularly as this tech becomes more deeply embedded in everyday life and its long-term cognitive, social, and ethical implications come into sharper relief. But what does a narrower focus on AI gossip contribute to this emerging discussion?
We’ve suggested that some kinds of AI-generated bullshit are better understood as gossip. Again, AI bullshit is clearly dangerous. It can lead to various epistemic harms, such as causing someone to develop false beliefs which, in turn, may lead to dangerous behaviour like eating rocks or glueing cheese to pizza. But many of these harms are fundamentally directed at individuals. If someone regularly eats rocks because a chatbot told them to do it, the ensuing damage to their health and wellbeing may impair their ability to be a good partner or parent. Their ill-informed behaviour will clearly impact others. Nevertheless, the initial epistemic harm was directed toward them.
AI gossip, we suggest, is different. One reason is that the harms it causes are fundamentally social. Moreover, these harms are hybrid in that their character and consequences straddle our online and offline lives. The concept “technosocial harm” is meant to capture both these aspects.
Technosocial niches and the porosity of online/offline spaces
Elsewhere, we’ve argued that “technosocial niches” are norm-governed hybrid spaces that encompass aspects of both our online and offline lives (Krueger & Osler, 2019; Osler & Krueger, 2022). Technosocial niches are environments or communities—online environments like websites, discussion forums, communication apps like WhatsApp and iMessage, social media platforms, online gaming worlds, etc.—where technology and social interaction intersect to create shared environments with distinct norms, behaviours, languages, and cultural practices. They have several features.
First, technosocial niches are shaped by the specific technologies (i.e., tools and platforms) through which users access and maintain them. Chat apps, social media platforms, discussion forums, online games, and shared VR environments all support the creation of curated spaces in which individuals connect and communicate. But they do so in different ways. They have different design structures—with their own distinct cluster of norms and practices—and therefore afford different kinds of interactive possibilities. Additionally, technosocial spaces are fundamentally social and relational. Others are present within them, either explicitly (e.g., real-time chat or video apps; multiplayer online games) or implicitly (e.g., a solitary user reading through others’ comment histories in a discussion forum). Finally, they support different forms of emotion-regulation. They furnish spaces for users to experience and express their emotions in collaborative ways with others—often in a manner that may not be as immediately accessible offline (e.g., a queer Christian discussing their sexuality with a frankness they’re hesitant to show with their peer group).
For our purposes, what’s important is that these spaces are not confined exclusively to the Internet. They bleed into everyday life—increasingly so, as the technologies that grant access to them become more deeply embedded within everyday environments. And this “porosity” means that what happens in online spaces can make a concrete and lasting impact in the offline world, in ways that are increasingly complex, far-reaching, and difficult to disentangle from one another.
For example, increasingly porous online/offline boundaries can impact individual and group agency. Communities involved in advocacy and activism campaigns may initially mobilise via online petitions, fundraising, and social media posts. But as these online communities develop and begin to coordinate offline action—meeting to protest a political candidate or community event, say, or rallying to support a cause—online/offline boundaries become increasingly difficult to unravel (Tufekci, 2021). Livestreaming a protest (e.g., the global 2017 Women’s March protests following Donald Trump’s inauguration as US president) expands its reach to include those not physically present.
Online spaces can also bridge the gap between online and offline identities. Social media, discussion forums, chat rooms, and subreddits can be incubators for sharing and acceptance when, for instance, exploring one’s sexual identity (Eickers, 2024).