For a product meant to make work easier, Microsoft’s Copilot is inspiring a lot of rage — and an internet-wide roast to match. Scroll through TikTok or Instagram and you’ll find an entire micro-genre dedicated to portraying Copilot as an annoying, incompetent coworker who simply will not go away. One such Reel, the language of which is not safe for work, has 327,006 “likes” and counting.
Part of the problem seems to be that Copilot isn’t one thing at all, but Microsoft’s umbrella term for dozens of different AI assistants scattered across its products, from Outlook and Word to Windows, Teams, Edge, and beyond. They share a name, but not necessarily capabilities, behavior patterns, or degrees of reliability, which some users describe as a branding problem before you even get to the UX.
On Reddit, a recent attempt to catalogue all the different Copilots quickly devolved into absurd humor, with one commenter listing “teams copilot, outlook copilot, browser web copilot, browser work copilot, power automate copilot, power copilot, search bar copilot” and finally “copilot in the toilet,” and another responding, like Copilot at its most deranged, “the colonoscopy found inflamed tonsils.”
Beneath the jokes is frustration visceral enough that, according to The Information, Microsoft has quietly reduced sales targets across several divisions because enterprise adoption is lagging. For a company that rarely adjusts sales targets, at least in public view, it’s a notable change, suggesting that Copilot’s path into the workplace may be proving more complicated than Microsoft hoped.
The security problem
The frustration spans technical and non-technical users alike. On Reddit, one IT admin described building extra group-policy restrictions just to keep Copilot out of daily workflows. Another noted that the enterprise “Don’t allow Copilot” setting has stopped working as advertised, now redirecting users to a public, non-enterprise version of the tool — the last thing any security lead wants to hear.
Everyday users may not have the same security concerns, but they feel the same lack of control. One freelance writer told Quartz she has a “burning hatred for the little icon that will not go away on my documents,” despite disabling Copilot, turning on Focus mode, combing message boards for solutions, and finally contacting Microsoft support. The only advice she received was to install an older version of Office — one lacking Copilot.
Another user, a financial analyst, described the Copilot agent as intensifying her dislike of the Microsoft product it’s meant to enhance. “Copilot lies sometimes, withholds information,” she said, and also “doesn’t do the greatest job helping me with things I hate about Excel.”
The forced-adoption problem
But the strongest frustration comes from workers who feel pressured — not just by the software, but by their managers — to use Copilot constantly.
One corporate trainer’s experience captures the full scope of the problem. Tasked with discovering how Copilot could be incorporated into everyday workflows, she received early enterprise access and was instructed to “think of something to ask AI to do” for every task throughout the day. After more than 100 hours with Copilot, she described the experience not as liberating but as performative — a constant need to demonstrate AI usage, even when it slowed her down. Copilot made her workload heavier, not lighter.
Under pressure to use Copilot to write emails, she used it to generate a first draft, then edited out its most annoying trademarks — passive voice, bullet lists, upbeat platitudes — a rewrite process that consumed more time than a simple, AI-free writing session. Ironically, her manager, intent on having everyone use the tool, sent her emails back rewritten by Copilot, restoring the hallmarks she had laboriously removed, along with a reminder to please use Copilot.
Management’s use of Copilot to write teamwide emails proved just as problematic. “Motivational” emails became especially grating, the trainer said. “When directors, VPs, project managers, or any leadership level no longer convey their own voice, or their voice has been refined and hidden by Copilot’s composition (including emojis, passive voice, bullets etc.),” she said, “the message is completely lost and I’m distracted by the first impression that they couldn’t even be bothered to write to us themselves anymore.”
The workplace-surveillance problem
Less absurd and more troubling was how Copilot inadvertently created new kinds of workplace surveillance. The trainer described how the tool’s presence in Microsoft Teams introduced new forms of anxiety and embarrassment, with Copilot-generated meeting summaries sometimes misconstruing small talk and generating points like “Sam is very stressed from the workload” or “Sam is unsure who is in charge of the project” — automated judgments no one had trained the model to avoid. As a result, coworkers stopped chatting to each other at the start of meetings because no one wanted their offhand comments captured, “summarized,” and redistributed teamwide.
For Microsoft, however, none of this may matter much. The company has been here before, after all. Twenty years ago, Clippy — Word’s notorious animated paperclip assistant — became one of the most mocked features in software history. One version of the classic Clippy opener in particular became a widespread punchline: “It looks like you’re having an existential crisis. Do you need help?”
All the mockery was still proof that Microsoft Office had achieved total penetration. You can only become a cultural punching bag if everyone knows the product.
Not all backlash is fatal
The passion behind the complaints may be, ironically, a sign of Copilot’s enormous visibility and spread thus far. One way or another, it’s not a quiet flop. People encounter Copilot constantly. They’re forming strong opinions about it. They’re meme-ing it into relevance. They may be roasting it, but they’re engaging, and this in an era where brands covet social-media conversation.
What’s more, in a product category as young as AI agents, strong reactions could amount to free user research. The underlying message in all the frustration seems surprisingly consistent: people want AI tools that help them, not hover over them like a smug intern with a poor sense of timing.
In the best-case scenario, it’s possible the Copilot hate demonstrates that users are willing to engage, if only under pressure from management. The jokes, in their own way, could be a sign users *will* embrace Copilot when it masters the most important skill for any assistant, human or machine: knowing when to help, and when to shut up.