What is happening with Grok?
Over the last 48 hours, X users have encountered a troubling trend involving Grok, X’s in-platform AI tool: its use to create nonconsensual, sexually manipulated images, particularly of women.
https://twitter.com/backpainbarbiee/status/2006282622230581327?s=46
In a number of public cases — including well-known figures such as Millie Bobby Brown and Corinna Kopf — Grok has responded to prompts asking it to alter photos of real women by changing clothing, body positioning, or physical features in overtly sexual ways. See example prompts below (all of which Grok responded to):
In multiple instances, Grok generated sexualized versions of women after being prompted to “change outfits” or “adjust poses,” even when the original images were not sexual in nature. In one of the “tamer” examples, Momo, a member of the K-pop group TWICE, was depicted in a bikini.
https://twitter.com/chrisharihar/status/2006302804743487895?s=46
There are hundreds—if not thousands—of similar examples at this point, which we will not amplify here but which can be confirmed by visiting the photos section of Grok’s X account.
When did this start?
The trend appears to have started several days ago, when some adult-content creators prompted Grok to generate sexualized imagery of themselves as a form of marketing on X. Almost immediately, however, users began issuing similar prompts targeting women who do not appear to have consented. In other words, Grok moved from consensual self-representation to a form of scaled nonconsensual image generation.
Unsurprisingly, users have called out the technology and X for enabling what many describe as harassment-by-AI. Comments range from disbelief that the feature exists at all to anger over the lack of safeguards protecting individuals from exploitation and abuse.
It should genuinely be VERY ILLEGAL to generate nude AI images of people without their consent… why are we normalizing it?
— Aria Faye (@Ariafayeee) December 31, 2025
Just looked through grok’s media tab and it seems to almost solely be used to undress women, make them turn around, or change their outfits to make them more revealing
— big honkin caboose (@itsbighonkin) December 30, 2025
Copyleaks AI Image Detector
As an AI-manipulated media detection and AI content governance platform, Copyleaks helps organizations, platforms, and people identify when images have been altered, fabricated, or generated by AI—especially in ways that can cause harm. Recently, Copyleaks launched a new AI-manipulated image detector that analyzes visual artifacts to flag AI-altered imagery.
With that context, earlier today Copyleaks conducted a brief observational review by browsing Grok’s publicly accessible photo tab. Using conservative, common-sense criteria, we counted only examples involving (1) seemingly real women, (2) sexualized image manipulation (e.g., prompts requesting explicit clothing or body-position changes), and (3) no clear indication of consent. Based on this limited review, we observed a conservative rate of roughly one nonconsensual sexualized image per minute in the observed image stream.
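For readers who want to see how a figure like that is tallied, here is a minimal sketch of the arithmetic. The ObservedImage records, field names, and 30-minute window below are hypothetical placeholders for illustration only; they are not Copyleaks’ actual review data, criteria implementation, or tooling.

```python
# Hypothetical sketch: tallying a per-minute rate from a manual review.
# All records and the observation window are invented for illustration.
from dataclasses import dataclass

@dataclass
class ObservedImage:
    seemingly_real_woman: bool      # criterion 1
    sexualized_manipulation: bool   # criterion 2 (e.g., clothing/pose changes)
    clear_consent: bool             # criterion 3 is the absence of this

def flagged(img: ObservedImage) -> bool:
    """An image counts only if all three conservative criteria are met."""
    return (img.seemingly_real_woman
            and img.sexualized_manipulation
            and not img.clear_consent)

# Example: 30 minutes of browsing, with placeholder observations.
observation_minutes = 30
observations = [
    ObservedImage(True, True, False),
    ObservedImage(True, False, False),   # not sexualized -> not counted
    ObservedImage(False, True, False),   # not a real person -> not counted
    # ... one record per image seen in the stream
]

count = sum(flagged(img) for img in observations)
rate_per_minute = count / observation_minutes
print(f"{count} flagged images over {observation_minutes} min "
      f"~ {rate_per_minute:.2f} per minute")
```

Applying all three criteria before counting is what makes the resulting rate conservative: any image that fails even one check is excluded from the tally.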
“When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal,” said Alon Yamin, CEO and co-founder of Copyleaks. “From Sora to Grok, we are seeing a rapid rise in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent misuse.”
As generative AI tools become more powerful and more accessible, the Grok situation highlights how common AI safety failures are becoming. Without strong safeguards and independent detection, manipulated media can and will be weaponized. Copyleaks will continue to monitor developments in this area and contribute to ongoing conversations through technology, research, and perspective.