Today’s episode of Decoder is about X, Grok, and Elon Musk. By now we’re several weeks into one of the worst, most upsetting, and most stupidly irresponsible AI controversies in the short history of generative AI. Grok, the chatbot made by Elon Musk’s xAI, is able to make all manner of AI-generated images, including non-consensual intimate images of women and minors.
Because Grok is connected to X, the platform formerly known as Twitter, users can simply ask Grok to edit any image on that platform, and Grok will mostly do it and then distribute the result across the entire platform. Over the past few weeks, X and Elon have claimed over and over that various guardrails have been put in place, but so far they’ve been mostly trivial to get around. It’s now become clear that Elon wants Grok to be able to do this, and he’s very annoyed with anyone who wants him to stop, particularly the various governments around the world that are threatening to take legal action against X.
This is one of those situations where if you just describe the problem to someone, they will intuitively feel like someone should be able to do something about it. It’s true — someone should be able to do something about a one-click harassment machine like this that’s generating images about women and children without their consent. But who has that power, and what they can do with it, is a deeply complicated question, and it’s tied up in the thorny mess of history that is content moderation and the legal precedents that underpin it. So I invited Riana Pfefferkorn on the show to come talk me through all of this.
Riana has joined me before to explain complicated internet moderation problems. Right now, she’s a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, and she has a deep background in what regulators and lawmakers in the US and around the world could do about a problem like Grok, if they so choose.
So Riana really helped me work through the legal frameworks at play here, the various actors involved that have leverage and could apply pressure to affect the situation, and where we might see all this go as xAI does damage control but largely keeps shipping a product that continues to do real harm.