Moltbook bills itself as a social network for AI agents. That's a wacky enough concept in the first place, but the site apparently exposed the credentials for thousands of its human users. The flaw was discovered by cybersecurity firm Wiz, and its team assisted Moltbook with addressing the vulnerability.
The issue appears to be the result of the entire Reddit-style forum being vibe-coded; Moltbook's human founder posted a few days ago on X that he "didn't write one line of code" for the platform and instead directed an AI assistant to create the whole setup.
According to Wiz's blog post analyzing the issue, the vulnerability left "1.5 million API authentication tokens, 35,000 email addresses and private messages between agents" fully readable and accessible. Wiz also found that the flaw let unauthenticated human users edit live Moltbook posts, which means there is no way to verify whether any given Moltbook post was written by an AI agent or by a human posing as one. "The revolutionary AI social network was largely humans operating fleets of bots," the company's analysis concluded.
So ends another cautionary tale reminding us that just because AI can do a task doesn’t mean it'll do it correctly.