(Image credit: Getty Images)
A six-month investigation into AI-assisted development tools has uncovered over thirty security vulnerabilities that allow data exfiltration and, in some cases, remote code execution. The findings, described in the IDEsaster research report, show how AI agents embedded in IDEs such as Visual Studio Code, JetBrains products, Zed, and numerous commercial assistants can be manipulated into leaking sensitive information or executing attacker-controlled code.
According to the research, 100% of tested AI IDEs and coding assistants were vulnerable. Products affected include GitHub Copilot, Cursor, Windsurf, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude C…
(Image credit: Getty Images)
A six-month investigation into AI-assisted development tools has uncovered over thirty security vulnerabilities that allow data exfiltration and, in some cases, remote code execution. The findings, described in the IDEsaster research report, show how AI agents embedded in IDEs such as Visual Studio Code, JetBrains products, Zed, and numerous commercial assistants can be manipulated into leaking sensitive information or executing attacker-controlled code.
According to the research, 100% of tested AI IDEs and coding assistants were vulnerable. Products affected include GitHub Copilot, Cursor, Windsurf, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code, with at least twenty-four assigned CVEs and additional advisories from AWS.
“All AI IDEs... effectively ignore the base software... in their threat model. They treat their features as inherently safe because they’ve been there for years. However, once you add AI agents that can act autonomously, the same features can be weaponized into data exfiltration and RCE primitives,” said security researcher Ari Marzouk, speaking to The Hacker News.
Get Tom’s Hardware’s best news and in-depth reviews, straight to your inbox.
The report concludes that short term, the vulnerability class cannot be eliminated because current IDEs were not built under what the researcher calls the “Secure for AI” principle. Mitigations exist for both developers and tool vendors, but the long-term fix requires fundamentally redesigning how IDEs allow AI agents to read, write, and act inside projects.
FollowTom’s Hardware on Google News, oradd us as a preferred source, to get our latest news, analysis, & reviews in your feeds.
Luke James is a freelance writer and journalist. Although his background is in legal, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.