AI Models Write Code with Security Flaws 18–50% of the Time, New Study Finds

Ask one of today’s top AI models to build you a Chrome extension, and it will. But can you trust its code to be secure? A new study from researchers at New York University, Columbia University, Monash University, and Australia’s national science agency, CSIRO, suggests that the resulting code has a high chance of containing significant security vulnerabilities.

The team investigated nine state-of-the-art LLMs, including advanced “reasoning” models like o3-mini and DeepSeek-R1, by tasking them with generating Chrome extensions from 140 different functional scenarios. They found that, depending on the model, the extensions contained **significant security vulnerabilities** 18–50% of the time.
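
The excerpt doesn’t say which flaw classes the researchers counted, but a classic example of the kind of vulnerability that slips into extension code is a DOM-based XSS sink: a content script that writes untrusted page or URL data into the DOM with `innerHTML`. Here is a minimal TypeScript sketch, illustrative only and not taken from the study:

```typescript
// content-script.ts: hypothetical sketch of a DOM-based XSS flaw of the
// kind that often appears in generated Chrome-extension code (not a
// sample from the study itself).

// Untrusted input: anything read from the page or its URL is
// attacker-influenced from the extension's point of view.
const untrusted = new URLSearchParams(window.location.search).get("q") ?? "";

const banner = document.createElement("div");

// Vulnerable: innerHTML parses the string as HTML, so a query value like
// "<img src=x onerror=alert(1)>" executes script in the page.
banner.innerHTML = `You searched for: <b>${untrusted}</b>`;

// Safer: textContent treats the same string strictly as text.
banner.textContent = `You searched for: ${untrusted}`;

document.body.appendChild(banner);
```

A one-character API choice like this is exactly the sort of detail a model can get wrong while still producing an extension that appears to work.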
