Snyk CTO Danny Allan warns that the rapid adoption of generative AI coding tools could create a new wave of vulnerabilities across the global software supply chain. As developers increasingly rely on AI assistants to generate, modify, and deploy code, Allan said the industry risks entering what he called a “software security crisis” — one driven by speed, scale, and insufficient guardrails.
AI “vibecoding,” a growing trend in which developers use large language models (LLMs) to quickly produce functional but unverified code, is already reshaping how software is written. While these tools can accelerate delivery and boost productivity, Allan noted that they also tend to reproduce insecure patterns found in the data they were trained on. Without rigorous validation and context-aware governance, teams could be shipping vulnerabilities at a rate the industry has never seen before.
Allan argues that developers remain the first line of defense — but they need better, more integrated security feedback loops. Embedding security directly into developer workflows, he said, is critical to keeping pace with the surge of AI-generated code. That means using automated testing, dependency scanning, and real-time vulnerability detection as code is written, not after it’s deployed.
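The kind of in-workflow feedback loop Allan describes is often wired into continuous integration. As a minimal sketch (assuming a GitHub Actions setup and the Snyk CLI; the workflow name and job layout here are illustrative, not a prescribed configuration), a team might run dependency scanning and static analysis on every pull request:

```yaml
# Hypothetical CI workflow: scan code, including AI-generated changes,
# before it merges rather than after it ships.
name: security-feedback-loop
on: [pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Snyk CLI
        run: npm install -g snyk
      - name: Scan open-source dependencies
        run: snyk test --severity-threshold=high
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
      - name: Static analysis of first-party code
        run: snyk code test
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```

Failing the build on high-severity findings surfaces problems while the code is still in review, which is the "as code is written, not after it's deployed" posture Allan advocates.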
He also stressed that this moment is a turning point for DevSecOps: AI will be a powerful accelerator, but only for organizations disciplined enough to apply strong governance and maintain developer accountability. “The tools are getting smarter,” Allan said, “but so are the attackers.”
As AI transforms how software is built, Allan’s warning is clear — unless security evolves just as fast, the industry may be automating itself straight into the next major risk era.