Matias Madou, Co-Founder & CTO, Secure Code Warrior
November 3, 2025
4 Min Read

OPINION
There are several best practice recommendations to help organizations mitigate the risks inherent in AI-generated code, and most highlight the importance of human-AI collaboration, with human developers taking a hand regularly (and literally) in the process. However, those recommendations also hinge on developers having a medium to high level of security proficiency, which is an area where many developers fall short. It’s up to organizations to ensure developers have current, verified security skills to work effectively with AI assistants and agents.
Vulnerabilities Increase as LLM Iterations Grow
LLMs have been a boon for developers since OpenAI’s ChatGPT was publicly released in November 2022, with other AI models fast on its heels. Developers were quick to utilize the tools, which significantly increased productivity for overtaxed development teams. But that productivity boost came with security concerns, such as AI models trained on flawed code from internal or publicly available repositories. Those models introduced vulnerabilities that sometimes spread throughout the entire software ecosystem.
One way to address the problem was to use LLMs to iteratively improve code-level security during the development process, under the assumption that LLMs, given the task of correcting mistakes, would correct them quickly and effectively. However, several studies (and extensive real-world experience, including our own data) have demonstrated that an LLM can introduce vulnerabilities into the code it generates during this process.
There is no shortcut. Developers must maintain control of the development process, viewing AI as a collaborative assistant rather than an autonomous tool. Tool designers need to build security features into their products that detect potential vulnerabilities and alert developers when they are found. And chief information security officers (CISOs), together with other security leaders in the business, can give the development cohort a solid foundation for success with these five steps:
Checkpoint 1: Code review by security-proficient developers is non-negotiable.
This step leverages human expertise as the first line of defense, providing a level of quality control that can't be automated. To make it work, security leaders must place developer upskilling at the heart of their security programs. Adaptive learning, verification of security skills, traceability of LLM tool usage, and data-backed risk metrics should all form part of the modern, AI-augmented security program.
Checkpoint 2: Apply secure rulesets.
AI coding assistants may be powerful, but they need guidance. A contextual rule file steers them toward safe, standardized output, reducing the risk of non-compliant configuration or insecure coding patterns.
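What such a rule file contains varies by assistant and by organization, but a minimal, hypothetical example might include entries like the following (the rules are illustrative, not a complete or tool-specific standard):

    # Example contextual rules for an AI coding assistant (file name and format depend on the tool)
    - Always use parameterized queries; never build SQL statements through string concatenation.
    - Validate and encode all user-supplied input before it reaches templates, queries, or logs.
    - Use the organization's approved cryptography library; never implement custom encryption.
    - Never hard-code secrets, tokens, or credentials; load them from the approved secrets manager.
    - Flag any new third-party dependency for human review before it is added.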
Checkpoint 3: Review each iteration.
Using both human experts and automated tools, organizations should check security at each step. Security-focused prompts generally produce more secure code than prompts that do not explicitly request secure output, but even then the result is often still vulnerable code.
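As an illustration of the automated half of that check, the short Python sketch below runs a static analysis pass over each iteration's output. It assumes the open source Bandit scanner is installed and that the AI-assisted changes live in a local directory named generated/; the directory name and severity threshold are illustrative, not a prescribed setup.

    import subprocess
    import sys

    def scan_iteration(path="generated/"):
        """Run Bandit over the latest AI-assisted changes.

        Bandit exits non-zero when it reports findings at or above the chosen
        severity, so a zero return code means this iteration passed the scan.
        """
        result = subprocess.run(
            ["bandit", "-r", path, "-ll"],  # -ll: report medium- and high-severity findings only
            capture_output=True,
            text=True,
        )
        print(result.stdout)
        return result.returncode == 0

    if __name__ == "__main__":
        if not scan_iteration():
            sys.exit("Iteration failed the automated check -- route it to a human reviewer.")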
Checkpoint 4: Apply AI governance best practices.
Automate policy enforcement to ensure AI-enabled developers meet secure coding standards before their contributions are accepted in critical repos.
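As a hedged sketch of what that enforcement could look like, the script below might run as a required CI check, failing the merge when files under designated critical paths change without a passing security scan. The path names and the --scan-passed flag are hypothetical, not any specific product's interface.

    import sys

    # Paths treated as critical are illustrative; define them per repository.
    CRITICAL_PATHS = ("services/payments/", "services/auth/")

    def enforce_policy(changed_files, scan_passed):
        """Fail the pipeline when critical code changes without a passing security scan."""
        touches_critical = any(f.startswith(CRITICAL_PATHS) for f in changed_files)
        if touches_critical and not scan_passed:
            sys.exit("Policy violation: critical code changed without a passing security scan.")

    if __name__ == "__main__":
        # The CI job supplies the changed-file list as arguments and a flag for the scan result.
        scan_ok = "--scan-passed" in sys.argv
        files = [arg for arg in sys.argv[1:] if not arg.startswith("--")]
        enforce_policy(files, scan_ok)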
Checkpoint 5: Monitor code complexity.
The likelihood of new vulnerabilities increases with code complexity, so human reviewers need to be alert when complexity rises.
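One way to automate that alert, assuming a Python codebase and the open source radon library (other complexity tools work just as well), is a small check that flags functions whose cyclomatic complexity exceeds a team-defined threshold; the threshold of 10 below is illustrative.

    from radon.complexity import cc_visit

    COMPLEXITY_THRESHOLD = 10  # illustrative; set to the team's agreed limit

    def flag_complex_functions(source_code):
        """Return functions and methods whose cyclomatic complexity exceeds the threshold."""
        return [
            f"{block.name} (complexity {block.complexity})"
            for block in cc_visit(source_code)
            if block.complexity > COMPLEXITY_THRESHOLD
        ]

    if __name__ == "__main__":
        with open("generated_module.py", encoding="utf-8") as handle:
            for finding in flag_complex_functions(handle.read()):
                print("Needs extra human review:", finding)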
The common thread in these recommendations is the need for human expertise, which is far from guaranteed. Software engineers typically receive very little security upskilling, if any at all, and have traditionally concentrated on quickly shipping applications, upgrades, and services while leaving security teams to chase down flaws later. As AI tools accelerate DevOps, organizations must equip developers with the skills to keep code secure throughout the software development life cycle (SDLC). To achieve this, they should implement ongoing, adaptive learning programs that build and verify those skills.
Developers Must Have the Skills to Keep AI in Check
Forward-thinking organizations are working with developers to apply a security-first mindset across the SDLC, in line with the goals of the US Cybersecurity and Infrastructure Security Agency's (CISA's) Secure-by-Design initiative. This includes a continuous program of agile, hands-on upskilling, delivered in sessions designed to meet developers' needs. For example, training is tailored to the work they do and the programming languages they use, and it is available on a schedule that fits their busy workdays.
Better still, the security proficiency of both humans and their AI coding assistants should be benchmarked, giving security leaders data-driven insights into developer security proficiency and the security accuracy of any commits made with the assistance of AI tooling and agents. Would it not be beneficial to know who used which tool, both to better manage code review and to intervene when a particular LLM is known to fail at specific tasks or vulnerability classes?
An effective upskilling program not only helps ensure that developers can create secure code, but also that they are equipped to review AI-generated code, identifying and correcting flaws as they appear. Even in this new era of AI-generated coding, skilled human supervision remains essential. And CISOs must prioritize equipping their critical human workforce with those skills.
About the Author
Co-Founder & CTO, Secure Code Warrior
Matias Madou is a researcher and developer with more than 15 years of hands-on software security experience. He has developed solutions for companies such as Fortify Software and his own company, Sensei Security. Over his career, Matias has led multiple application security research projects that have resulted in commercial products, and he holds more than 10 patents. Away from his desk, he serves as an instructor for advanced application security training courses and regularly speaks at global conferences, including RSA Conference, Black Hat, DEF CON, BSIMM, OWASP AppSec, and BruCon.
Matias holds a Ph.D. in Computer Engineering from Ghent University, where he studied application security through program obfuscation to hide the inner workings of an application.