The hype surrounding AI in software development is undeniable. We are witnessing a paradigm shift, where "vibe coding" — expressing intent in natural language and leveraging AI large language models (LLMs) or agents to generate and refine code — is rapidly gaining traction. This approach promises unprecedented speed, lower barriers to entry, and accelerated prototyping.
Yet, as a cybersecurity professional, I see a critical caveat: vibe coding’s velocity often comes at the expense of the controls that safeguard our digital infrastructure.
From an engineering perspective, vibe coding presents an alluring evolution. It removes friction, enabling ideas to move from concept to production with remarkable efficiency. However, this rapid generation challenges the foundational principles of robust software engineering: intentional design, modularity, and readability.
Code is more than a set of instructions for a machine; it serves as a critical artifact for developers and organizations, documenting logic, intent, and design decisions. Left unchecked, vibe coding risks replacing disciplined communication with "good enough" code that may pass initial tests but is inherently unmaintainable and insecure.
The security implications are significant. When anyone can create code with AI tools, the software engineer’s role shifts from creation to validating intent, safety, and integrity. This evolution from building to curating introduces substantial risk.
Developer Role Evolving
Unmanaged vibe coding does not just introduce new vulnerabilities; it amplifies existing ones. Long-standing open source security and supply chain risks that the industry has struggled with for years are compounded by AI-generated code, and vibe coding layers on LLM-specific risks such as hallucinations, inconsistent outputs, and the potential for prompt or tool misuse.
Shipping AI-generated applications without rigorous, skilled review invites systemic failures across the entire Software Development Life Cycle (SDLC). When underlying logic goes unexamined and thorough code review is skipped, the attack surface expands in unpredictable ways that often go unnoticed until it is too late.
The relentless drive to accelerate software delivery with AI assistance creates a widening gap between productivity and security. This is the velocity-versus-veracity trade-off: teams can iterate and prototype at lightning speed, but code quality and security often lag dangerously behind. Recent studies indicate that while AI-generated code is becoming more accurate, its security posture is not improving at the same pace.
This growing reliance on AI, often by individuals without formal development training, threatens to erode critical problem-solving skills and produces brittle, vulnerable codebases. Role shifts are inevitable: developers are transitioning from code authors to system integrators and reviewers, while application security professionals pivot to prompt and policy design, model and tool governance, and the implementation of AI-specific SDLC security controls.
Vibe coding itself is not inherently dangerous, but unchecked vibe coding absolutely is. As AI-assisted development becomes the norm, it demands a significantly higher level of application security maturity. Developers must evolve their approach to these tools and their roles within the development process.
The future of AI-assisted coding lies in merging creativity with verification and security controls to balance speed with secure discipline. To achieve this, organizations must implement robust guardrails and treat AI-generated code with the same, if not greater, scrutiny as any third-party contribution. Key controls include:
Align security practices with the NIST Secure Software Development Framework (SSDF), OWASP guidance, and Center for Internet Security (CIS) benchmarks
Gate AI-generated code with security checks and controls (see the sketch after this list)
Implement input-output controls to mitigate prompt misuse
Invest in organizational training and governance
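To make the gating item concrete, here is a minimal, hypothetical Python sketch of a pre-merge gate for AI-generated changes. The "ai-generated" label, the approval threshold, and the use of Semgrep as the scanner are illustrative assumptions rather than requirements of any framework named above; the point is simply that AI-generated code passes explicit checks and human review before it ships.

import subprocess
import sys

AI_GENERATED_LABEL = "ai-generated"  # hypothetical label teams apply to AI-assisted pull requests
REQUIRED_APPROVALS = 1               # human reviews required before AI-generated code can merge


def run_sast_scan(paths: list[str]) -> bool:
    """Run static analysis over the changed files; True means no findings.

    Semgrep stands in here for whatever SAST/SCA tooling the organization already uses.
    """
    try:
        result = subprocess.run(["semgrep", "scan", "--error", *paths], capture_output=True)
    except FileNotFoundError:
        print("Scanner not installed; failing closed.")
        return False
    return result.returncode == 0


def gate_ai_generated_change(labels: list[str], changed_files: list[str], approvals: int) -> bool:
    """Apply stricter controls when a change is labeled as AI-generated."""
    if AI_GENERATED_LABEL not in labels:
        return True  # normal review path applies to human-authored changes
    if approvals < REQUIRED_APPROVALS:
        print("Blocked: AI-generated change lacks the required human review.")
        return False
    if not run_sast_scan(changed_files):
        print("Blocked: static analysis reported findings in AI-generated code.")
        return False
    return True


if __name__ == "__main__":
    # Example: gate a hypothetical pull request touching two files.
    ok = gate_ai_generated_change(
        labels=["ai-generated"],
        changed_files=["app/handlers.py", "app/models.py"],
        approvals=1,
    )
    sys.exit(0 if ok else 1)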
These practices are not optional; they are essential to ensure that AI-generated code is secure, maintainable, and accountable. As developers transition from writing code to curating and validating AI output, these controls are required to maintain software integrity across the entire SDLC.
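The input-output control item can likewise be sketched as a thin policy layer around the model interaction. The deny-list patterns, function names, and sample strings below are hypothetical placeholders, not a complete defense against prompt injection; real policies would come from organizational governance and be paired with the review and scanning gates above.

import re

# Small illustrative deny-lists; real policies would be maintained through governance, not hard-coded.
INPUT_DENY_PATTERNS = [
    r"ignore (all|any|previous) instructions",   # classic prompt-injection phrasing
    r"exfiltrate|leak .*credentials",
]
OUTPUT_DENY_PATTERNS = [
    r"eval\(",                                   # dynamic evaluation of untrusted strings
    r"(curl|wget)\s+http",                       # generated code reaching out to the network
]


def check_prompt(prompt: str) -> bool:
    """Reject prompts matching known misuse patterns before they reach the model."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in INPUT_DENY_PATTERNS)


def check_generated_code(code: str) -> bool:
    """Flag generated code containing obviously risky constructs for human review."""
    return not any(re.search(p, code) for p in OUTPUT_DENY_PATTERNS)


if __name__ == "__main__":
    prompt = "Write a helper that parses user-supplied JSON"
    if not check_prompt(prompt):
        raise SystemExit("Prompt rejected by input policy.")
    generated = "import json\ndef parse(payload):\n    return json.loads(payload)\n"
    if not check_generated_code(generated):
        raise SystemExit("Generated code routed to manual security review.")
    print("Baseline checks passed; normal code review still applies.")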
Governing the Future of AI-driven Development
Vibe coding is more than a trend — it is transforming the software development landscape. By accelerating innovation and enabling unprecedented speed, AI-assisted coding introduces new opportunities and responsibilities. These tools provide faster delivery, enhance creativity, and help solve complex problems. However, this evolution comes with new layers of complexity and risk that cannot be ignored.
As AI tools become deeply embedded in development workflows, the roles of development teams and AppSec professionals must evolve. This is not purely a technical shift — it is also cultural. It demands a mindset that blends creative exploration with disciplined security principles, and rapid iteration with accountability.
By implementing thoughtful controls and treating AI as both an enabler and a risk factor, organizations can harness the benefits of vibe coding without compromising safety, maintainability, or trust. The future of secure software development will depend not just on how fast we can build, but on how well we can govern what we build with AI.
There are immense challenges and opportunities ahead, and organizations that embrace the dual mandate of innovation and governance will lead in the next era of software development.
About the Author
Chrissa Constantine, Senior Cybersecurity Solution Architect, Black Duck
Chrissa Constantine is a seasoned cybersecurity professional with deep expertise in Application Security and a strong passion for securing modern software ecosystems. With years of experience in identifying, mitigating, and resolving vulnerabilities in complex applications, she has become a trusted voice in the industry.
Throughout her career, Chrissa has collaborated with cross-functional teams to drive secure coding practices, implement robust security frameworks, and bridge the gap between development and security operations. She brings a wealth of knowledge in vulnerability management, compliance, and emerging technologies, empowering organizations to strengthen their defenses in an evolving threat landscape.
A dynamic speaker and thought leader, Chrissa has delivered impactful presentations and workshops at conferences, sharing actionable insights and fostering industry-wide discussions about advancing application security practices. She has published numerous papers in international magazines and frequently speaks at meetings on topics related to application security, ransomware, and the strategic implementation of SBOM.