By Esteban S. Abait — November 2025
This article is a summary of my own research. Throughout 2025, I have been digging into how AI agents are changing the Software Development Life Cycle (SDLC) and transforming the workflows developers use to deliver software. The article also mixes in some of my opinions based on my own side-project experiences (like this one).
There is a common agreement in the industry that new best practices for coding and software engineering are emerging as a result of adopting AI to accelerate the SDLC.
Ever since I read the seminal book Accelerate, I have closely followed the DORA (DevOps Research and Assessment) reports to identify which practices actually work for software organizations. The DORA reports are based on rigorous statistical analysis of large-scale surveys of thousands of technology professionals worldwide. Their mission is to identify practices that predict software delivery performance and other organizational outcomes.
The goal of this article is to review a few real-world documented experiences from developers experimenting with AI coding agents and to contrast them with the latest DORA 2025 State of AI-Assisted Software Development Report.
The selected articles reveal how developers are changing their day-to-day workflows through agentic, parallel, and vibe-coding loops. Comparing the findings of these articles against DORA’s insights sheds light on what is truly working and what new ideas are emerging from the field that can be leveraged in enterprise settings.
1. Developers are building agentic loops, not just prompts
In his post Designing Agentic Loops, Simon Willison argues that the future of AI-assisted development lies in goal-driven loops, not one-off prompts:
Analyze → Plan → Implement → Test → Review → Iterate
Rather than treating AI as autocomplete, developers now design closed feedback loops where agents can reason, test, and refine outputs.
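The Analyze → Plan → Implement → Test → Review → Iterate loop can be sketched in a few lines. This is a minimal, hypothetical illustration: the `agent`, `run_tests`, and `reviewer` interfaces are stand-ins for real tooling, not an actual API.

```python
def agentic_loop(goal, agent, run_tests, reviewer, max_iterations=5):
    """Analyze -> Plan -> Implement -> Test -> Review -> Iterate.

    `agent`, `run_tests`, and `reviewer` are hypothetical interfaces
    used only to make the loop's shape concrete.
    """
    context = agent.analyze(goal)            # Analyze: gather code, docs, tests
    plan = agent.plan(goal, context)         # Plan: break the goal into steps
    for _ in range(max_iterations):
        change = agent.implement(plan)       # Implement: produce a candidate change
        result = run_tests(change)           # Test: closed feedback from the suite
        feedback = reviewer(change, result)  # Review: human or automated gate
        if result.passed and feedback.approved:
            return change                    # Done: tests green, review clean
        plan = agent.refine(plan, feedback)  # Iterate: feed findings back in
    raise RuntimeError("loop budget exhausted; escalate to a human")
```

The key property is that the loop terminates on verified feedback (tests plus review), not on the model's first answer.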
DORA 2025 backs this: throughput rises when feedback loops exist — but stability drops if they’re missing or unmanaged.
The Kim & Yegge essay The Vibe Coding Loop extends this idea: vibe-coding reframes development as continuous conversation and orchestration.
The developer’s role becomes less about typing code and more about directing agents through intent, goals, and feedback.
It’s still engineering — but at a higher altitude.
2. Quality internal platforms are the new accelerators
The DORA report emphasizes that Quality Internal Platforms (QIP) amplify AI’s benefits. So, what is a QIP? A QIP is defined as a set of shared systems, services, and code artifacts that standardize and abstract best practices, enabling developers to build and deliver reliable, secure applications quickly and independently. Typically, this platform consists of shared CI/CD, standard pipelines, self-service tools, guardrails, observability tools, and access to development environments, among others.
In The New Calculus of AI-based Coding, Joe Magerramov explains how his team achieves ~80% AI-generated commits through a strictly controlled environment with “steering rules,” review gates, and fast pipelines.
Freedom without structure leads to chaos; structure with autonomy enables safe speed.
Magerramov's experience matches what DORA considers a QIP, and he draws a similar conclusion: the developer platform boosts AI's positive impact and creates psychological safety for teams to experiment responsibly.
3. Context is the new fuel
Every modern reflection on agentic coding echoes the same point: context beats clever prompting.
In Just Talk To It, Peter Steinberger notes that agents should “read the code, docs, and tests” instead of being micromanaged through verbose prompts.
Magerramov adds that internal test harnesses, mocks, and dependency fakes provide the “scaffolding” that lets agents iterate confidently.
The vibe-coding framework described in The Key Vibe Coding Practices extends this: developers maintain a shared context across agents (what Kim & Yegge call the “vibe space”) so that multiple agents operate within the same state of understanding.
That shared context is exactly what DORA categorizes as AI-Accessible Internal Data. As stated in the DORA report: Connect your AI tools to your internal systems to move beyond generic assistance and unlock boosts in individual effectiveness and code quality. This means going beyond simply procuring licenses and investing the engineering effort to give your AI tools secure access to internal documentation, codebases, and other data sources. This provides the company-specific context necessary for the tools to be maximally effective.
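As a rough sketch of what "AI-accessible internal data" can mean in practice, the snippet below assembles company-specific files into a grounded prompt. The file paths and the prompt format are assumptions for illustration; real setups would use retrieval tooling or an MCP-style integration rather than plain concatenation.

```python
from pathlib import Path

def build_context(repo_root, doc_paths, max_chars=8000):
    """Concatenate internal docs/tests so the agent sees company-specific context."""
    chunks = []
    for rel in doc_paths:
        path = Path(repo_root) / rel
        if path.exists():  # silently skip sources that are not present
            chunks.append(f"## {rel}\n{path.read_text()}")
    context = "\n\n".join(chunks)
    return context[:max_chars]  # stay within the model's context budget

def grounded_prompt(task, context):
    """Wrap the task so the agent is steered toward internal context."""
    return f"Use ONLY the internal context below.\n\n{context}\n\nTask: {task}"
```

The point is the investment DORA describes: engineering effort to expose docs, codebases, and test conventions to the tools, beyond just procuring licenses.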
4. Clarity builds trust (and reduces friction)
Willison advocates sandboxing and “blast radius” controls; Magerramov requires every AI commit to be reviewed by a human.
This is not bureaucracy — it’s how teams build trust.
DORA calls this a Clear + Communicated AI Stance: a "clear and communicated AI stance" refers to the comprehensibility and awareness of an organization's official position regarding how its developers are expected and permitted to use AI-assisted development tools.
In a nutshell, organizations must publish policies for tools, scope, and security, and officially foster their adoption. When developers know the rules, they experiment more boldly.
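A published policy can also be enforced mechanically. The gate below is a hypothetical sketch: the commit fields (`ai_generated`, `reviewers`) are invented for illustration, not a real Git or CI schema.

```python
def check_ai_policy(commit):
    """Return (ok, reason) for one commit against a published AI stance.

    The `commit` dict shape is an assumption for this sketch; a real
    gate would read commit trailers or pull-request metadata.
    """
    if commit.get("ai_generated") and not commit.get("reviewers"):
        return False, "AI-generated commit requires at least one human reviewer"
    return True, "ok"
```

Magerramov's rule that every AI commit gets a human reviewer is exactly this kind of check, run in the pipeline rather than on the honor system.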
5. Parallel agents and small batches: the new flow unit
The next frontier in agentic practice is parallelism.
In Embracing the Parallel Coding Agent Lifestyle, Willison describes running multiple agents side-by-side, each attempting a feature or fix, then comparing and merging results.
The Pragmatic Engineer newsletter calls this “programming by kicking off parallel agents” where one agent explores approach A, another tests approach B, and the developer acts as reviewer/orchestrator.
This fits beautifully with DORA’s findings: Working in Small Batches and Strong Version Control Practices improve delivery speed and stability.
Parallel agents could scale the small-batch principle horizontally: multiple independent changes, fast feedback, safe integration.
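The fan-out-and-pick-a-winner pattern can be sketched with standard concurrency primitives. Here the "agents" are plain callables and the scoring function stands in for the developer's review criteria; both are assumptions, since real agents would be API-backed workers.

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel_agents(attempts, score):
    """Run each attempt concurrently and return the best-scoring result.

    `attempts` is a list of zero-argument callables (stand-ins for
    agents exploring approach A, approach B, ...); `score` encodes the
    orchestrator's review criteria, keeping a human in the loop.
    """
    with ThreadPoolExecutor(max_workers=len(attempts)) as pool:
        results = list(pool.map(lambda attempt: attempt(), attempts))
    return max(results, key=score)
```

Each attempt is an independent small batch; the developer's job shifts to comparing outcomes and merging the winner.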
However, more research will be needed to ensure parallel agents provide a real gain in productivity.
6. UX and developer experience define adoption
Kim & Yegge’s Vibe Coding Loop positions the developer as a conductor of agents, using language, emotion, and intent to shape outcomes.
Steinberger’s observations about cognitive load reinforce this: for creative tasks, AI should respond to intent, not flood you with completions.
Magerramov adds mode-switching: chat for ideation, completions for boilerplate, and agentic commits for execution. This pattern is what DORA labels a User-Centric Focus.
Together, vibe-coding and parallel-agents illustrate a new UX layer: AI is not a single assistant but an ensemble you conduct.
7. Value appears when the loop connects to the business
At the enterprise level, DORA stresses Value Stream Management (VSM). VSM is described as the practice of visualizing, analyzing, and improving the flow of work from idea to customer. This practice involves charting the entire software delivery lifecycle, which covers: product discovery, design, development, testing, deployment, and operations.
The most significant finding regarding VSM in the context of the State of AI-assisted Software Development (2025) is that it acts as the force multiplier that ensures AI investment delivers a competitive advantage. The use of VSM should help organizations to track how AI affects lead time, rework, and deployment frequency. In essence, VSM provides the necessary clarity and system view required to unlock new technologies like AI, preventing them from creating “disconnected local optimizations” and instead ensuring they translate into significant organizational advantage.
This finding is also supported by other experiences. Magerramov calls this the “cost-benefit rebalance”: as agentic loops increase velocity, you must upgrade testing and infrastructure to maintain trust.
Vibe-coding adds a human dimension here: teams should measure not just velocity but quality of flow — how collaboration feels and how developers experience creative momentum.
8. The Unified Playbook
The following is a table that summarizes the findings of all the surveyed sources for successful AI Agents adoption.
| Focus Area | Practical Action | Expected Effect |
|---|---|---|
| Policy & Trust | Publish clear AI usage policy; require human review for agent commits. | Builds confidence and reduces risk. |
| Context Grounding | Connect AI tools to internal repos, tests, and “vibe spaces.” | Improves accuracy and code quality. |
| Platform Engineering | Invest in strong internal platforms and fast feedback loops. | Amplifies AI impact, reduces friction. |
| Flow Efficiency | Adopt agentic loops and parallel agents; prefer small batches. | Improves throughput and stability. |
| Human Oversight | Keep reviewers in the loop; instrument metrics of trust. | Controls instability and builds learning. |
| Developer UX | Embrace vibe-coding principles — intent-driven orchestration, mode switching. | Reduces cognitive load, enhances flow. |
| Value Measurement | Use VSM metrics and developer experience surveys to measure impact. | Converts speed into sustainable value. |
Final Takeaway
AI coding agents aren’t replacing developers; they’re reshaping how developers work.
The modern workflow is a blend of agentic loops that are best leveraged by internal quality platforms, grounded in the seven DORA capabilities that make AI effective in real teams.
The teams winning with AI aren’t chasing full autonomy; they’re mastering value streams, working in small chunks, creating feedback loops, and using mature internal development platforms with clear policies and agents grounded in a shared internal context.
References
- Designing Agentic Loops — Simon Willison
- Embracing the Parallel Coding Agent Lifestyle — Simon Willison
- Just Talk To It — Peter Steinberger
- The Vibe Coding Loop — IT Revolution
- The Key Vibe Coding Practices — IT Revolution
- Programming by Kicking Off Parallel Agents — The Pragmatic Engineer
- The New Calculus of AI-based Coding — Joe Magerramov
- 2025 DORA State of AI-Assisted Software Development Report
Acknowledgment: This article was researched and drafted in collaboration with ChatGPT (GPT-5), used as a co-writer and technical synthesis partner.