Three big cloud vendors announced earnings recently, with each accelerating their growth thanks to AI. Nvidia, for its part, became the first company to top $5 trillion in market cap, also thanks to AI. While there’s almost certainly some “irrational exuberance” baked into AI adoption, spurred by FOMO across global enterprises, the reality is that we’re nowhere near AI saturation.
Here’s why. AI has yet to touch mainstream applications at mainstream enterprises, and it won’t until it solves some critical (and boring) issues like security. As I’ve noted, “We may like the term ‘vibe coding,’ but smart developers are forcing the rigor of unit tests, traces, and health checks for agent plans, tools, and memory.” They’re focused on “boring” because it’s key to real, sustainable enterprise adoption.
Getting past buzzwords
As an industry, we sometimes like to pretend that buzzwords drive adoption. “Open always wins,” declares Vercel founder Guillermo Rauch. He’s obviously wrong, as even a cursory history of technology adoption shows. There are some obvious success stories for open software (Linux, the Apache HTTP Server, etc.), but there are far more examples of closed systems winning. That’s not to say one is better than the other, but simply to point out that casual indifference to how enterprise adoption actually works can blind us to the hard work necessary to drive it.
The same is true of AI in the enterprise. You’ll hear folks like Abacus AI CEO Bindu Reddy slagging enterprise AI adoption, faulting air-quoted “security” concerns (as if they’re not real) and “AI committees” that are “stuck in analysis paralysis.” Sure. But folks with experience in the enterprise realize that, as Spring framework creator Rod Johnson put it, “Startups can risk building houses of straw. Banks can’t.” Yes, there’s enterprise bureaucracy, he acknowledges, but that’s partly because “security is a real thing,” not to mention privacy and regulation.
Smaller companies can pretend such things aren’t important, but that’s why they get stuck in early-stage proofs of concept and rarely hit mainstream production deployments.
Enthusiasm meets governance
Wharton’s 2025 AI Adoption Report is a good antidote to the “just go fast” mantra. The study—based on 800 enterprise decision-makers—found that “at least eight out of 10” use generative AI regularly today, up from “less than four out of 10” in 2023. Wow, right? Maybe, but fast adoption isn’t the same as safe deployment. The same report shows adoption leadership consolidating in the C-suite (60% of companies in the survey have a chief AI officer), with policies emphasizing data privacy, ethical use, and human oversight—the unsexy guardrails you need before you plug AI into real workflows.
Importantly, Wharton also highlights that “as genAI becomes everyday work, the constraint shifts from tools to people,” and that training, trust, and change management become decisive. That squares with what I argued recently: AI’s biggest supply-chain shortage isn’t GPUs; it’s people who know how to wield AI safely inside the business.
If you want a case study from a previous wave of “disruptive” tech, look no further than Kubernetes (appropriate given KubeCon is this week). Kubernetes didn’t become mainstream because it was cool. It became an enterprise standard when managed offerings normalized security and policy (and therefore governance), making it easier to operate in regulated environments. The Cloud Native Computing Foundation’s 2023/2024 surveys repeatedly show that applying policies consistently across cost, reliability, and security is a top concern. Again, boring governance is the path to real adoption.
How fast can we get to governed data?
Here’s how I say it in my day job running developer relations at Oracle: Although developers have long privileged convenience over most other considerations, AI starts to shift selection criteria from “spin up fast” to “get to governed data fast.” That favors technology stacks where your security controls, lineage, masking, and auditing already live next to your data. Spinning up a shiny model endpoint is trivial; connecting it safely to customer records laden with personally identifiable information, payment histories, and invoices is not. What does this mean?
- Data proximity beats tool novelty. Moving copies of sensitive data into new systems multiplies both risk and cost. Retrieval-augmented generation (RAG) that keeps data in-place, where encryption, role-based access controls (RBAC), and masking policies already apply, will beat RAG that shuttles CSVs to an unfamiliar vector store, no matter how “developer-friendly” it seems. (The first sketch after this list shows the pattern.)
- Policy reuse is the killer feature. If your platform lets you reuse existing row/column-level policies, data loss prevention rules, and data-residency controls for prompts, embeddings, and tool use—without writing glue code—that offers enormous leverage. Wharton’s report shows that enterprises are explicitly codifying these guardrails as they scale.
- Human oversight requires observable AI. You can’t govern what you can’t see. Evaluation harnesses, prompt/version lineage, and structured logging of tool calls are now table stakes. That’s why teams are pushing “unit tests for prompts” and trace-level observability for agents. It’s boring but, again, it’s essential. (The second sketch after this list shows what that can look like.)
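To make the in-place idea concrete, here’s a minimal Python sketch. Everything in it is illustrative: `Document`, `mask_pii`, and the role names are hypothetical stand-ins for whatever row-level access and masking policies your data platform already enforces, and the token-overlap `score` function stands in for real vector similarity.

```python
# Sketch: retrieval that enforces existing access and masking policies
# in place, instead of exporting raw rows to an external vector store.
# All names here are illustrative, not a real product's API.
from dataclasses import dataclass
import re

@dataclass
class Document:
    text: str
    allowed_roles: set   # reuses an existing row-level access policy
    contains_pii: bool   # reuses an existing masking classification

CORPUS = [
    Document("Invoice 4417: ACME Corp, net-30, contact jane@acme.example",
             {"finance", "admin"}, True),
    Document("Runbook: rotate API keys quarterly; alert on failed logins",
             {"engineering", "admin"}, False),
]

def mask_pii(text: str) -> str:
    """Apply the same masking rule the warehouse already enforces."""
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[MASKED_EMAIL]", text)

def score(query: str, doc: Document) -> int:
    """Toy relevance: shared-token count, standing in for vector similarity."""
    return len(set(query.lower().split()) & set(doc.text.lower().split()))

def retrieve(query: str, role: str, k: int = 3) -> list:
    """Rank only rows this role may read; mask before anything leaves."""
    visible = [d for d in CORPUS if role in d.allowed_roles]
    ranked = sorted(((score(query, d), d) for d in visible),
                    key=lambda pair: pair[0], reverse=True)
    top = [d for s, d in ranked if s > 0][:k]
    return [mask_pii(d.text) if d.contains_pii else d.text for d in top]

if __name__ == "__main__":
    # Engineering can't see finance rows, so nothing comes back at all.
    print(retrieve("invoice contact for ACME", role="engineering"))  # []
    # Finance sees the row, but the email is masked per existing policy.
    print(retrieve("invoice contact for ACME", role="finance"))
```

The design point is that the policy filter runs before ranking, so a row the caller can’t read never enters the model’s context in the first place, and masking is applied on the way out rather than reimplemented in a new system.

And a similarly hedged sketch of the observability side: one structured JSON log line per tool call, plus a unit-test-style assertion on model output. `traced_tool` and `check_no_pii` are made-up names, and the “model” is a deterministic stub; the pattern, not any particular framework, is the point.

```python
# Sketch: trace-level logging for agent tool calls, plus a "unit test
# for a prompt." Names and log shape are illustrative assumptions.
import json, logging, re, time, uuid
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(message)s")
trace_log = logging.getLogger("agent.trace")

def traced_tool(fn):
    """Emit one structured JSON log line per tool call: who, what, how long."""
    @wraps(fn)
    def wrapper(*args, trace_id: str, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        trace_log.info(json.dumps({
            "trace_id": trace_id,
            "tool": fn.__name__,
            "args": list(args),
            "latency_ms": round((time.perf_counter() - start) * 1000, 2),
        }))
        return result
    return wrapper

@traced_tool
def summarize_invoice(text: str) -> str:
    # Stand-in for a real model call; deterministic so the test is stable.
    return f"Summary: {text[:40]}"

def check_no_pii(output: str) -> None:
    """Unit test for a prompt: fail the build if output leaks an email."""
    assert not re.search(r"\b[\w.+-]+@[\w-]+\.\w+\b", output), "PII leaked"

if __name__ == "__main__":
    tid = str(uuid.uuid4())
    out = summarize_invoice("Invoice 4417, ACME Corp, [MASKED_EMAIL]",
                            trace_id=tid)
    check_no_pii(out)  # passes: masked input stays masked in the output
    print("trace logged under", tid)
```

Run in CI, checks like these turn “human oversight” from a policy document into a failing build, which is exactly the kind of boring that gets AI past the review committee.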
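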
That may sound like a vote for legacy technology stacks, but it’s really a vote for integrated stacks. The shiniest new technology rarely wins unless it can inherit the boring controls enterprises already trust. This is the paradox of enterprise innovation.
Why ‘sexy’ loses to ‘secure’
Enterprise history keeps teaching the same lesson. When innovation collides with compliance, compliance wins—and that’s healthy. The goal isn’t to slow innovation; it’s to sustain it. Kubernetes only won once it got the guardrails. Public cloud only exploded after virtual private clouds, identity and access management, and key management services matured. Generative AI is repeating the pattern. Once security and other enterprise concerns are part of the default AI stack, adoption will move from developer excitement to earnings acceleration within the enterprise.
The headline across tech earnings calls is “AI, AI, AI.” The headline inside enterprise backlogs is “governance.” These aren’t really in conflict, except on X.
That’s why the most important performance optimization for AI in the enterprise isn’t a faster kernel or a slightly better benchmark. It’s a shorter path from idea to governed data. As I said earlier, AI is shifting selection criteria from “spin up fast” to “get to governed data fast.” The winners won’t be the stacks that look the coolest on day one. They’ll be the ones that make the boring stuff—security, privacy, compliance, observability—nearly invisible so developers can get back to building.