(© Stock Dignity - Canva)
Organizations are building AI systems faster than they can understand them. Investment is rising, experimentation is widespread, but scaling remains stuck. The barrier isn’t algorithms or infrastructure – it’s operational visibility at the speed and complexity AI demands.
This is the challenge Dynatrace and Deloitte are targeting in their new UK partnership. Michael Allen, Worldwide Vice President of Strategic Partners, explains that historically, observability tools were designed to help IT teams keep systems running. In a conversation ahead of Dynatrace Perform, he argues that traditional monitoring approaches are no longer sufficient for modern digital environments:
Traditional historical monitoring approaches have failed to deliver the kind of visibility across a full-stack capability – not just infrastructure, but the application… being able to trace end-to-end through every device, browser, web tier, app tier and data tiers.
The challenge is structural, not purely technical: connecting operational data silos to business goals. Observability becomes less about dashboards and alerts, and more about creating a shared operational language across technology, business, and governance.
Something more fundamental underlies this change. Traditional enterprise systems were deterministic – predictable, rule-based, and relatively stable. AI systems are probabilistic – dynamic, adaptive, and increasingly autonomous.
Allen explains how this changes enterprise integration: where systems once operated on deterministic if-then-else logic, AI-to-AI connections now run on probabilistic inference. Without observability, there’s no oversight - no way to detect drift, assess reliability, or control costs. In this context, observability becomes infrastructure for governing autonomous behaviour.
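The contrast Allen draws can be sketched in a few lines of code. This is an illustrative sketch only – the function names and the order-routing scenario are invented, not drawn from any Dynatrace or Deloitte system – but it shows why probabilistic decisions need telemetry attached to them:

```python
# Deterministic integration: the same input always takes the same branch,
# so behaviour can be verified once, up front.
def route_order_deterministic(amount: float) -> str:
    if amount > 10_000:
        return "manual_review"
    return "auto_approve"

# Probabilistic integration: a model score drives the decision, so outcomes
# vary with the model and its inputs. Observability here means recording the
# score and threshold alongside each decision -- the raw material for later
# drift detection, reliability assessment, and cost attribution.
def route_order_probabilistic(amount: float, model_score: float,
                              threshold: float = 0.8) -> dict:
    decision = "auto_approve" if model_score >= threshold else "manual_review"
    return {"amount": amount, "score": model_score,
            "threshold": threshold, "decision": decision}

print(route_order_deterministic(500.0))        # a fixed rule: always the same answer
print(route_order_probabilistic(500.0, 0.91))  # the answer depends on the model's score
```

The deterministic function can be tested exhaustively; the probabilistic one can only be supervised continuously, which is the oversight gap Allen describes.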
Observability as operational oversight
Allen traces observability’s evolution from a support tool to something closer to an operational oversight layer:
In the past it was a technology for IT operations to keep the lights on. Today, partners like Deloitte are bringing that alive into an AI operations control plane.
This layer performs three core functions: visibility into system behaviour across complex environments, validation of performance and reliability, and intervention when systems drift or fail. Allen argues this requires platforms that correlate data automatically rather than relying on humans to connect silos manually.
You can’t put data from observability tools onto glass and have humans correlating across silos. You need a single platform that is constantly on, with AI built in.
While fully autonomous operations aren’t universal, self-healing systems are already viable in specific contexts. Allen suggests that if AI can reliably detect problems and identify root causes, resolution can be automated without human intervention - though he acknowledges most organizations aren’t quite there yet.
Ownership and the environment question
As observability becomes more central to digital operations, questions of ownership become more complex. In managed services models, organizations expect providers to ensure operational visibility. Meanwhile, many organizations prefer retaining platform control to maintain flexibility with future partners.
Enterprises are saying, ‘We want to own the platforms, the integrations and the automations, so we have the freedom to work with different partners in the future.’
This tension points to what Allen calls business observability, where telemetry is used not only to monitor systems but to understand customer journeys and operational bottlenecks. Business leaders increasingly run operations with real-time insight into user journeys, making observability a strategic asset rather than a technical tool.
He argues that observability, security, and operational resilience have become inseparable – "imperative to business reputation and even the ability to stay in business." This convergence reflects findings from Dynatrace’s State of Observability 2025 report, which shows organizations using AI to strengthen security operations, manage AI governance, and automate data workflows.
This requires continuous dynamic tracing: capturing every transaction in real time to enable retrospective security analysis, cost attribution for AI workloads, and accurate configuration management. Without this, Allen warns, outdated configuration data means "you fix the wrong infrastructure." Dynamic tracing keeps that operational map accurate even as cloud environments scale up and down to meet demand.
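The mechanics of end-to-end tracing can be sketched with standard-library tools. This is a toy illustration of the general technique – a shared trace ID propagated through web, app, and data tiers – not Dynatrace's implementation, and every name in it is invented:

```python
import contextvars
import time
import uuid

# One trace ID per transaction, propagated implicitly through the call chain.
trace_id_var = contextvars.ContextVar("trace_id")
spans = []  # in a real system spans would be exported, not held in memory


def traced(tier):
    """Record a span for each tier a transaction passes through."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            spans.append({"trace_id": trace_id_var.get(),
                          "tier": tier, "op": fn.__name__,
                          "ms": (time.perf_counter() - start) * 1000})
            return result
        return wrapper
    return decorator


@traced("data")
def query_db():
    return ["row"]


@traced("app")
def handle_request():
    return query_db()


@traced("web")
def serve():
    return handle_request()


trace_id_var.set(uuid.uuid4().hex)  # start of one transaction
serve()
# All three spans share one trace_id, so the transaction can be
# followed end-to-end from web tier to data tier.
```

Because every span carries the same trace ID, the operational map stays accurate per transaction rather than per host, which is what lets the analysis survive infrastructure that scales up and down underneath it.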
Deloitte’s role in the partnership extends beyond technology implementation. Dynatrace believes that the firm brings enterprise architecture expertise and change management capability to help organizations reskill talent, redesign processes, and navigate the organizational dimension of AI transformation – challenges that often prove more difficult than the technical ones.
My take
The Dynatrace–Deloitte collaboration could be dismissed as another vendor–consultancy alliance in an increasingly crowded AI market. But Allen’s comments suggest something more structural is happening beneath the surface. Enterprise AI is not struggling because organizations lack ambition or algorithms. It is struggling because digital systems are evolving faster than the governance models designed to manage them. Observability is therefore shifting from a monitoring function to operational infrastructure - the means by which organizations attempt to understand, govern, and ultimately trust increasingly autonomous systems. When software becomes probabilistic rather than deterministic, the challenge shifts from building systems to making their behaviour visible, accountable, and economically predictable.
There’s also a deeper tension around ownership. Organizations want control over platforms and data, while partners increasingly shape how AI systems are integrated, operated, and governed. Observability sits at the center of this dynamic – not just as a technical capability, but as a mechanism of oversight and a source of strategic influence over how AI actually functions in production.
What makes observability distinct from traditional monitoring is its scope. It’s being repositioned to span security, compliance, cost management, and business operations - becoming what Allen describes as a "control plane" for AI transformation. This makes observability a shared capability across functions that have historically operated in silos.
Whether this partnership succeeds matters less than whether the underlying model is viable. Organizations need operational visibility before they can trust autonomous systems - that thesis holds. But it assumes enterprises can reorganize around observability as shared infrastructure, breaking down decades-old functional silos. The track record suggests otherwise. Many organizations are investing in AI but struggle to execute the deeper transformations needed to make it work, let alone reimagine operations around AI governance. The constraint may not be tooling, governance frameworks, or consultancy expertise. It may be that enterprises fundamentally lack the organizational pliability AI demands.
I’ll be at Dynatrace Perform in Las Vegas this week, exploring how these dynamics are playing out in practice through customer use cases and executive perspectives on the operational realities of enterprise AI.