Last month, I had the privilege of attending Global NaaS Event (GNE) 2025 in Arlington, Texas. Over five days, more than 500 leaders across enterprise IT, service providers, technology vendors, cloud and data center operators, cybersecurity, systems integrators, media, and analysts came together to explore one theme: how Network‑as‑a‑Service (NaaS) is evolving to power AI at global scale. Here are some of the most impactful trends discussed:
**The network has become the AI bottleneck**
Across sessions and hallway conversations, a clear consensus emerged: networking, not compute, is increasingly the decisive constraint on AI growth. As agentic AI moves from batch inference to always‑on, interactive experiences, deterministic performance and programmable connectivity determine whether organizations will realize ROI on their AI infrastructure investments. In other words, you can’t scale AI without rethinking how you scale the network.
The industry should map AI workloads to network intents: translate model and application needs (latency ceilings, jitter tolerances, throughput windows, data sovereignty) into explicit network policies from day one.
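As a concrete illustration of that mapping, here is a minimal sketch in Python. The profile fields, intent schema, and threshold values are all hypothetical, chosen only to show how workload requirements could become declarative network policy:

```python
# Hypothetical sketch: translating AI workload requirements into an
# explicit, declarative network intent. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    name: str
    max_latency_ms: float       # latency ceiling
    max_jitter_ms: float        # jitter tolerance
    min_throughput_gbps: float  # throughput window
    data_region: str            # data sovereignty constraint

def to_network_intent(profile: WorkloadProfile) -> dict:
    """Translate an AI workload profile into a declarative network intent."""
    return {
        "intent": f"connect-{profile.name}",
        "sla": {
            "latency_ms": {"max": profile.max_latency_ms},
            "jitter_ms": {"max": profile.max_jitter_ms},
            "throughput_gbps": {"min": profile.min_throughput_gbps},
        },
        "constraints": {"data_sovereignty": profile.data_region},
    }

# Example: an always-on agentic inference service pinned to EU data
inference = WorkloadProfile("agentic-inference", 10.0, 2.0, 100.0, "EU")
intent = to_network_intent(inference)
```

The point is the shape of the translation, not the specific numbers: once requirements are expressed this way, an orchestrator can enforce them automatically rather than relying on tickets and tribal knowledge.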
**From AIOps to intent‑driven, self‑healing networks**
Operators showcased tangible progress in AIOps – prediction, correlation, diagnostics and steps toward intent‑driven operations, where desired outcomes (latency, jitter, throughput, security posture) drive automated changes end‑to‑end. The direction is unmistakable: observability and automation must span the full service lifecycle of ordering, activation, assurance, modification, and trouble resolution to sustain the velocity of AI demands.
The industry needs to invest in AIOps and observability: treat prediction, correlation, and self‑healing as core reliability features, not optional add‑ons.
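The detect‑correlate‑remediate loop at the heart of self‑healing can be sketched very simply. This is an illustrative toy, not a production system; the telemetry values, link name, and "reroute" action are invented for the example:

```python
# Illustrative self-healing loop: detect an anomaly in link telemetry,
# then trigger an automated remediation instead of waiting on a human.
from statistics import mean, stdev

def detect_anomaly(samples, threshold_sigma=3.0):
    """Flag the latest sample if it deviates strongly from the baseline."""
    baseline, latest = samples[:-1], samples[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and abs(latest - mu) > threshold_sigma * sigma

def remediate(link_id):
    """Placeholder for an automated change, e.g. rerouting traffic."""
    return {"action": "reroute", "link": link_id}

# Hypothetical latency telemetry (ms) ending in a sudden spike
latency_ms = [5.1, 5.0, 5.2, 4.9, 5.1, 5.0, 24.7]
event = remediate("link-42") if detect_anomaly(latency_ms) else None
```

Real AIOps platforms replace the statistics with trained models and the placeholder with orchestrated changes, but the closed loop — observe, decide, act — is the same pattern.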
**Building programmable, deterministic AI fabrics**
For AI clusters, edges, and clouds to coordinate at machine speed, the fabric itself must be programmable and deterministic. Leaders emphasized Carrier Ethernet with strict performance guarantees and 400G–1.6T wavelengths to create predictable, on‑demand connectivity across domains. This isn’t about raw bandwidth alone; it’s about consistent behavior under load and the ability to shape traffic with surgical precision.
Designing for determinism should be the way forward for the industry: favor programmable, high‑capacity Carrier Ethernet and wavelengths with verifiable performance and consistency at AI speeds.
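"Verifiable performance" implies checking measured path behavior against the requested guarantees before a workload is admitted. A minimal admission-check sketch, with illustrative figures rather than any vendor's specifications:

```python
# Hedged sketch: admit a connection onto a deterministic fabric only when
# the candidate path's measured behavior satisfies every guarantee.

def meets_sla(measured: dict, sla: dict) -> bool:
    """Every metric must be within bound: latency and jitter at or below
    the ceiling, capacity at or above the floor."""
    return (measured["latency_ms"] <= sla["latency_ms"]
            and measured["jitter_ms"] <= sla["jitter_ms"]
            and measured["capacity_gbps"] >= sla["capacity_gbps"])

# A hypothetical 400G wavelength path vs. an AI interconnect SLA
path = {"latency_ms": 3.8, "jitter_ms": 0.4, "capacity_gbps": 400}
sla  = {"latency_ms": 5.0, "jitter_ms": 1.0, "capacity_gbps": 400}
admitted = meets_sla(path, sla)
```

Determinism comes from treating the check as a hard gate: a path that cannot prove its guarantees is never offered to the workload, rather than degrading silently under load.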
**Interoperability: LSO APIs and standardized payloads**
The ecosystem is converging on Lifecycle Service Orchestration (LSO) APIs, standardized payloads, and common processes to make multi‑provider automation practical. Standardization turns one‑off integrations into repeatable, scalable interconnections, accelerating time‑to‑value for enterprises and simplifying how providers coordinate complex services across regions and partners.
Automating the entire lifecycle is now a necessary step for the industry: implement standardized APIs and payloads to remove friction in ordering, activation, assurance, and change management.
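What a standardized payload buys you is repeatability: the same machine-readable order works with any conforming provider. The sketch below is illustrative only — the field names are hypothetical and do not reproduce the actual MEF LSO schemas:

```python
# Illustrative only: a simplified inter-provider service order in the
# spirit of standardized LSO ordering. Not the actual MEF payload schema.
import json

def build_order(product: str, site_a: str, site_z: str,
                bandwidth_gbps: int) -> str:
    """Serialize a repeatable, machine-readable service order."""
    order = {
        "orderType": "INSTALL",
        "product": product,
        "endpoints": [{"site": site_a}, {"site": site_z}],
        "bandwidth": {"value": bandwidth_gbps, "unit": "Gbps"},
    }
    return json.dumps(order, sort_keys=True)

payload = build_order("carrier-ethernet-epl", "DAL1", "AMS1", 100)
```

Because every party parses the same structure, a buyer can order from many providers — and a provider can accept orders from many buyers — without bespoke integration work per relationship.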
**MCP and agentic automation across domains**
A promising development is the Model Context Protocol (MCP), enabling AI agents to securely access tools, data, and network resources. Think of MCP as an integration layer that lets autonomous workflows span networking, security, and operations – opening opportunities for cross‑domain automation that previously required human handoffs and custom engineering.
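The core idea — agents discovering and invoking tools through one uniform interface — can be sketched in a few lines. This mimics the pattern only; it does not use the actual MCP SDK or wire protocol, and the tool name and its stubbed return value are invented:

```python
# Minimal sketch of the MCP pattern: tools are registered once, then any
# agent can discover and invoke them uniformly. Not the real MCP SDK.

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, description, fn):
        self._tools[name] = {"description": description, "fn": fn}

    def list_tools(self):
        return sorted(self._tools)

    def call(self, name, **kwargs):
        return self._tools[name]["fn"](**kwargs)

registry = ToolRegistry()
registry.register(
    "get_link_utilization",
    "Return current utilization (%) for a network link",
    lambda link_id: {"link": link_id, "utilization_pct": 72.5},  # stub data
)

# An agent discovers the available tools, then invokes one:
tools = registry.list_tools()
reading = registry.call("get_link_utilization", link_id="link-7")
```

The value is that networking, security, and operations tools all present the same contract, so an autonomous workflow can chain them without per-domain custom glue.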
**Federation and marketplace models gain momentum**
To serve AI workloads with global reach, federated architectures and marketplace models are taking shape. With pluggable, standardized payloads, providers can extend service footprints, advertise capabilities, and coordinate provisioning across participants without sacrificing assurance or control. The result – faster access to the right capacity in the right place, at the right performance level.
**Trust as a first‑class feature: certification and security**
AI infrastructure now demands trust signals beyond marketing claims. Certification, performance validation, and cybersecurity posture are becoming table stakes. Initiatives like Carrier Ethernet for AI certification aim to validate services against more stringent parameters – higher MTU, tighter performance metrics, efficiency measurements, and uptime requirements – so buyers can select with confidence. In parallel, Zero Trust, SASE, and quantum‑safe capabilities are rising priorities to protect dynamic, data‑intensive AI ecosystems end‑to‑end.
In summary, GNE 2025 reinforced a simple truth: AI’s next wave will be limited not by imagination, but by infrastructure. The winners will be the organizations that treat the network as a programmable, certified platform – one that anticipates AI traffic patterns, proves its performance, and automates collaboration across ecosystems. If you’re building for agentic AI, start with the network and make it your competitive advantage.
Want to learn more about Arelion’s comprehensive suite of connectivity products, unified under one name to deliver networking solutions for AI workloads? Check out AI Direct.
Niroop Sanjeev Kumar, Global Product Manager