Connectivity has evolved through successive architectural envelopes, each dissolving silos and expanding how systems communicate, compute and coordinate. From circuit-switched telephony to packet-switched, to cloud-native, and now AI-native infrastructure, each stage has broadened the scope of networking.
In our earlier CACM article, “What Lessons Can We Learn from the Internet for AI/ML Evolution,” we outlined seven principles that shaped the Internet’s growth: simplicity, layering, openness, end-to-end design, resilience, incremental evolution, and neutral governance. In this Part II of our series, we turn to how those principles have guided the architectural progression from packets, which unified transport, to workloads, which enabled programmable orchestration, and now to agents, which embed intelligence into the fabric itself. Understanding this continuum is key to designing open, resilient, and trustworthy AI-native infrastructure.
1. Connectivity as an Evolving Substrate
For over five decades, communication networks have advanced through a sequence of architectural evolutions, each expanding what connectivity means:
- IP-native networking unified services on a common packet transport fabric.
- Cloud-native networking abstracted infrastructure into programmable workloads.
- AI-native networking embeds intelligence into the fabric through predictive, generative, agentic, semantic, and physical capabilities.
These architectures are cumulative rather than substitutive. Each builds upon the last: AI-native systems continue to rely on packet-switched substrates and cloud-native orchestration. Together, they expand the network’s role: from rigid transport systems, to programmable elastic platforms, and now toward an intelligent infrastructure that integrates all three.
2. IP-Native Networking: Circuits to Packets to Programmable Fabrics
Building on the continuum outlined above, the first architectural envelope, IP-native networking, established the foundation by standardizing syntax, defining how systems communicate across heterogeneous media. Cloud-native architectures later standardized orchestration, coordinating how computation, storage, and connectivity scale together. AI-native networks now seek to standardize semantics, enabling intelligent entities to share meaning and align intent. Each architectural generation added new capability while preserving what came before. This progression, from syntax to orchestration to semantics, marks the expanding cognitive reach of connectivity. It traces how networks have evolved from transporting data to mediating understanding.
The first architectural envelope represented a decisive shift from multiple service-specific circuit-switched networks to a converged IP network. Here, the packet became the basic unit of abstraction: a self-contained digital envelope carrying payload and addressing information that could traverse any medium and be reassembled anywhere.
- Circuits: In the era of TDM and circuit switching, voice, video, and data each ran on separate infrastructures: the PSTN for voice, coaxial cable for video, and the PSTN plus dedicated circuits for data. These networks were non-interoperable and hard to scale.
- Packets: Packet switching carried voice, video, and data over one IP network. Different types of communication could share the same transport layer, which opened the door to the Web, email, streaming media, and online commerce. That simple architectural idea, breaking information into packets and sending them independently, made global interoperability possible and gave people everywhere easier access to information.
- Programmable Fabrics: Traditional IP networks, though converged, were rigid. Network devices such as routers and switches were monolithic and configured manually through a command-line interface (CLI), one command at a time. Software-Defined Networking (SDN) changed this by separating the control and forwarding planes, with APIs enabling dynamic flow allocation, as implemented at Google in its B2 and B4 WANs and in 5G’s service-based packet core. The SDN movement also produced the P4 programming language for programming the forwarding plane itself.
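The control/forwarding separation at the heart of SDN can be sketched with a toy controller and switch. This is a minimal illustration, not any real controller API: the class names, the longest-prefix flow table, and the `push_policy` call are all invented for this example.

```python
# Toy SDN sketch: the control plane computes policy centrally and pushes flow
# rules through an API; the forwarding plane only matches packets against the
# rules it has been given. All names here are hypothetical illustrations.

class Switch:
    """Forwarding plane: matches destinations against installed flow rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # dst_prefix -> out_port

    def install_rule(self, dst_prefix, out_port):
        self.flow_table[dst_prefix] = out_port

    def forward(self, dst_addr):
        # Longest-prefix match over installed rules; None means "no rule"
        # (a real switch would punt such packets to the controller).
        best = None
        for prefix, port in self.flow_table.items():
            if dst_addr.startswith(prefix):
                if best is None or len(prefix) > len(best[0]):
                    best = (prefix, port)
        return best[1] if best else None

class Controller:
    """Control plane: holds global policy, programs every switch via an API."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_policy(self, dst_prefix, out_port):
        # One API call reprograms the fleet -- no per-box CLI sessions.
        for sw in self.switches:
            sw.install_rule(dst_prefix, out_port)

ctrl = Controller()
sw = Switch("edge-1")
ctrl.register(sw)
ctrl.push_policy("10.1.", out_port=2)    # steer 10.1.x.x traffic to port 2
ctrl.push_policy("10.1.5.", out_port=7)  # more specific override

print(sw.forward("10.1.5.9"))   # 7 (longest prefix wins)
print(sw.forward("10.1.2.3"))   # 2
```

The point of the sketch is the division of labor: policy lives in one place and is expressed through an API, while forwarding stays simple and fast.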
This envelope established IP as a universal service substrate: scalable, interoperable, and programmable. By making the packet the basic unit of abstraction, IP-native networking provided a logical foundation on which all subsequent architectural layers would build.
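The core idea of this envelope, breaking a message into self-contained packets that travel independently and reassemble at the destination, can be sketched in a few lines. This is a toy illustration; the field names are invented and do not correspond to any real protocol header.

```python
# Toy packetization: each packet is a self-contained envelope carrying payload
# plus addressing and sequencing metadata, so packets can take independent
# paths, arrive out of order, and still reassemble exactly. Field names are
# illustrative, not those of any real protocol.
import random

def packetize(message, dst, size=8):
    """Split a message into self-describing packets of at most `size` chars."""
    return [
        {"dst": dst, "seq": offset, "payload": message[offset:offset + size]}
        for offset in range(0, len(message), size)
    ]

def reassemble(packets):
    """Rebuild the original message by ordering packets on their sequence."""
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = "breaking information into packets made interoperability possible"
pkts = packetize(message, dst="198.51.100.7")
random.shuffle(pkts)  # the network may deliver packets in any order
print(reassemble(pkts) == message)  # True: message survives reordering
```

Because every packet carries its own addressing and ordering information, no path, medium, or delivery order needs to be agreed in advance, which is precisely what made convergence onto one transport fabric possible.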
3. Cloud-Native Networking: Workloads as Atomic Units of Infrastructure
The next architectural envelope expanded the abstraction from transporting data to orchestrating distributed computation. While IP networks focused on the reliable delivery of packets across heterogeneous systems, cloud-native systems managed how those packets were processed, stored, and coordinated across programmable infrastructure. Building on packet-switched foundations, this evolution extended networks from systems that carried traffic into platforms that host, scale, and orchestrate workloads across distributed environments, reshaping how networks were built and scaled.
In traditional networks, functions such as firewalls, NAT, and packet cores were delivered as tightly integrated hardware appliances. Scaling required deploying additional boxes or chassis, coupling growth to physical infrastructure. Cloud-native design dissolved these constraints. Virtualization and containers separated software from hardware, turning network functions into portable workloads such as Virtual Network Functions (VNFs) and Cloud-native Network Functions (CNFs), each with its own identity, network links, and control lifecycle.
As workloads became the basic unit of infrastructure, networks needed new ways to connect and scale them. Cloud-native design met that need through three linked advances that made infrastructure programmable and adaptive:
- Service networking: Kubernetes and the Container Network Interface (CNI) defined the rules for how workloads connect and communicate. Service meshes such as Istio and Envoy changed the way distributed applications communicate, making it possible to handle routing, load balancing, and observability across hundreds of microservices without embedding that logic in every component. In the telecom world, this same idea gave rise to CNFs, enabling operators to deploy the 5G core as a collection of lightweight, containerized services that scale independently.
- Unified orchestration: As workloads became the atomic unit of infrastructure, orchestration unified previously siloed compute, storage, and networking domains into one coordinated system. Orchestration frameworks such as Kubernetes made it possible to provision, scale, and configure resources across all three domains.
- Programmable elasticity: Cloud-native systems transformed the underlying infrastructure into a programmable service fabric. Systems that once required manual tuning could now expand or contract automatically. This principle shows up in 5G slicing platforms, microservice clusters, and edge inference engines, where functions spin up, relocate, or scale on demand to meet real-time network conditions.
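The orchestration and elasticity advances above rest on a declarative reconcile loop: operators state a desired state, and a controller repeatedly moves the actual state toward it. The sketch below is a heavy simplification; a real controller such as those in Kubernetes watches an API server rather than in-memory dictionaries, and the function names here are invented.

```python
# A sketch of the declarative reconcile loop behind orchestration platforms:
# compare desired state to observed state and emit the actions that close the
# gap. All names are illustrative; "amf" and "upf" stand in for 5G core
# functions deployed as CNFs.

def reconcile(desired, actual):
    """Compute the scaling actions needed to converge actual onto desired."""
    actions = []
    for function, want in desired.items():
        have = actual.get(function, 0)
        if have < want:
            actions.append((function, "scale_up", want - have))
        elif have > want:
            actions.append((function, "scale_down", have - want))
    return actions

def apply_actions(actions, actual):
    """Carry out the plan, mutating the observed replica counts."""
    for function, op, count in actions:
        delta = count if op == "scale_up" else -count
        actual[function] = actual.get(function, 0) + delta

desired = {"amf": 3, "upf": 5}   # declared intent
actual = {"amf": 3, "upf": 2}    # observed state

apply_actions(reconcile(desired, actual), actual)
print(actual)  # {'amf': 3, 'upf': 5} -- converged onto the declared state
```

The design choice worth noting is that the operator never issues imperative scaling commands; elasticity emerges from continuously re-running this comparison as load and intent change.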
Cloud-native principles didn’t replace networking; they enriched it, transforming static systems into programmable fabrics capable of hosting dynamic elastic workloads at scale.
4. AI-Native Networking: Agents as the Cognitive Fabric
The next architectural envelope builds upon the programmability and elasticity of cloud-native systems by embedding cognition into the fabric itself. While cloud-native networks automated the scaling and orchestration of workloads, they remain largely reactive, responding to congestion or thresholds after they occur. As applications evolve toward autonomous systems, real-time control, and distributed AI inference, this reactive model can no longer keep pace.
AI-native networking extends these foundations with distributed intelligence, enabling networks not only to execute intent but to understand and anticipate it. The abstraction now expands from workloads to intelligent (or agentic) workloads, autonomous entities capable of perceiving context, reasoning over intent, and coordinating with peers. Agents are not replacements for workloads, but their intelligent evolution: workloads that can sense, decide, and act in real time.
As agents become the operational unit, intelligence shifts from being an application overlay to an intrinsic property of the infrastructure itself. The network no longer just hosts intelligence; it participates in it. This evolution introduces a semantic control plane that augments the traditional data and management planes, allowing intent, meaning, and context to be exchanged natively within the fabric. Agents interact through this semantic layer to align goals and coordinate actions across domains.
Agents rely on capabilities such as:
- Semantic routing, where forwarding decisions reflect intent and contextual meaning.
- Intent propagation, allowing high-level goals to cascade across cooperating agents.
- Context synchronization, maintaining a shared situational view among distributed entities.
- Trust and provenance, verifying authenticity and integrity of exchanged information.
- Reflective telemetry, linking observation to reasoning by exposing machine-interpretable state.
Together, these mechanisms transform networks from programmable systems into perceptive ones: active, context-aware fabrics that can sense, reason, and coordinate.
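As a concrete illustration, the first of these capabilities, semantic routing, can be sketched as matching a message's declared intent against agents' advertised capabilities rather than forwarding on a destination address. This is a deliberate toy: keyword overlap stands in for the far richer semantic matching a real control plane would perform, and every agent name below is hypothetical.

```python
# Toy semantic routing: deliver a message to the agent whose advertised
# capabilities best cover the message's intent. Set intersection is a crude
# stand-in for semantic matching; all agent names are invented.

agents = {
    "ran-optimizer":  {"radio", "interference", "handover"},
    "energy-manager": {"power", "energy", "cooling"},
    "core-scaler":    {"packet-core", "scaling", "slicing"},
}

def semantic_route(intent_terms):
    """Return the agent with the largest capability/intent overlap, if any."""
    best, best_score = None, 0
    for agent, capabilities in agents.items():
        score = len(capabilities & intent_terms)
        if score > best_score:
            best, best_score = agent, score
    return best  # None if no agent understands the intent

print(semantic_route({"reduce", "energy", "use"}))    # energy-manager
print(semantic_route({"mitigate", "interference"}))   # ran-optimizer
print(semantic_route({"unknown", "goal"}))            # None
```

Even in this toy form, the shift is visible: the routing decision depends on what the message means, not on where a header says it should go.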
On this foundation, six core functions define how agentic workloads, the intelligent descendants of cloud workloads, operate. Each extends a capability introduced in the cloud-native era with a new dimension of cognition: prediction, generation, reflection, and action.
- Predictive intelligence: Models forecast congestion, faults, or energy surges so the system can adapt before impact.
- Generative automation: AI tools generate deployment templates, workflows, and recovery scripts automatically.
- Agents: Lightweight processes interpret intent and apply policies within a single domain, such as the radio network or the packet core, closing the loop between data and action.
- Agentic AI: Specialized agents cooperate across radio, transport, and edge domains, negotiating outcomes in real time.
- Reflective modeling: Continuous simulation validates AI-driven changes before rollout, reducing operational risk.
- Physical AI: Intelligent endpoints such as vehicles, drones, and robots participate directly in the control loop, fusing communication, compute, and sensing domains to extend cognition into the physical world.
Through these extensions, AI-native networks enable closed-loop autonomy: sensing, predicting, reasoning, and acting across layers and domains. The network becomes a distributed cognitive organism, learning from experience and adapting its behavior continuously across the digital and physical continuum.
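The sense-predict-act loop described above can be sketched under heavy simplification. The naive trend extrapolation below stands in for the learned predictive models an AI-native network would actually use, and the utilization threshold is invented for the example.

```python
# A minimal closed control loop: sense a utilization metric, predict its
# near-term value, and act before a threshold is crossed. Linear
# extrapolation is a stand-in for a learned forecaster; the threshold
# and action name are hypothetical.
from collections import deque

class ClosedLoop:
    def __init__(self, threshold, window=3):
        self.threshold = threshold
        self.history = deque(maxlen=window)  # recent observations
        self.actions = []                    # pre-emptive actions taken

    def sense(self, utilization):
        self.history.append(utilization)

    def predict(self):
        # Extrapolate one step ahead from the most recent trend.
        h = list(self.history)
        if len(h) < 2:
            return h[-1] if h else 0.0
        return h[-1] + (h[-1] - h[-2])

    def act(self):
        # Scale out while actual utilization is still below the threshold.
        if self.predict() > self.threshold:
            self.actions.append("scale_out")

loop = ClosedLoop(threshold=0.8)
for utilization in [0.4, 0.55, 0.7]:  # rising load, still under threshold
    loop.sense(utilization)
    loop.act()

print(loop.actions)  # the loop acted before 0.8 was actually crossed
```

The contrast with the reactive, cloud-native model is the point: the threshold-crossing is anticipated and handled while the observed metric is still healthy, rather than after congestion occurs.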
5. Architectural Parallels: From Monolithic Systems to Distributed Fabrics
Across generations of technology, a familiar architectural rhythm has shaped progress. Systems often begin as centralized and monolithic, tightly coupled designs optimized for control and predictability. As they mature, those boundaries dissolve. What once operated as a single unit becomes a distributed, composable fabric, capable of scaling, adapting, and collaborating. This pattern has repeated across three domains that define our digital era: networking, computing, and now, artificial intelligence. Each has evolved by distributing capability and trust, expanding connectivity from the exchange of information to the coordination of collective action.
Viewed together, these shifts reveal a common principle: progress in connectivity is driven by the distribution of intelligence. In networking, intelligence first spread through packets, which made communication resilient by breaking information into addressable, independent units. In computing, the same idea reappeared as workloads, which modularized software into orchestrated components that could run anywhere. Today, in AI, intelligence itself is being distributed through agents: autonomous systems that perceive context, reason about intent, and act collaboratively within a shared environment.
Each phase pushed the boundary of what could be connected, controlled, and coordinated. The Internet’s layered architecture offers a clear illustration of this shift. In its early years, computing was built around mainframes, large, self-contained systems with limited means of communication. Data exchange was manual, interfaces proprietary, and interoperability rare. The introduction of standardized protocols, from physical transmission to TCP/IP and the Web, transformed that landscape. Once computers could speak a common language, they no longer needed to be managed centrally. Networking became distributed by design, allowing diverse systems to cooperate across distance and vendor boundaries. That syntactic unification, bits into packets, packets into flows, created a foundation for global scale.
As the Internet matured, attention turned from connecting systems to coordinating how computation itself occurred. The rise of cloud-native computing extended the same logic of distribution from networking to computation. Large, monolithic applications gave way to microservices: smaller workloads encapsulated in containers, orchestrated through platforms like Kubernetes. Instead of scaling by adding boxes, systems scaled by adding workloads. Network, compute, and storage functions that once coexisted on a single machine became distributed workloads spread across data centers and edge nodes. APIs connected them, orchestration managed them, and automation kept them elastic.
In this evolution, infrastructure itself became programmable, a distributed operating system for the planet. These programmable fabrics bridged the gap between connectivity and computation, allowing workloads to move fluidly across heterogeneous resources. In doing so, they prepared the ground for the next transition: from orchestrating workloads to orchestrating intelligence itself.
Artificial intelligence now represents the next turn in this sequence. Early AI systems resembled the mainframes of computing’s past: large, self-contained models performing entire tasks in isolation, whether generating text, classifying images, or translating language. The next phase introduced tool-using models, capable of invoking APIs, retrieving data, and interacting with external systems. That evolution set the stage for AI agents, which move beyond isolated intelligence toward collective distributed cognition and control. Instead of one model serving many tasks, many specialized agents now collaborate toward shared goals: retrieving, reasoning, planning, and acting as a networked system. Each agent contributes localized intelligence and decision authority, allowing control to emerge collectively rather than hierarchically. In this way, intelligence became not only distributed in computation but decentralized in command, extending autonomy from the data center to the edge.
Where packets distributed communication and workloads distributed computation, agents are distributing control and cognition across the network itself. Together, these transformations trace a continuous arc: from distributed networking to distributed computation to distributed cognition, expanding connectivity from a medium of exchange to a medium of coordinated understanding. What began as a network of machines has become a network of minds: systems that not only share data, but also align purpose.
6. Conclusion and Reflections
Across generations, networking, computing, and control (AI) have evolved through a common architectural rhythm: from centralized, monolithic systems to distributed, composable fabrics that expand scale, flexibility, and collaboration. As we saw at the outset, each architectural envelope has expanded what networking means: first for communication, then for computation, and now for cognition. Networking became distributed through packets, computing through workloads, and control through intelligent agents.
Together, they trace an unbroken arc of progress: from distributed networking to distributed computation, and now to distributed intelligence and control, showing how connectivity has evolved from simple exchange to coordinated action and collective cognition.
Each architectural envelope introduced new capabilities while retaining and enriching its predecessors. Packets unified transport, dissolving boundaries between heterogeneous networks. Workloads unified orchestration, abstracting infrastructure into programmable, adaptive systems. Agents now unify intelligence, embedding cognition and coordination directly into the fabric. Through this additive evolution, communication, computation, and cognition are converging into a single, intelligent substrate that can sense, reason, and act as one.
The design principles that once scaled the Internet (simplicity, openness, layering, and resilience) must now guide its cognitive evolution. Scalability in cognition, like scalability in communication, depends on shared semantics, modular design, open interfaces, and transparent trust frameworks that can evolve without disruption. As networks embed intelligence, these principles must extend into the semantic domain, encompassing explainability, ethics, and accountability, ensuring that transparency grows alongside capability.
The central challenge is to embed intelligence without eroding the qualities that made the Internet resilient: modular design, incremental evolution, and shared stewardship of a common substrate. Progress endures when silos dissolve, abstractions rise, and shared understanding binds an expanding web of intelligent systems.
Looking ahead, the next part of this series turns from architecture to engineering. Part III, “Engineering the Continuum,” will explore how these design principles translate into operational constructs such as semantic routing, reflective telemetry, and distributed cognition, making intelligence not an overlay on the network but a native property of its design.
Mallik Tatipamula is Chief Technology Officer at Ericsson Silicon Valley. His career spans Nortel, Motorola, Cisco, Juniper, F5 Networks, and Ericsson. A Fellow of the Royal Society (FRS) and four other national academies, he is passionate about mentoring future engineers and advancing digital inclusion worldwide.
Vinton G. Cerf is vice president and Chief Internet Evangelist at Google. He served as ACM President from 2012 to 2014.