How the U.S. National Science Foundation Enabled Software-Defined Networking
The investments NSF made in SDN over the past two decades helped revolutionize network design and operation across public and private sectors.
Posted Oct 24 2025

The Internet underlies much of modern life, connecting billions of users via access networks across wide-area backbones to countless services running in datacenters. The commercial Internet grew quickly in the 1990s and early 2000s because it was relatively easy for network owners to connect interoperable equipment, such as routers, without relying on a central administrative authority. However, a small number of router vendors controlled both the hardware and the software on these devices, leaving network owners with limited control over how their networks behave. Adding new network capabilities required support from these vendors and a multi-year standardization process to ensure interoperability across vendors. The result was bloated router software with tens of millions of lines of code, networks that were remarkably difficult to manage, and a frustratingly slow pace of innovation.
All of this changed with software-defined networking (SDN), where network owners took control over how their networks behaved. The key ideas were simple. First, network devices should offer a common open interface directly to their packet-forwarding logic. This interface allows separate control software to install fine-grained rules that govern how a network device handles different kinds of packets: which packets to drop, where to forward the remaining packets, how to modify the packet headers, and so on. Second, a network should have logically centralized control, where the control software has network-wide visibility and direct control across the distributed collection of network devices. Rather than running on the network devices themselves, the software can run on a separate set of computers that monitor and control the devices of a single network in real time.
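To make these two ideas concrete, here is a minimal sketch, in Python, of a logically centralized controller installing match-action rules into switches through a simple open interface. The classes and field names are hypothetical illustrations of the concepts above, not the OpenFlow protocol or any real controller's API.

```python
# A hypothetical, minimal model of the two SDN ideas: (1) switches expose a
# simple interface for installing match-action rules, and (2) a logically
# centralized controller with a network-wide view decides which rules to install.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Rule:
    match: dict                     # header fields to match, e.g. {"dst_ip": "10.0.0.7"}
    action: str                     # "forward" or "drop"
    out_port: Optional[int] = None  # used when action == "forward"

@dataclass
class Switch:
    name: str
    table: list = field(default_factory=list)   # the switch's match-action table

    def install(self, rule: Rule):
        # The "open interface" to the packet-forwarding logic.
        self.table.append(rule)

class Controller:
    """Logically centralized control: runs off-switch, sees the whole network."""
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def steer(self, dst_ip, path):
        # path: list of (switch_name, out_port) hops computed from the global view.
        for sw_name, port in path:
            self.switches[sw_name].install(
                Rule(match={"dst_ip": dst_ip}, action="forward", out_port=port))

# Example: forward traffic for 10.0.0.7 along s1 -> s2, and drop it at s3.
s1, s2, s3 = Switch("s1"), Switch("s2"), Switch("s3")
ctrl = Controller([s1, s2, s3])
ctrl.steer("10.0.0.7", [("s1", 2), ("s2", 1)])
s3.install(Rule(match={"dst_ip": "10.0.0.7"}, action="drop"))
```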
The first commercial deployments of SDN started around 2008, and its success can be traced back to two intertwined developments that reinforced each other. The first was academic research funded mostly by the U.S. National Science Foundation (NSF). The second was cloud companies starting to build enormous datacenters, which required a new kind of network to interconnect thousands of racks of servers. In a virtuous cycle, the adoption of SDN by the hyperscalers drove further academic research, which in turn produced important new innovations and several successful start-up companies.
As a result, SDN revolutionized how networks are built and operated today: the public Internet, private networks in commercial companies, university and government networks, and all the way through to the cellular networks that connect our smartphones.
Early NSF-funded SDN research. In 2001, a National Academies report, Looking Over the Fence at Networks: A Neighbor's View of Networking Research,30 pointed to the perils of Internet ossification: an inability of networks to change to satisfy new needs. The report highlighted three dimensions of ossification: intellectual (backward compatibility limits creative ideas), infrastructure (it is hard to deploy new ideas into the infrastructure), and system (rigid architecture led to fragile, shoe-horned solutions). In an unprecedented move, the NSF set out to address Internet ossification by investing heavily over the next decade. These investments laid the groundwork for SDN. We describe them here through the lens of the support we received in our own research groups. Importantly, these and other government-funded research programs fostered a community of researchers that together paved the way for commercial adoption of SDN in the years that followed.
100×100 project: In 2003, the NSF launched the 100×100 project as part of its Information Technology Research program. The goal of the 100×100 project was to create communication architectures that could provide 100Mb/s networking for all 100 million American homes. The project brought together researchers from Carnegie Mellon, Stanford, Berkeley, and AT&T. One key aspect of the 100×100 project was the design of better ways to manage large networks. This research led to the 4D architecture for logically centralized network control of a distributed data plane21 (which itself built upon and generalized the routing control platform work at AT&T15), Ethane (a system for logically centralized control of access control in enterprise networks),11 and OpenFlow (an open interface for installing match-action rules in network switches),28 as well as the creation of the first open source network controller, NOX.22
Global Environment for Network Innovation (GENI): NSF and researchers wanted to try out new Internet architectures on a nationwide, or global, platform. Computer virtualization was widely used to share a common physical infrastructure, so could we do the same for a network? In 2005, "Overcoming the Internet Impasse through Virtualization" proposed an approach.5 The next year, NSF created the GENI program, with the goal of creating a shared, programmable national infrastructure for researchers to experiment with alternative Internet architectures at scale. GENI funded early OpenFlow deployments on college campuses, sliced by FlowVisor35 to allow multiple experimental networks to run alongside each other on the same production network, each managed by its own experimental controller. This, in turn, led to a proliferation of new open source controllers (Beacon, POX, and Floodlight). GENI also led to a programmable virtualized backbone network platform,6 and an experimental OpenFlow backbone network in Internet2 connecting multiple universities. This led to OpenFlow-enabled switches from Cisco, HP, and NEC. GENI funded the purchase of OpenFlow whitebox switches from ODM manufacturers and the open source software to manage them. NSF funded the NetFPGA project, which enabled experimental OpenFlow switches in Internet2. NSF brought together a community of researchers driven by much more than the desire to create experimental test beds; many researchers came to realize that programmability and virtualization were, in fact, key capabilities needed for future networks.5,16
Future Internet Design (FIND): In 2007, NSF started the FIND program to support new Internet architectures that could be prototyped and evaluated on the GENI test bed. The FIND program and its successor, Future Internet Architecture (FIA) in 2010, expanded the community working on clean-slate network architectures and fostered alternative designs. The resulting ideas were bold and exciting, including better support for mobility, content delivery, user privacy, secure cloud computing, and more. NSF's FIND and FIA programs fostered many clean-slate network designs with prototypes and real-world evaluation, many of which leveraged SDN and improved its foundations. As momentum for clean-slate networking research grew in the U.S., the rest of the world followed suit with programs such as the EU Future Internet Research and Experimentation (FIRE) program.
Programmable Open Mobile Internet (POMI) Expedition: In 2008, the NSF POMI Expedition at Stanford expanded funding for SDN, including its use in mobile networks. POMI funded the early development of ONOS, an open source distributed controller,8 and the widely used Mininet network emulator for teaching SDN and for testing ideas before deploying them in real networks. POMI also funded the first explorations of programmable forwarding planes, setting the stage for the first fully programmable switch chip10 and the widely used P4 language.9
SDN adoption by cloud hyperscalers. In parallel with the early academic research on SDN, large technology companies such as Microsoft, Google, Amazon, and Facebook began building large datacenters full of servers that hosted these companies' popular Internet services and, increasingly, the services of enterprise customers. Datacenter owners grew frustrated with the cost and complexity of the commercially available networking equipment; a typical datacenter switch cost more than $20,000, and a hyperscaler needed about 10,000 switches per site. They decided they could build their own switch box for about $2,000 using off-the-shelf switching chips from companies such as Broadcom and Marvell, and then use their own armies of software developers to create optimized, tailored software using modern software practices. Reducing cost was good, but what they really wanted was control, and SDN gave them a quick path to get it.
The hyperscalers used SDN to realize two especially important use cases. First, within a single datacenter, cloud providers wanted to virtualize their networks to provide a separate virtual network for each enterprise customer (or "tenant") with its own IP address space and networking policies. The start-up company Nicira, which emerged from the NSF-funded Ethane project, developed the Network Virtualization Platform (NVP)26 to meet this need. Nicira was later acquired by VMware, and NVP became NSX. Nicira also created Open vSwitch (OVS),33 an open source virtual switch for Linux with an OpenFlow interface. OVS grew rapidly and became the key to enabling network virtualization in datacenters around the world. Second, the hyperscalers wanted to control traffic flows across their new private wide-area networks and between their datacenters. Google adopted SDN to control how traffic is routed in its B4 backbone,23,39 using OpenFlow switches controlled by ONIX, the first distributed controller platform.27 When Google first described B4 at the Open Networking Summit in 2012, it sparked a global surge in research and commercialization of SDN. There were so many papers at ACM SIGCOMM that a separate conference, Hot Topics in Software-Defined Networking (HotSDN, later SOSR), was formed.
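To illustrate the first use case, the sketch below shows how a virtual switch can give each tenant its own address space by keying its forwarding lookups on a tenant identifier, so two tenants can reuse identical private IP addresses without interfering. This is a conceptual sketch only, with hypothetical names; it is not the design of NVP, NSX, or Open vSwitch.

```python
# Hypothetical sketch of multi-tenant network virtualization: forwarding is keyed
# on (tenant_id, dst_ip), so tenants can reuse overlapping private address space.
class VirtualSwitch:
    def __init__(self):
        self.flows = {}   # (tenant_id, dst_ip) -> output port or tunnel

    def add_flow(self, tenant_id, dst_ip, output):
        self.flows[(tenant_id, dst_ip)] = output

    def forward(self, tenant_id, dst_ip):
        # Packets from different tenants never match each other's flows,
        # even when their destination IP addresses are identical.
        return self.flows.get((tenant_id, dst_ip), "drop")

vswitch = VirtualSwitch()
vswitch.add_flow(tenant_id=1, dst_ip="10.0.0.5", output="tunnel-to-host-A")
vswitch.add_flow(tenant_id=2, dst_ip="10.0.0.5", output="tunnel-to-host-B")
assert vswitch.forward(1, "10.0.0.5") == "tunnel-to-host-A"
assert vswitch.forward(2, "10.0.0.5") == "tunnel-to-host-B"
```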
These two high-profile use cases, multi-tenant virtualization and wide-area traffic engineering, drew significant commercial attention to SDN. Indeed, NSF-funded research led directly to the creation of several successful SDN start-up companies, including Big Switch Networks (open source SDN controllers and management applications, acquired by Arista), Forward Networks (network verification products), Veriflow (network verification products, acquired by VMware), and Barefoot Networks (programmable switches, acquired by Intel), to name a few. SDN also influenced the large networking vendors, with Cisco, Juniper, Arista, HP, and NEC all creating SDN products. Today, AMD, Nvidia, Intel, and Cisco all sell P4-programmable products, and in 2019 about a third of the papers appearing at ACM SIGCOMM were based on P4 or programmable forwarding.
The commercial success of SDN drove further interest among academic researchers. The NSF and other government agencies, especially the Defense Advanced Research Projects Agency (DARPA), sponsored further research on SDN platforms and use cases that continues to this day. The SDN research community broadened significantly, well beyond computer networking, to include researchers in the neighboring disciplines of programming languages, formal verification, distributed systems, algorithms, security and privacy, and more, all helping lay stronger foundations for future networks.
This article summarizes the story of how SDN arose. So many research projects, papers, companies, and products emerged because of SDN that it is impossible to include all of them here. The foresight of NSF in the early 2000s, funding a generation of researchers at just the right time, working closely with the rapidly growing hyperscalers, led quite literally to a transformation, a revolution, in how networks are built today.
SDN Grew First and Fastest in Datacenters
The first large-scale deployments of SDN took place in hyperscale datacenters, beginning around 2010. The story is best told by the hyperscaler companies themselves, and so we asked leaders at Google, Microsoft Azure, and Meta to tell their stories about why and how they adopted SDN. As you will see, they all started from the ideas and principles that came from the NSF-funded research, and each tailored SDN to suit its specific needs and culture.
The Internet Service Providers (ISPs) and telecommunication companies also had a strong interest in SDN. AT&T played a large role in its definition, engaging in research and early deployments in the mid-2000s. We invited Albert Greenberg, who was at AT&T at the time, to tell the story.
Nicira was perhaps the startup that epitomized the SDN movement. It grew out of the NSF-funded 100×100 program and the Clean Slate Program at Stanford, based on the Ph.D. work of Martín Casado. Nicira developed ONIX, the first distributed control plane, used by Google in its infrastructure; OVS, the first OpenFlow-compliant software switch; and NVP (later NSX), the first network virtualization platform. We invited Teemu Koponen, a principal architect at Nicira, to tell the story.
During the early 2010s, the networking industry began to realize that SDN has many big advantages. It lifts complex protocols up and out of the switches into the control plane, where they are implemented in software written in a modern programming language. This made it possible to reason about the correctness of the protocols simply by examining the software controlling the network and the forwarding state maintained by the switches. For the first time, it became possible to formally verify the behavior of a complete network.
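As a rough illustration of what this enables, the sketch below checks a reachability property directly against forwarding tables collected from a (hypothetical) controller's global view. Real verification tools are far more sophisticated, but the principle of analyzing the network's forwarding state as data is the same.

```python
# Hypothetical sketch: verify "traffic for dst_ip injected at src reaches dst_host"
# by walking the installed forwarding state, detecting black holes and loops.
def reachable(tables, src_switch, dst_ip, dst_host, max_hops=16):
    hop, seen = src_switch, set()
    for _ in range(max_hops):
        if hop == dst_host:
            return True
        if hop is None or hop in seen:      # black hole or forwarding loop
            return False
        seen.add(hop)
        hop = tables.get(hop, {}).get(dst_ip)   # next hop chosen by this switch
    return False

# Forwarding state as the controller sees it: switch -> {dst_ip -> next hop}.
tables = {
    "s1": {"10.0.0.7": "s2"},
    "s2": {"10.0.0.7": "hostB"},
    "s3": {"10.0.0.7": "s1"},
}
assert reachable(tables, "s1", "10.0.0.7", "hostB")       # property holds
assert not reachable(tables, "s2", "10.0.0.9", "hostB")   # unknown destination is dropped
```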
Researchers, startups, network equipment vendors, and hyperscalers have all taken advantage of SDN principles to develop new ways to verify network behavior. We invited Professor George Varghese, who has been deeply involved in network verification research, to give us his perspective on network verification.
A main benefit of SDN is that it hands over the keys (of control) from the networking equipment vendors, who kept their systems closed and proprietary and hence tended to evolve slowly, to software programmers, who could define the behavior for themselves, often in open source software. And indeed it happened: Today, most large networks are controlled by software written by those who own and operate networks rather than by networking equipment vendors.
But what about the hardware? Switches, routers, firewalls, and network interface cards are all built from special-purpose ASICs: highly integrated, cost-effective, and super-fast. The problem was that the features and protocols that operated on packets (for example, forwarding, routing, firewalls, and security) were all baked into hardware at the time the chip was designed, two to three years before it was deployed. What if the network owner and operator needed to change and evolve the behavior in their network, for example, to add a new way to measure traffic or a new way to verify behavior? A group of researchers and entrepreneurs set out to make the switches and NICs programmable by the user, to allow more rapid improvement and give the operator greater control. Not only did new programmable devices emerge, but a whole open source movement grew up around the P4 programming language.
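As a toy illustration of the difference programmability makes, the sketch below models a forwarding pipeline whose stages are defined in software by the operator, so adding a new capability, here a per-destination byte counter for traffic measurement, is a code change rather than a new ASIC. It only mimics the spirit of a programmable pipeline in Python; the names are hypothetical, and it is not P4 itself.

```python
# Hypothetical model of an operator-programmable forwarding pipeline.
from collections import defaultdict

class Pipeline:
    def __init__(self):
        self.forwarding = {}                  # dst_ip -> out_port (match-action table)
        self.byte_counts = defaultdict(int)   # operator-added measurement state

    def process(self, packet):
        # Stage 1 (operator-defined): measure traffic per destination.
        self.byte_counts[packet["dst_ip"]] += packet["length"]
        # Stage 2 (operator-defined): forward or drop based on the table.
        return self.forwarding.get(packet["dst_ip"], "drop")

pipe = Pipeline()
pipe.forwarding["10.0.0.7"] = 3
assert pipe.process({"dst_ip": "10.0.0.7", "length": 1500}) == 3
assert pipe.byte_counts["10.0.0.7"] == 1500
```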
We invited Professor Nate Foster, who leads the P4 language ecosystem, to tell the story of how programmable forwarding planes came about.
So far, we have focused on SDN in wireline networks running over electrical and optical cables in datacenters, enterprises, and long-haul WANs. SDN was originally defined with wireline networks in mind.
Yet, for cellular networks, the most widely used networks in the world, the need was even greater: Cellular networks have been held back for decades by closed, proprietary, and complex "standards" designed to allow equipment vendors to maintain a strong grip on the market. SDN provides an opportunity to open up these networks, introducing well-defined control APIs and interfaces and moving control software to common operating systems running on commodity servers.
This story has only just begun, but it started thanks to NSF-funded research in the mid-2000s, then boosted by DARPA-funded programs to support open source software for cellular infrastructure. We invited Guru Parulkar and Oğuz Sunay, both of whom developed open source cellular systems at the Open Networking Foundation and for the DARPA-funded Pronto project, to tell the story.
Conclusion
The investments NSF made in SDN over the past two decades have paid huge dividends. SDN transformed how companies run their datacenter, enterprise, cellular, and backbone networks, and created a pathway for creative new ideas to see widespread deployment. The biggest beneficiaries are the billions of people who have a much more reliable, more secure, lower-cost, and faster Internet for the services they use every day.
NSF invested in the foundations of SDN at a very early stage, back when it seemed unthinkable that network owners, rather than a few incumbent equipment vendors, could decide how networks behave. NSF nurtured the growing interest in SDN over many years, fostering a vibrant research community, critical software building blocks, and key early start-up companies that made SDN technologies available in practice. The Internet, and indeed computing and communication technologies in general, need the kind of bold, ongoing innovation that NSF makes possible.
© 2025 Copyright held by the owner/author(s).