My last post started a conversation about how we might approach a 7th Edition of Computer Networks. As was suggested during the Q&A session at Bruce’s SIGCOMM Keynote, it feels like it might be the right time to substantially rethink how we teach networking. My initial take was to reaffirm a principle underlying the last six editions; specifically:
Because networks are both complex and constantly changing, what you need to teach students is how to reason about system design, as opposed to being overly fixated on any particular set of layers or technology choices.
I don’t think many people would argue with that position, but once you start being more specific, the right approach becomes a bit murkier. I know firsthand because I’ve spent the weeks since writing that post trying to spell out a more concrete outline. One problem you trip over is how to balance abstract concepts with concrete artifacts (the latter including open standards, open source software, and generic descriptions of commercial products).
While I’m sympathetic to a concept-centric approach—after all, we take considerable pride in refusing to apply strict layering in our books—including a carefully selected set of artifacts is critically important to students learning the material. This is in part a reformulation of the age-old discussion about how people learn best: from general to specific, or from specific to general. But there’s something else going on here, which I will try to articulate in this post, as a step to getting the outline right.
For starters, the details that an artifact implies are important, even if those details might change. It’s not necessary to understand every artifact you encounter in its full glory, but seeing the details of a few selected examples helps you digest the next one you come across. It’s also essential to understand the general concepts in depth, rather than just superficially.
Beyond that, we need to acknowledge that some artifacts are more important than others. They represent years of experience trying different alternatives, with a consensus forming around a particular way of doing things. We talk about the process of making design decisions at every opportunity, so it makes sense to highlight these “best practice” examples when we encounter them. And certain artifacts play an outsized role in shaping how the network evolves: other problems are solved under the assumption that the artifact in question defines a fixed point.
Central Artifacts
IP has played that outsized role throughout much of the Internet’s history. It didn’t pretend to be the only viable networking technology, but instead positioned itself as a logical network that can be overlaid on top of any technology—those known today, or yet to be invented. That approach worked so well that for all practical purposes, “IP Internet” is now synonymous with “global internet”, with IP providing a universal, best-effort packet delivery service that makes it possible to communicate with every connected device in the world. The consensus around that architectural idea should play a significant role in how you organize a networking textbook (or course): IP defines the “boundary” between what runs inside the network and what runs at the edge of the network, and has remained the stable “narrow waist” of the Internet hourglass for decades.
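To make the “narrow waist” concrete, here is a minimal Python sketch (not a full IP implementation) that packs and unpacks the fixed 20-byte IPv4 header: the small set of fields every underlying technology and every edge application must agree on. The addresses and payload length below are illustrative, and the checksum is left at zero for brevity.

```python
import struct

def build_ipv4_header(src, dst, payload_len, ttl=64, proto=6):
    """Return a 20-byte IPv4 header (no options; checksum left at 0)."""
    version_ihl = (4 << 4) | 5          # version 4, header length = 5 words
    total_len = 20 + payload_len
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, 0, total_len,
                       0, 0,                # identification, flags/frag offset
                       ttl, proto, 0,       # TTL, protocol (6 = TCP), checksum
                       bytes(src), bytes(dst))

def parse_ipv4_header(hdr):
    """Unpack the fields a router (or a student) cares about most."""
    fields = struct.unpack("!BBHHHBBH4s4s", hdr)
    return {"version": fields[0] >> 4,
            "ihl": fields[0] & 0xF,
            "total_len": fields[2],
            "ttl": fields[5],
            "proto": fields[6]}

hdr = build_ipv4_header([10, 0, 0, 1], [10, 0, 0, 2], payload_len=100)
info = parse_ipv4_header(hdr)
```

The point of the exercise is how little is in there: best-effort delivery means the header carries addressing and a few bookkeeping fields, and everything else lives above or below the waist.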
In our view, two other artifacts play a similar role today: (1) Ethernet switches, and (2) HTTP. My earlier post talked about Ethernet switches as the departure point for discussing “inside the network” topics. Unlike the mid-1990s, when everyone was pitching a novel link-layer protocol and we were still trying to figure out how to build fast switches, the Ethernet link layer and Ethernet packet switches are now the default for wired networks. This isn’t to say there are no other technologies (SONET is common in large ISPs), but building a packet-switched network is a tractable problem when you start with a well-seasoned building block like an Ethernet switch. As an aside, Bruce and I have gone round-and-round on exactly what to call this particular artifact—L2/L3 switches, IP switches, and routers are other options—but that just points to the wide range of configuration options that are available for different deployment scenarios (e.g., enterprises, datacenters, service provider networks). This configurability is part of what makes an Ethernet switch such an effective building block, which is itself worth highlighting.
Similarly, how we build network applications—and the software stacks that enable them—owes a great deal to HTTP (and the World Wide Web) as the cornerstone. These applications are implemented in software running on general-purpose computers, but there are different ways to modularize this functionality. Over the history of the Internet, this software has been refactored multiple times. This is why I believe there’s an argument for making HTTP (and not, say, TCP) the anchor artifact for the edge.
One reason for this is obvious. Of all the applications that run on the Internet, the World Wide Web—and the HTTP standard that defines it—is the dominant one. A second reason is less obvious but more profound: the web has changed how we build applications; it is more a framework than an application. Two other example applications—Email and Adaptive Streaming—help illustrate this point. Email came before the web, and so illustrates the “old” (pre-HTTP) way to build applications. Adaptive Streaming (think Netflix) came after HTTP, and so illustrates the “new” (post-HTTP) way. This sort of framing also makes it natural to talk about RESTful interfaces and the role the cloud plays in today’s application ecosystem. Both are important if you are going to do justice to the topic of building applications and to appreciate what the rest of the software stack needs to do.
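One way to see HTTP acting as a framework rather than an application is that the same small message format and verb set serve any application built on top of it. The sketch below serializes bare-bones HTTP/1.1 request messages; the host names and resource paths are hypothetical, chosen to suggest a streaming service and a mail service sharing one substrate.

```python
def build_request(method, path, host, body=None):
    """Serialize a minimal HTTP/1.1 request message as a string."""
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    if body is not None:
        lines.append(f"Content-Length: {len(body.encode())}")
    return "\r\n".join(lines) + "\r\n\r\n" + (body or "")

# RESTful interactions reuse the same verbs, whatever the application:
get = build_request("GET", "/videos/42/manifest", "stream.example.com")
put = build_request("PUT", "/mailboxes/alice", "mail.example.com",
                    body='{"quota": 100}')
```

A pre-HTTP application like classic Email needed its own wire protocol; a post-HTTP application mostly needs to decide what its resources are and which verbs apply to them.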
But what about TCP? It’s certainly an important artifact that deserves attention; it is, after all, a useful module that helps HTTP do its job, and it is usually given as the canonical example of a transport protocol. But increasingly, TCP is not the only option. My view is that QUIC (and other request/response protocols) deserves equal billing. If you’ve followed our newsletter over the last few years, you will recognize this as a familiar theme, but beyond grinding that axe (again), the bigger point is the importance of helping students build intuition about software ecosystems, and how pliable and ever-changing they are. The world has definitely shifted from focusing on “network apps” to thinking in terms of “cloud apps”, and I would argue it’s important for a networking course to embrace that change.
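Part of that intuition is understanding what TCP actually provides: an ordered byte stream with no message boundaries, which is one reason HTTP (and QUIC, with its independent streams) must impose framing on top. A small sketch, using a local stream socket pair rather than a real network connection:

```python
import socket

# Two application-level "messages" sent separately over a TCP-style stream
# socket may be delivered concatenated; the receiver sees only bytes.
a, b = socket.socketpair()
a.sendall(b"request-1")
a.sendall(b"request-2")

received = b""
while len(received) < 18:      # loop: one recv() may return a partial read
    received += b.recv(1024)
a.close()
b.close()
```

On a real TCP connection the same effect shows up as partial and coalesced reads, which is exactly what the length fields and delimiters in HTTP messages exist to handle.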
There is no need for us to pile on regarding the outages at AWS and Azure in the last week, but we’ll refer you to a fun piece from Corey Quinn that cuts through a lot of noise. We will also continue to bang the drum for decentralization and verification.
We’ve written previously on how we enjoy John McPhee’s writing about the writing process, and recently we came across a good article on “The McPhee Method” that we might try to apply ourselves as we work on the new book.
We’re asymptoting towards publication of our Network Security book, so please submit your bug reports, typos, etc. before we commit to printing it.