
You’d think that by now, networks were well enough understood that people would stop making assumptions that we have known, almost since the dawn of networking, to be untrue. Yet as users, developers, and network administrators, we still seem curiously unable to let go of long-held beliefs.
Perhaps the best-known collection of mistaken ideas about networks is the eight fallacies of distributed computing.
The eight fallacies
- The network is reliable
- Latency is zero
- Bandwidth is infinite
- The network is secure
- Topology doesn’t change
- There is one administrator
- Transport cost is zero
- The network is homogeneous
Where did this list come from?
The list began with four original fallacies (the first four in the list), collected by Bill Joy, a co-founder of Sun Microsystems, and Tom Lyon, one of its earliest employees.
Sun integrated high-speed graphics, the UNIX operating system, and a working Internet protocol stack, which led to the explosion in desktop computing, the company’s meteoric rise, and its ultimate acquisition by Oracle. When you use a Berkeley Software Distribution (BSD) variant, a Linux distribution, or even Android, you’re using technology that traces a lineage to Sun Microsystems. Think of the ZFS file system, the Network File System (NFS) protocol for network file storage, and Java, to name a few.
The list was later expanded by L. Peter Deutsch, who added a further three fallacies while at Sun. The final fallacy was coined by James Gosling, who, fittingly, also worked at Sun, bringing us to the eight fallacies we now know and love.
Over time, these ideas have settled and inspired other lists of fallacies — for example, the fallacies surrounding dates and times, or the falsehoods people believe about names. I would be surprised if there weren’t more. We encounter these date and name errors constantly, whether filling in forms or interacting with assets on the web or in apps.
The underlying eight fallacies of distributed computing are buried ‘constants’ in our use of the network. As network operators, we should keep them in mind, whether in protocol and software design or in how they affect users in daily life. By doing so, we can better address the behaviours that arise from these fallacies as we encounter them online.
The list is aimed at people writing network software: Applications that call into the network, services that are called from the network, and network protocols. It provides practical guidance, even if presented abstractly, on how to think about sending data through a network and the questions you should ask (the sketch after this list shows how some of them surface in code). Questions such as:
- Was it actually sent?
- Was it received?
- How can you tell?
- Can you send it again, or is the data gone?
- Does it even need to be sent again?
- Do you have time to handle this data? How will it affect the rest of your program?
- Does the network behave in ways you really understand, despite its complexity?
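To make this concrete, here is a minimal sketch of those questions expressed as code. It assumes a cooperating receiver at an illustrative address that replies `b"ACK"` to each datagram; it is not a real protocol, just a stop-and-wait shape in which a send only counts as received when an acknowledgement comes back, and data is retransmitted a bounded number of times before we give up.

```python
import socket

# Hedged stop-and-wait sketch over UDP. The address, retry count, and
# timeout are illustrative assumptions, not a real service or standard.
ADDR = ("198.51.100.10", 5000)   # documentation-range address
RETRIES = 3
TIMEOUT_S = 1.0

def send_reliably(payload: bytes) -> bool:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_S)
    try:
        for attempt in range(1, RETRIES + 1):
            sock.sendto(payload, ADDR)        # was it sent? locally, yes
            try:
                reply, _ = sock.recvfrom(64)  # was it received? only an ACK tells us
                if reply == b"ACK":
                    return True
            except socket.timeout:
                # Can we send it again? Yes, because we kept the data.
                print(f"attempt {attempt}: no ACK, resending")
        return False                          # give up; the caller decides what now
    finally:
        sock.close()
```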
Looking at the fallacies case by case
What follows is my personal understanding of the meaning of each of the eight fallacies of distributed computing, as they relate to how the network behaves towards me and my services. Others have different views, and I may have gotten some things wrong.
Figure 1 — The eight fallacies of distributed computing.
1. The network is reliable
Measured as a whole, the Internet is probably broken somewhere, for some users, at all times. That we individually experience it as reliably available is a triumph of hope over experience. Claims of ‘five nines’ reliability often lead us to act as if ‘it won’t happen to me’.
More specifically, people tend to assume that once a packet is sent, it will be received. Most of the time, it is. But we still have to design protocols to handle the cases when it isn’t.
Consider the first of the three classic measures of a network’s behaviour: loss, delay, and jitter. ‘Loss’ is simply another way of saying ‘unreliable’. If your protocol doesn’t account for the fact that data can be lost, it will run into problems. Much of the Transmission Control Protocol (TCP) and QUIC is specifically designed to recognise packet loss and handle it.
Internet Protocol (IP) — whether version four or six — does not guarantee delivery. That responsibility falls to higher layers if they are capable.
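A small illustration of that point: a successful UDP send tells you only that the datagram left your machine. The address below is from a range reserved for documentation, so nothing will ever answer, yet `sendto()` reports success.

```python
import socket

# IP makes no delivery promise: sendto() returning normally means the
# datagram was handed to the network, not that anyone received it.
# 192.0.2.1 is in TEST-NET-1, a range reserved for documentation.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(b"anyone there?", ("192.0.2.1", 9))
print(f"sendto() returned {sent} bytes; delivery status: unknown")
sock.close()
```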
2. Latency is zero
Latency encompasses the other two network issues mentioned above: Delay and jitter. Delay is sometimes simply a function of distance, given the speed of light — but even this can be misunderstood, since the speed of light in fibre is slower than in a vacuum.
Additional delays occur when converting a signal from copper to fibre and sending it along a fibre optic link. Because of this, sending data via microwave, radio, or even laser between satellites can sometimes be faster than sending it through fibre.
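A rough back-of-the-envelope helps here. Light in glass travels at roughly two-thirds of its vacuum speed, so even a perfectly straight fibre path puts a floor under delay; the distance below is an assumption for illustration.

```python
# One-way propagation delay over fibre, ignoring equipment and queuing.
C_VACUUM_KM_PER_S = 299_792                      # speed of light in a vacuum
C_FIBRE_KM_PER_S = C_VACUUM_KM_PER_S * 2 / 3     # roughly two-thirds c in glass

def fibre_delay_ms(distance_km: float) -> float:
    return distance_km / C_FIBRE_KM_PER_S * 1000

# Sydney to Los Angeles is about 12,000 km as a great circle; real cable
# routes are longer, so treat this as a lower bound.
print(f"{fibre_delay_ms(12_000):.0f} ms one way, "
      f"{2 * fibre_delay_ms(12_000):.0f} ms round trip")
```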
Jitter, the variability of delay, is a major challenge for gaming and streaming protocols. Latency and loss are the reasons why services like Netflix both buffer data and use error-correcting codes. These techniques compensate for fluctuations in delay, providing a smooth and reliable playback experience.
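As a toy illustration of why buffering works, the sketch below releases packets on a fixed schedule some time after the first arrival; jitter is invisible to the ‘player’ as long as it stays below the buffer depth. All the numbers are made up.

```python
BUFFER_MS = 150

# (sequence, sent_ms, arrived_ms): synthetic packets with jittery arrivals.
arrivals = [(1, 0, 80), (2, 20, 95), (3, 40, 290), (4, 60, 150)]

base_delay = arrivals[0][2] - arrivals[0][1]    # network delay of first packet
for seq, sent, arrived in arrivals:
    plays_at = sent + base_delay + BUFFER_MS    # fixed playout schedule
    status = "on time" if arrived <= plays_at else "LATE: glitch"
    print(f"packet {seq}: arrived {arrived} ms, plays at {plays_at} ms -> {status}")
```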
3. Bandwidth is infinite
It’s tempting to think that in the modern Internet, we can sometimes treat bandwidth as effectively infinite for most practical purposes. The reality, however, is that many links in the system have more people sending packets than there are spaces available to carry them.
Dealing with the consequences of ‘less-than-infinite’ bandwidth introduces queuing, which in turn creates delay. With delay comes jitter, and under extreme conditions, packet loss. The limitations of finite bandwidth directly affect every network flow subject to these constraints.
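A quick sketch of how finite bandwidth turns into delay: every packet waiting ahead of yours adds its serialization time to your wait. The link speed and packet size below are assumptions.

```python
LINK_MBPS = 100        # assumed bottleneck link
PKT_BYTES = 1500       # a typical full-size Ethernet payload

def queueing_delay_ms(packets_ahead: int) -> float:
    bits_ahead = packets_ahead * PKT_BYTES * 8
    return bits_ahead / (LINK_MBPS * 1_000_000) * 1000

for depth in (10, 100, 1000):
    print(f"{depth:>5} packets queued ahead -> "
          f"{queueing_delay_ms(depth):6.1f} ms added delay")
```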
In today’s network, where data is often ‘close by’ in a Content Delivery Network (CDN), we rarely notice this. In bandwidth terms, the limits of the network are often far removed from us — with one exception: our local home link.
We may use gigabit-capable devices, but our local link speed is often only a few hundred megabits. Can we exceed our home router’s capacity? Almost trivially. Can we exceed our home Wi-Fi network? Certainly. A modern mobile phone can sustain 400 Mbit/sec or more, but a Wi-Fi network purchased five years ago might cap out at 100 Mbit/sec.
Investing in network bandwidth to match expectations is like building roads to handle peak-hour traffic: you can make it appear that congestion doesn’t exist, but the cost may be higher than desired. When upgrading a home router to match the speed of new edge fibre delivery, we face the same question: how much bandwidth do we really need?
4. The network is secure
In the days of monopoly telecommunications, when a single provider ran networks across an entire economy, there was one major risk: that the provider might fail to ensure the privacy of our data. Typically, they controlled all the infrastructure, and intrusions were rare, if not unheard of. Encoding overhead was minimal, and law enforcement could access data through a single channel.
Today, networks run across multiple providers and through intermediaries with whom we have no relationship. Continuing to believe that nobody is ‘seeing’ our packets is naïve. What we can do is ensure the packets themselves contain only secret, protected data. Designing protocols to provide this protection — now and into the future — is both costly and time-consuming. With emerging quantum computing threats to public–private key cryptography, even this protection may not be as guaranteed as we would like.
Even protected packets, however, reveal information. Traffic analysis can expose patterns, and advanced machine learning can distinguish streaming, file storage, and interactive traffic from packet timing and size alone. Never treat the network as inherently secure, and never rely on it below your HTTPS or Transport Layer Security (TLS) connections to hide you from others.
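In practice, that means pushing protection up to the endpoints. Below is a minimal sketch using Python’s standard library, assuming you need confidentiality and server authentication; example.com is just a placeholder.

```python
import socket
import ssl

# Trust the endpoints, not the path: verify the server's certificate and
# encrypt the payload, so intermediaries see only timing and packet sizes.
context = ssl.create_default_context()   # enables cert and hostname checks

with socket.create_connection(("example.com", 443), timeout=5) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print("negotiated", tls.version(), "using", tls.cipher()[0])
```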
5. Topology doesn’t change
Topology changes can come from many sources. They occur when your mobile phone connects to a different tower, or when the phone of the person you are communicating with does. They also occur when your provider optimises traffic for efficiency or profit by routing packets differently than you expect. We experience these topology changes as loss, delay, and jitter.
Transport protocols like QUIC and TCP shield us from the effects of changes ‘in the middle’ of the network and the resulting impact on packet paths from source to destination. However, the processes that manage these changes — such as Virtual Router Redundancy Protocol (VRRP), Common Address Redundancy Protocol (CARP), Border Gateway Protocol (BGP), or Multipath TCP — are not free. These overheads mean that assumptions such as no loss, no delay, or no duplication of data cannot be relied upon.
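For an application, much of this surfaces as connections that suddenly reset or stall. A common, if unglamorous, response is sketched below: reconnect with capped exponential backoff rather than assuming the old path still exists. The host and port are placeholders.

```python
import socket
import time

def connect_with_backoff(host: str, port: int, max_delay: float = 30.0) -> socket.socket:
    """Keep trying to connect, backing off exponentially between failures."""
    delay = 1.0
    while True:
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError as exc:
            print(f"connect to {host}:{port} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)
            delay = min(delay * 2, max_delay)   # cap the backoff
```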
6. There is one administrator
Sometimes, it feels as if there isn’t even a single administrator. Other times, it seems like having just one would be better than the many we encounter. Even within a single Network Operations Centre (NOC), multiple hands, models, and processes can be at work. Modern networks are so complex that the administrator you speak to is very probably not the one actually making changes in the system.
7. Transport cost is zero
Cost is multidimensional. Take the Short Message Service (SMS) protocol as an example. If you multiplied the cost of sending a few packets via SMS by the number of packets required to send a movie, the total would run into thousands of dollars. But does that reflect the actual cost?
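The arithmetic is worth doing once. The figures below are assumptions rather than real tariffs, but they make the point that per-message pricing has nothing to do with the underlying cost of moving bits.

```python
MOVIE_BYTES = 1.5e9            # assume a modest 1.5 GB film
SMS_PAYLOAD_BYTES = 140        # one SMS carries 140 octets (160 7-bit chars)
PRICE_PER_SMS = 0.05           # assumed $0.05 per message

messages = MOVIE_BYTES / SMS_PAYLOAD_BYTES
print(f"{messages:,.0f} messages, roughly ${messages * PRICE_PER_SMS:,.0f}")
# ~10.7 million messages, far past 'thousands of dollars'
```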
Where does cost come from in a network? Is it the electricity needed to transmit packets? The hardware? The support systems, business logic, accounting, and risk management? All of these contribute real-world costs to make data transport possible. Just because cost is not exposed directly in a protocol does not mean it doesn’t exist — it means that the cost is being absorbed elsewhere in society.
Some costs are never recovered directly and simply become part of the aggregate ‘cost of doing business’. Others, such as the asymmetric charges for sending and retrieving data from Amazon S3 long-term storage, are deliberately structured to encourage fetching data only when necessary.
8. The network is homogeneous
One of the great fallacies of BGP routing is the idea that ‘cost’ is the same as AS path length. If you ignore all other factors, you might prefer the route with the fewest AS hops. But is that really wise?
Consider connecting European nodes to Asian nodes over a slow, expensive, low-capacity link, while exposing all your European peers to your Asian peers. That thin, costly link will quickly become congested. The inconsistencies in delay, bandwidth, and load capacity become apparent almost immediately.
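A toy comparison makes the trap visible: ranking routes purely by AS-path length ignores what the links along the way can actually carry. The route names and figures below are invented.

```python
# Two candidate routes to the same destination. The shortest AS path wins
# the naive comparison even though its bottleneck link is tiny.
routes = {
    "short-but-thin": {"as_hops": 2, "bottleneck_mbps": 155},
    "long-but-fat":   {"as_hops": 4, "bottleneck_mbps": 10_000},
}

by_hops = min(routes, key=lambda name: routes[name]["as_hops"])
by_capacity = max(routes, key=lambda name: routes[name]["bottleneck_mbps"])
print(f"fewest AS hops chooses:   {by_hops}")
print(f"highest capacity chooses: {by_capacity}")
```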
Even in simple home networks, differences between devices on Wi-Fi and devices on Ethernet can be stark. A TV streaming over Wi-Fi competes for airtime with other devices on the same channel, a problem that doesn’t exist when using an Ethernet switch.
The costs of delay and retransmission are largely hidden by oversupply, buffering, and encoding. Yet, if you examine network behaviour closely, the differences are clear. IP masks many of the nuances between local and remote, slow and fast, or reliable and unreliable connections. Higher-layer protocols, however, must handle these realities — balancing time, buffer usage, and computational costs — to deliver the best service possible under the circumstances.
Fallacies we believe about fallacy lists
The network wouldn’t be what we know and love if the list of network fallacies didn’t contain a fallacy itself. Several online discussions of these fallacies mistakenly refer to Tom Lyon as Dave Lyon. It seems that, over time, even facts can’t be counted on to remain fixed.
Maybe one day this list of eight fallacies will grow to nine or ten. I think it’s unlikely to drop down to seven.