Series: Computer Networking from Absolute Basics — Part 3
In the earlier parts of this series, we laid the foundation of computer networking by exploring what the Internet is, how end systems communicate, and how access networks such as DSL, cable, FTTH, and mobile networks connect our devices to the larger network. While access networks explain how we get connected, an important question still remains once data enters the network:
How is that data actually carried from one device to another?
Networks need a systematic way to move information across links, routers, and switches. Over time, two fundamental techniques have been developed for this purpose:
- Circuit Switching
- Packet Switching
Although both aim to transfer data from a source to a destination, they operate in very different ways. Circuit switching was the backbone of traditional telephone networks, while packet switching is the core technology behind the modern Internet.
In this part of the series, we’ll start from absolute basics and explore how both techniques work, why packet switching became dominant, and what trade-offs exist between the two approaches.
Packet Switching

When we use the internet, whether to send a message, stream a video, or open a website, data does not travel as one large continuous block. Instead, it follows a method known as packet switching, which forms the backbone of modern computer networks.
Packet switching is connectionless, meaning no dedicated path is established before transmission begins. Packets are sent independently without reserving network resources, and packets belonging to the same message may take different routes depending on current traffic conditions, often arriving out of order.
To handle this, packets carry sequence numbers, which allow the destination system to reassemble the packets in the correct order before delivering the data to the application. This flexibility makes packet switching efficient and resilient, especially in large and dynamic networks like the Internet.
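This reassembly idea can be sketched in a few lines of Python. This is a toy illustration, not a real protocol implementation: packets are modeled as (sequence number, payload) pairs that arrived out of order.

```python
# Toy sketch: reassembling packets that arrived out of order,
# using the sequence number each packet carries.

packets = [
    (2, b" over the "),        # (sequence number, payload)
    (0, b"The quick"),
    (3, b"lazy dog"),
    (1, b" brown fox jumps"),
]

# Sort by sequence number, then concatenate the payloads.
message = b"".join(payload for _, payload in sorted(packets))
print(message.decode())  # The quick brown fox jumps over the lazy dog
```

However the packets were routed, the destination can always rebuild the original message as long as every packet eventually arrives.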
Let’s break this concept down step by step.
Messages and End Systems
In a network application, end systems such as laptops, smartphones, and servers exchange messages designed by application developers. These messages may contain user data or control information. However, sending an entire message as a single large unit would be inefficient and unreliable, which is why packet switching is used.
Breaking Messages into Packets
Instead of sending a complete message at once, the source system divides it into smaller chunks called packets. Each packet travels independently through the network toward the destination.
Between the source and destination, packets move across:
- Communication links, and
- Packet switches, primarily routers and link-layer switches.
At the destination, packets are reassembled to reconstruct the original message.
Transmission Rate and Packet Delay
Every communication link has a transmission rate, usually denoted as R bits per second. If a packet contains L bits, then the time required to push that packet onto the link is:
Transmission Delay = L/R
This delay represents the time needed to push all the bits of a packet onto the link, not the time taken to travel through the cable, which is known as propagation delay and depends on distance and signal speed.
As I was reading about this, one question stood out.
When you hit “send” on a message, do all the bits of your packet instantly appear at the router? If not, how long does it really take for the first and last bits to arrive, and what roles do transmission delay and propagation delay play in this journey?
Transmission delay is the time needed to push all the bits of a packet onto the link, calculated as:
Transmission delay = L/R
Propagation delay is the time it takes for the bits to physically travel across the link, calculated as:
Propagation delay = d/s, where d = distance and s = signal speed
Timeline for a packet from source to router:
- 0 sec: The first bit leaves the source immediately.
- After d/s seconds: The first bit reaches the router (propagation delay).
- After L/R seconds: The last bit leaves the source (transmission delay).
- Total time for last bit to arrive:
Total time = Transmission delay + Propagation delay = L/R + d/s
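Plugging in some assumed example numbers makes the two delays concrete. The values below (a 1,000-byte packet, a 1 Mbps link, 100 km of fibre) are illustrative, not from the article:

```python
# Example values (assumed for illustration):
L = 1_000 * 8     # packet size in bits (a 1,000-byte packet)
R = 1_000_000     # transmission rate: 1 Mbps
d = 100_000       # link length: 100 km, in metres
s = 2e8           # signal speed in fibre, roughly 2/3 the speed of light

transmission_delay = L / R   # time to push all bits onto the link
propagation_delay = d / s    # time for a bit to cross the link
total = transmission_delay + propagation_delay

print(f"Transmission delay: {transmission_delay * 1000:.3f} ms")  # 8.000 ms
print(f"Propagation delay:  {propagation_delay * 1000:.3f} ms")   # 0.500 ms
print(f"Last bit arrives after {total * 1000:.3f} ms")
```

Notice that for a long packet on a slow link, transmission delay dominates; for a short packet on a long link, propagation delay does.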
In the rest of this article, however, we will ignore propagation delay for simplicity and focus on transmission delay.
Store-and-Forward Transmission
Most packet switches use a technique called store-and-forward transmission. This means a router must:
- Receive the entire packet
- Store it in memory
- Then forward it onto the next link
The router cannot transmit even the first bit of a packet until it has received the last bit.
Sending Multiple Packets
Now suppose the source wants to send three packets, each of size L bits, to the destination.
Because of pipelining, the source and router can work at the same time:
- While the router is forwarding the first packet, the source can start sending the second packet.
- This overlapping of sending and forwarding makes the process more efficient.
Here’s what happens over time (ignoring propagation delay for simplicity):
1. Time 0 → L/R:
- Source transmits packet 1 onto the first link.
- Router receives packet 1; only once the entire packet has arrived can it begin forwarding.
2. Time L/R → 2L/R:
- Router forwards packet 1 to the destination.
- Source starts transmitting packet 2 onto the first link.
3. Time 2L/R → 3L/R:
- Destination receives packet 1.
- Router receives packet 2 and begins forwarding it.
- Source starts transmitting packet 3.
4. Time 3L/R → 4L/R:
- Destination receives packet 2.
- Router forwards packet 3.
5. Time 4L/R:
- Destination receives packet 3.
Generalising the Delay
For a path consisting of N links (and therefore N − 1 routers between source and destination), the end-to-end delay for one packet is:
d = N(L/R)
More generally, sending P packets back-to-back over N links takes (N + P − 1)(L/R), which matches the timeline above: 3 packets over 2 links took 4L/R.
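A short sketch can verify these formulas. The function below computes when each packet's last bit reaches the destination, under the same simplifying assumptions as this article (store-and-forward, no propagation or queuing delay):

```python
def arrival_times(P, N, L, R):
    """Time at which the last bit of each of P packets (each L bits)
    reaches the destination after crossing N store-and-forward links
    of rate R, ignoring propagation delay."""
    # Packet p (1-indexed) starts transmission at (p-1)*L/R, and each
    # of the N links adds one transmission delay of L/R.
    return [(p - 1 + N) * L / R for p in range(1, P + 1)]

L, R = 1000, 1000  # 1000-bit packets on a 1000 bps link, so L/R = 1 second
print(arrival_times(3, 2, L, R))  # [2.0, 3.0, 4.0] -- i.e. 2L/R, 3L/R, 4L/R
print(arrival_times(1, 4, L, R))  # [4.0]           -- one packet: N * L/R
```

The first call reproduces the three-packet timeline above; the second shows the single-packet case d = N(L/R).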
Queuing Delays and Packet Loss
So far, we’ve seen how packets are transmitted and forwarded through routers. But in real networks, things aren’t always perfectly smooth. Each router has multiple links, and for every link, it keeps a small memory space called an output buffer (or output queue). This buffer temporarily holds packets that are waiting to be sent on that link.
Here’s the catch:
- If a packet arrives at a router but the outgoing link is already busy transmitting another packet, the new packet must wait in the buffer.
- The time a packet spends waiting is called queuing delay.
- Queuing delay is variable, depending on how busy the network is at that moment.
And if the buffer becomes completely full? Some packets might have to be dropped, resulting in packet loss.
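A toy model makes the buffer-full case concrete. This is a deliberate simplification (a burst of packets arrives while the outgoing link is busy and nothing drains), just to show where loss comes from; the buffer size is an assumed value:

```python
from collections import deque

BUFFER_SIZE = 3            # assumed capacity of the output buffer, in packets
buffer = deque()           # packets waiting for the outgoing link
dropped = []               # packets lost because the buffer was full

# A burst of packets arrives while the outgoing link is busy.
for packet in ["p1", "p2", "p3", "p4", "p5", "p6"]:
    if len(buffer) < BUFFER_SIZE:
        buffer.append(packet)      # packet waits its turn (queuing delay)
    else:
        dropped.append(packet)     # buffer full: packet loss

print("queued: ", list(buffer))    # queued:  ['p1', 'p2', 'p3']
print("dropped:", dropped)         # dropped: ['p4', 'p5', 'p6']
```

Real routers drain the buffer as the link frees up, so loss only happens when packets arrive faster than the link can send them for long enough to fill the buffer.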
Forwarding Tables and Routing Protocols
Once a packet reaches a router, the router needs to figure out where to send it next. This is where forwarding tables and routing protocols come in.
Here’s how it works in the Internet:
- Every device has a unique IP address.
- When a source wants to send a packet, it includes the destination IP address in the packet.
- When the packet arrives at a router:
- The router checks the destination address
- Looks up its forwarding table to find the appropriate outgoing link
- Forwards the packet onto that link toward the next router
Forwarding tables are built and updated using routing protocols, which help routers learn the network layout and select the most efficient paths for packets.
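A forwarding table can be sketched as a simple mapping from destination-address prefixes to outgoing links. Real routers match binary IP prefixes in specialized hardware; this string-prefix version (with made-up link names and addresses from the documentation-reserved ranges) only illustrates the longest-prefix-match idea:

```python
forwarding_table = {           # prefix -> outgoing link (assumed values)
    "203.0.113.":  "link-1",
    "198.51.":     "link-2",
    "198.51.100.": "link-3",   # more specific prefix for one subnet
}

def forward(dest_ip):
    """Choose the outgoing link with the longest matching prefix."""
    matches = [p for p in forwarding_table if dest_ip.startswith(p)]
    if not matches:
        return "default-link"
    return forwarding_table[max(matches, key=len)]

print(forward("198.51.100.7"))  # link-3 (the longest matching prefix wins)
print(forward("198.51.7.7"))    # link-2
print(forward("8.8.8.8"))       # default-link (no prefix matched)
```

Routing protocols are what populate and update this table as the network topology changes.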
And that wraps up our deep dive into packet switching, from how packets are transmitted, pipelined, and queued, to how routers use forwarding tables and routing protocols to deliver them efficiently.
With this understanding, let’s turn our attention to the other major technique: circuit switching, the approach that powered traditional telephone networks and works very differently from packet switching.
Circuit Switching

Circuit switching is a communication technique in which a dedicated, end-to-end communication path is established before any data is transmitted. Once this path is set up, all data flows through the same fixed route for the entire duration of the communication.
This approach was famously used in traditional telephone networks, where a physical or logical circuit was reserved for a call from start to finish.
Before any information is sent:
- The network first sets up a connection
- Resources along the path are reserved
- Only after this setup is complete does data transmission begin
Once the circuit is established:
- Data arrives in order
- Bandwidth is guaranteed
- No other communication can use that circuit until it is released
Now that we understand the basic idea behind circuit switching, let’s look at how it actually works in more detail.
Resource Reservation in Circuit Switching
In a circuit-switched network, the resources required for communication such as link bandwidth and switching capacity are reserved in advance for the entire duration of the communication session. Once this reservation is made, those resources are exclusively used by the two communicating end systems until the session ends.
But what exactly is a circuit?
In networking terms, a circuit is a dedicated logical path established between the sender and the receiver. This path may pass through multiple switches and links, but from the moment it is set up, all data follows the same fixed route, using pre-allocated resources at each step.
Guaranteed Transmission Rate
When a network establishes a circuit, it doesn’t just choose a path; it also reserves a constant transmission rate along that path. In other words, a fixed portion of each link’s total capacity is set aside exclusively for the connection.
Because this transmission rate is guaranteed:
- The sender can transmit data at a steady, predictable speed
- Data arrives in order
- Delays remain consistent and bounded
This is especially useful for applications like traditional voice calls, where a continuous and reliable flow of data is more important than maximizing overall network efficiency.
Multiplexing in Circuit-Switched Networks
By now, we know one key thing about circuit switching: once a connection is set up, the network reserves resources for it until the communication ends.
But here’s the obvious question:
If one physical link exists, how can multiple calls or connections use it at the same time?
The answer is multiplexing.
Multiplexing is simply the technique that allows many circuits to share one physical link, without interfering with each other.
In circuit-switched networks, this is done in two main ways:
- by dividing frequency (FDM)
- or by dividing time (TDM)
Frequency-Division Multiplexing (FDM)
In FDM, the link is shared by splitting its frequency range into smaller pieces.
Think of it like this:
- The link has a large frequency spectrum
- Each connection is given its own frequency band
- That band stays exclusively reserved for that connection until it ends
So even if the user is silent, their frequency band:
- is still reserved
- cannot be used by anyone else
A very intuitive example is FM radio:
- The radio spectrum is shared
- Each station broadcasts on a specific frequency
- Stations don’t take turns, they transmit at the same time, but on different frequencies
That’s exactly how FDM works in circuit switching.
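The band-allocation idea can be sketched numerically. The figures below are assumed for illustration (a 4 MHz link carved into 4 kHz voice-grade bands):

```python
LINK_SPECTRUM_HZ = 4_000_000   # total bandwidth of the link (assumed)
BAND_WIDTH_HZ = 4_000          # bandwidth reserved per connection (assumed)

def allocate_band(connection_index):
    """Return the (low, high) frequency range reserved for one connection."""
    low = connection_index * BAND_WIDTH_HZ
    return (low, low + BAND_WIDTH_HZ)

max_connections = LINK_SPECTRUM_HZ // BAND_WIDTH_HZ
print(max_connections)     # 1000 circuits can share the link at once
print(allocate_band(0))    # (0, 4000)
print(allocate_band(1))    # (4000, 8000) -- bands never overlap
```

Each circuit owns its band for the whole call, whether or not it is sending anything, which is exactly the idle waste described above.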
Time-Division Multiplexing (TDM)
In TDM, instead of splitting frequency, the network splits time.
Here’s the intuition:
- Time is divided into repeating cycles called frames
- Each frame is divided into small time slots
- When a circuit is created, one specific time slot in every frame is reserved for it
So the connection:
- gets to send data at regular intervals
- always in the same slot
- and no other connection can use that slot
Even if the user sends nothing for a moment, their slot:
- still appears in every frame
- still goes unused
- but remains reserved
Because these frames repeat extremely fast, communication feels continuous.
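The frame-and-slot structure can be sketched as follows. This is a toy model with assumed slot counts: circuit i always owns slot i, and a reserved slot goes by empty if its circuit has nothing to send:

```python
SLOTS_PER_FRAME = 4                      # assumed frame size
circuits = {0: "call-A", 2: "call-B"}    # slot -> circuit; slots 1, 3 unassigned

def build_frame(data_ready):
    """Assemble one TDM frame. A reserved slot carries data only if its
    circuit has something to send -- but it is never given to anyone else."""
    frame = []
    for slot in range(SLOTS_PER_FRAME):
        owner = circuits.get(slot)
        if owner and data_ready.get(owner):
            frame.append(owner)          # circuit uses its reserved slot
        else:
            frame.append(None)           # slot goes by empty (wasted)
    return frame

print(build_frame({"call-A": True, "call-B": True}))   # ['call-A', None, 'call-B', None]
print(build_frame({"call-A": True, "call-B": False}))  # ['call-A', None, None, None]
```

The second frame shows the inefficiency: call-B is silent, but its slot still cannot be used by anyone else.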
So far, we’ve discussed circuit switching conceptually. But what does this actually mean in practice?
Let’s walk through a small numerical example to see how data flows in a circuit-switched network and why, once a circuit is established, the transmission time does not depend on how many links the data crosses.
A Numerical Example
Let’s now consider a concrete example.
Given
Suppose:
- A file of 900,000 bits needs to be sent from Host A to Host B
- The network uses Time-Division Multiplexing (TDM)
- Each link supports 30 time slots
- The total link transmission rate is 3 Mbps
- It takes 0.4 seconds to establish an end-to-end circuit
Step 1: Transmission Rate per Circuit
In a TDM-based circuit-switched network:
- The total link capacity is shared equally among all time slots
- Each circuit gets one fixed time slot in every frame
So the transmission rate available to one circuit is: 3 Mbps / 30 = 0.1 Mbps = 100 kbps
This rate is reserved exclusively for the entire duration of the connection.
Step 2: Time to Transmit the File
Now that the circuit exists, data flows at a constant rate of 100 kbps.
The time required to transmit the file is: (900,000 bits) / (100,000 bits/sec) = 9 seconds
Step 3: Add Circuit Setup Time
Before any data is sent, the network must establish the circuit. So the total time becomes: 9 seconds + 0.4 seconds = 9.4 seconds
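The whole calculation, using the values from the example above, fits in a few lines:

```python
# Values from the worked example above.
file_size_bits = 900_000
link_rate_bps = 3_000_000    # 3 Mbps total link transmission rate
slots = 30                   # TDM time slots per frame
setup_time_s = 0.4           # time to establish the end-to-end circuit

per_circuit_bps = link_rate_bps / slots              # 100,000 bps = 100 kbps
transfer_time_s = file_size_bits / per_circuit_bps   # 9.0 seconds
total_time_s = setup_time_s + transfer_time_s        # 9.4 seconds

print(per_circuit_bps, transfer_time_s, total_time_s)  # 100000.0 9.0 9.4
```

Note that once the circuit is up, the 9-second transfer time does not depend on how many links the path crosses, because the data streams through the reserved circuit rather than being stored and forwarded packet by packet.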
Packet Switching vs Circuit Switching: The Key Difference
Circuit switching and packet switching solve the same problem in very different ways.
Circuit switching reserves bandwidth in advance, giving predictable delays and guaranteed performance. This works well for traditional voice calls, but it wastes resources when users are idle.
Packet switching does not reserve resources. Instead, packets share the network dynamically. Although this can lead to variable delays, it uses network capacity far more efficiently by taking advantage of the fact that users send data in bursts rather than continuously.
This efficiency and scalability are the main reasons packet switching became the foundation of the Internet.
Conclusion
In this part of the series, we explored how data moves through a network using packet switching and circuit switching, and why the Internet ultimately adopted packet switching.
In Part 4, we’ll zoom out and look at the Internet as a network of networks, and see how different networks connect to form the global Internet.
How Data Travels: Packet Switching vs Circuit Switching was originally published in InfoSec Write-ups on Medium.