When evaluating distributed databases like CockroachDB, customers often ask for what seems reasonable: fast transactions, regional survivability, and strong consistency. It sounds simple - low latency, resilience, and consistency in one database - but physics says otherwise. Here’s why you can’t have all three.
The Core Problem: Distance, Quorums, and Consistency
The challenge lies at the intersection of three unbreakable constraints: the speed of light, quorum-based replication, and strong consistency. To see why no system can optimize for all three, we first need to understand each on its own.
The Speed of Light Constraint
Light travels at approximately 299,792 km/s in a vacuum. In fiber optic cables, it travels at roughly two-thirds of that speed, or about 200,000 km/s. This physical limit creates an unavoidable latency floor for any communication between geographically distant locations.
Consider the distance between major cloud regions. The distance from Northern Virginia to Oregon is approximately 3,700 km. At the speed of light in fiber, a signal takes at least 18.5 ms to traverse this distance one way. A round-trip communication requires a minimum of 37 ms, and this assumes a perfectly direct fiber path with no routing overhead, switching delays, or other real-world impediments.
In practice, cross-country latencies typically range from 60 to 80 ms round-trip. Cross-ocean connections are even worse. For example, the New York to London route spans roughly 5,500 km, giving a theoretical minimum of 55 ms round-trip, with practical latencies often exceeding 80 ms. These numbers are not the result of inefficient software or poor network configuration. They are the consequence of fundamental physics.
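To make that floor concrete, here is a minimal sketch that computes the theoretical minimum latency from distance alone. The routes and distances are the approximate figures cited above; real paths add routing and switching overhead on top of these numbers.

```go
package main

import "fmt"

// Speed of light in fiber: roughly two-thirds of c, about 200,000 km/s.
const fiberSpeedKmPerSec = 200_000.0

// latencyFloorMs returns the theoretical minimum one-way and round-trip
// latencies in milliseconds for a given fiber distance in kilometers.
func latencyFloorMs(distanceKm float64) (oneWay, roundTrip float64) {
	oneWay = distanceKm / fiberSpeedKmPerSec * 1000
	return oneWay, 2 * oneWay
}

func main() {
	routes := []struct {
		name string
		km   float64
	}{
		{"N. Virginia -> Oregon", 3700},
		{"New York -> London", 5500},
	}
	for _, r := range routes {
		ow, rt := latencyFloorMs(r.km)
		fmt.Printf("%-22s one-way >= %.1f ms, round-trip >= %.1f ms\n", r.name, ow, rt)
	}
}
```

Running this reproduces the figures above: 18.5 ms one-way (37 ms round-trip) for Virginia to Oregon, and 27.5 ms one-way (55 ms round-trip) for New York to London.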
Quorum Replication Requirements
Distributed databases that provide fault tolerance typically use quorum-based replication, which keeps data consistent and durable across multiple replicas. Instead of requiring every replica to acknowledge a write, the system only needs a majority (a quorum) to confirm it, ensuring that data remains available and consistent even if some replicas fail. For a three-replica system, a quorum requires two replicas. For five replicas, a quorum requires three.
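The majority math is simple enough to sketch directly; `quorumSize` and `maxFailures` are illustrative helper names here, not part of any database's API.

```go
package main

import "fmt"

// quorumSize returns the majority needed to commit a write with n replicas.
func quorumSize(n int) int { return n/2 + 1 }

// maxFailures returns how many replica losses the system can tolerate
// while still being able to form that majority.
func maxFailures(n int) int { return n - quorumSize(n) }

func main() {
	for _, n := range []int{3, 5, 7} {
		fmt.Printf("replicas=%d quorum=%d tolerated failures=%d\n",
			n, quorumSize(n), maxFailures(n))
	}
}
```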
Why use quorums? The answer lies in availability and durability. Unlike legacy replication methods that often rely on updating a single primary replica or waiting for all replicas to catch up — making them vulnerable to data loss or downtime if that replica fails — quorum-based replication ensures durability and availability by requiring a majority of replicas to acknowledge a write, thereby enabling the system to tolerate individual node failures while maintaining consistent and reliable operations.
Replica placement is driven by your availability requirements. If your database must survive the loss of any geographic region, replicas must be spread across three (or more) physical regions. To commit a write, the database must receive acknowledgments from a majority of replicas, so with three replicas in different regions, at least two must acknowledge the write, and at most one of those acknowledgments can come from the local region. That unavoidable remote round trip introduces cross-region latency into every write operation in a multi-region cluster.
The fundamental rule is simple: the latency of a write operation is determined by the round-trip time to the nearest replicas needed to complete a quorum. With three replicas, that means the closest other replica; with five, the second closest. If your quorum includes replicas separated by thousands of kilometers, your write latency will be dominated by the time required for light to travel those distances.
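Here is a small sketch of that rule, under the simplifying assumptions that a leader coordinates the write and acknowledges it locally, and that round-trip times to the other replicas are known and stable. The region names and RTT figures are hypothetical examples.

```go
package main

import (
	"fmt"
	"sort"
)

// writeLatencyMs estimates the commit latency of a quorum write. The leader
// acknowledges locally, then must hear back from enough remote replicas to
// form a majority: with n replicas it needs (n/2 + 1) - 1 remote acks, so the
// commit waits on the k-th closest remote replica.
func writeLatencyMs(remoteRTTs []float64) float64 {
	n := len(remoteRTTs) + 1 // total replicas, including the leader
	needed := (n/2 + 1) - 1  // remote acks beyond the leader's own
	rtts := append([]float64(nil), remoteRTTs...)
	sort.Float64s(rtts)
	return rtts[needed-1]
}

func main() {
	// Hypothetical RTTs (ms) from a leader in us-east-1 to replicas in
	// us-west-2 and eu-west-1.
	fmt.Printf("3-region write commits in ~%.0f ms\n",
		writeLatencyMs([]float64{65, 140}))
}
```

With three regions, the farthest replica's 140 ms never enters the picture: the write commits as soon as the 65 ms replica responds, which is exactly why the nearest quorum-completing replica sets the floor.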
Strong Consistency and Linearizability
Strong consistency, also called linearizability, provides a powerful guarantee: the system behaves as if there is only a single copy of the data, and all operations appear to occur instantaneously at some point between their invocation and completion. Once a write is acknowledged (ACK’ed), all subsequent reads will see that write or a later write, never an earlier state.
This guarantee is what makes distributed databases easy to reason about. Application developers can write code as if they were working with a single-node database, without worrying about stale reads, lost updates, or other anomalies common in weakly consistent systems. However, achieving linearizability in a distributed system requires careful coordination.
To provide linearizability, the database must ensure that operations are serialized in a globally consistent order. To do so, the system acknowledges a write only after a quorum of replicas has agreed on the operation's placement in the global sequence.
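As an analogy for what linearizability promises, consider a single-node register guarded by a mutex. It is trivially linearizable, because the lock forces a total order on operations; a distributed database must present the same externally observable behavior even though the data actually lives on many replicas.

```go
package main

import (
	"fmt"
	"sync"
)

// register is a trivially linearizable, single-copy register: the mutex
// imposes a total order on operations, so every read observes the latest
// acknowledged write, never an earlier state.
type register struct {
	mu sync.Mutex
	v  int
}

func (r *register) write(v int) { r.mu.Lock(); r.v = v; r.mu.Unlock() }
func (r *register) read() int   { r.mu.Lock(); defer r.mu.Unlock(); return r.v }

func main() {
	var r register
	r.write(42)           // once this returns (the "ACK")...
	fmt.Println(r.read()) // ...every later read sees 42 or a newer value
}
```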
The Impossible Trinity: Performance, Availability, and Consistency
We can now state the fundamental impossibility precisely. Consider three desirable properties:
1. Low-latency writes within a region: Transactional operations complete in single-digit milliseconds, comparable to a single-region database.
2. Survival of complete regional failures: The system remains available and accepts writes even if an entire cloud region or data center becomes unavailable.
3. Strong consistency: All operations are linearizable, with no stale reads or anomalies.
You can achieve any two of these properties, but not all three simultaneously. Keep all replicas in one region and you get low latency and strong consistency, but a regional outage takes you down; spread a quorum across regions and you get survivability and consistency, but every write pays cross-region latency; replicate asynchronously across regions and you get speed and survivability, but give up linearizability. This is not a limitation of any particular database system. It is a consequence of the mathematics of distributed consensus combined with the physics of communication.
So while no database can bend the laws of physics, the best ones help you work with them. CockroachDB gives you the control to balance performance, consistency, and resilience on your own terms.
Try CockroachDB Today
Spin up your first CockroachDB Cloud cluster in minutes. Start with $400 in free credits. Or get a free 30-day trial of CockroachDB Enterprise on self-hosted environments.