For decades, the Network Time Protocol (NTP) has been the gold standard for synchronizing time across data center networks. But increasingly, that’s no longer the case. Modern networking demands greater precision than ever, and the venerable NTP often falls short.
To overcome this challenge, data centers are now migrating to alternative methods for maintaining network time, such as the Precision Time Protocol (PTP) – an approach that is more complex to implement but worth the extra effort, enabling a whole new level of timing synchronization accuracy.
What Is Network Time, and Why Does It Matter for Data Centers?
Network time refers to the synchronization of time across devices connected to a network.
To exchange data accurately and avoid errors, all devices on a network must be synchronized. Otherwise, issues can arise – for example, one server might mistakenly treat data it received from another server as late simply because the two servers’ clocks disagree.
Keeping network time in sync is important on any network. But it’s especially critical in data centers, which are typically home to large numbers of network-connected devices, and where small inconsistencies in network timing could snowball into major network synchronization problems.
The Limitations of NTP
Since the 1980s, the main approach to synchronizing time across networks has been the Network Time Protocol (NTP). NTP works in a simple, straightforward way: devices on a network periodically check in with an NTP server, which tells them the correct time. If the time on a device is not in sync with the “official” time according to the NTP server, the device updates its time settings.
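As a rough illustration of that check-in pattern, here is a minimal Python sketch. It assumes the third-party ntplib package is installed and that the public pool.ntp.org servers are reachable; a real NTP client would poll on an adaptive schedule and gradually discipline the local clock rather than just printing the offset.

import time
import ntplib  # third-party package, assumed installed for this sketch

def check_in(server="pool.ntp.org"):
    # Ask the NTP server for its notion of the current time.
    client = ntplib.NTPClient()
    response = client.request(server, version=3)
    # response.offset estimates how far (in seconds) the local clock is
    # from the server's clock; response.delay is the measured round trip.
    print(f"offset: {response.offset * 1000:.3f} ms, "
          f"round-trip delay: {response.delay * 1000:.3f} ms")

if __name__ == "__main__":
    # Real NTP daemons poll on an adaptive interval (commonly 64-1,024 seconds);
    # this loop simply illustrates the periodic check-in.
    for _ in range(3):
        check_in()
        time.sleep(64)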
NTP works very well in situations where networks can tolerate timing inconsistencies of up to a few milliseconds (meaning thousandths of a second). But beyond this, NTP-based time syncing is less reliable due to limitations such as:
Periodic check-ins: With NTP, devices only check in with NTP servers periodically – and often, several minutes can go by between verification events. In the meantime, devices may drift out of sync because their local clocks deviate from NTP-based time.
Network latency issues: Even if devices polled NTP servers very frequently, the time requests and responses spend crossing the network – and especially any asymmetry between the outbound and return paths – skews the readings. These errors may amount to only small fractions of a second, but that’s still a problem for data centers that require incredibly precise network timing. (The sketch after this list puts rough numbers on both drift and latency.)
Computational delays: Querying an NTP server and processing its response also takes time because it consumes computational resources – and here again, although the delays may be extremely small, they can still matter.
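To put rough numbers on the drift and latency points above, here is a short back-of-the-envelope sketch in Python. The drift rate and one-way delays are assumed, illustrative figures, not measurements from any particular network.

# Illustrative arithmetic only; the drift rate and delay values below are
# assumptions chosen to show the order of magnitude involved.

drift_ppm = 50            # assumed local oscillator drift: 50 parts per million
poll_interval_s = 64      # a common NTP polling interval
drift_between_polls = drift_ppm * 1e-6 * poll_interval_s
print(f"possible drift between polls: {drift_between_polls * 1000:.1f} ms")
# -> about 3.2 ms can accumulate before the next check-in

# NTP's offset calculation assumes the request and the reply spend equal
# time on the network; any asymmetry leaves half the difference as error.
outbound_delay_s = 0.0009   # assumed 0.9 ms from client to server
return_delay_s = 0.0003     # assumed 0.3 ms from server back to client
asymmetry_error_s = abs(outbound_delay_s - return_delay_s) / 2
print(f"residual error from path asymmetry: {asymmetry_error_s * 1e6:.0f} µs")
# -> roughly 300 µs of error that more frequent polling cannot remove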
These challenges render NTP unsuitable for data centers hosting workloads that demand true real-time data exchange – such as those that support real-time financial trading, robotic process coordination or the management of autonomous vehicles. To support cases like these, data centers need to maintain time with microsecond-level accuracy. (A microsecond is one-thousandth of a millisecond.)
The Switch to PTP
Fortunately for data center operators facing challenges like these, there’s an alternative to NTP: the Precision Time Protocol (PTP).
Unlike NTP, PTP doesn’t rely solely on a software-based server-client model for syncing time across networked devices. Instead, it pairs time servers with a technique called hardware timestamping on client devices: specialized hardware components, usually embedded in network interface cards (NICs), record timestamps the moment packets are sent or received.
Central time servers still exist under PTP. But rather than relying on software running on each server to talk to the time servers, hardware optimized for the task does this work. Because that hardware carries its own clock, it can stamp time data the instant packets arrive or depart, instead of handing them off to the general-purpose clock and software stack on the server.
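To make the mechanism concrete, here is a minimal sketch of the arithmetic a PTP client performs once it has the four timestamps from a standard Sync/Delay_Req exchange with the time server. The nanosecond values are made up for illustration; in production, tools such as the open source linuxptp suite run this exchange continuously and feed the result into a clock-adjustment loop.

# Hypothetical hardware timestamps, in nanoseconds. In practice t2 and t3
# come from the client NIC's own clock and t1 and t4 from the time server,
# which is what keeps the measurement free of software-stack delays.
t1 = 1_000_000_000_000   # Sync message leaves the time server (server clock)
t2 = 1_000_000_015_500   # Sync arrives at the client NIC (client clock)
t3 = 1_000_000_020_000   # Delay_Req leaves the client NIC (client clock)
t4 = 1_000_000_030_500   # Delay_Req arrives at the server (server clock)

# Standard PTP two-way exchange arithmetic:
mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
offset_from_server = ((t2 - t1) - (t4 - t3)) / 2

print(f"mean path delay: {mean_path_delay:.0f} ns")         # 13,000 ns here
print(f"client clock offset: {offset_from_server:.0f} ns")  # 2,500 ns here
# The client then nudges its clock by the computed offset.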
The result is that, in most cases, PTP is much faster and more reliable than NTP. While timing inconsistencies may still occur, they’re usually much smaller under PTP.
Comprehensive statistics about the use of PTP are elusive. But some sources suggest that in real-time financial trading, for example, 85% of workloads now rely on PTP. Others predict that the market for PTP solutions will grow by a CAGR of about 8.5% through 2031.
The Challenges of PTP in Data Centers
While PTP is more powerful and accurate than NTP, it’s also more complex (and, in many cases, more expensive) for data centers to implement. This is mainly because – whereas data center operators can implement an NTP server using just freely available software – PTP requires two key types of hardware components:
NICs with hardware timestamping capabilities. As noted above, these play a key role in enabling PTP-based time measurement. (A quick way to check whether a NIC supports this appears after this list.)
A centralized, highly accurate clock that can feed time data to the PTP server. This is important because PTP is only accurate if the PTP server’s time readings are also accurate – and to achieve highly precise time readings, it works best to deploy a local time-keeping device, such as an atomic clock, directly within a data center. It’s also possible to retrieve time from remote time servers, but doing so usually introduces microsecond-level networking delays – and if your time readings are a few microseconds out of sync, you’ve defeated the whole purpose of using PTP in the first place.
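As a quick way to check the first requirement, the sketch below shells out to the Linux ethtool utility, whose -T option reports an interface’s time-stamping capabilities. The interface name eth0 and the capability strings being matched are assumptions; the exact output varies by driver.

import subprocess

def supports_hw_timestamping(interface="eth0"):
    # "ethtool -T <interface>" lists the NIC's time-stamping capabilities.
    result = subprocess.run(
        ["ethtool", "-T", interface],
        capture_output=True, text=True, check=True,
    )
    output = result.stdout
    # Drivers suitable for PTP typically advertise hardware transmit and
    # receive timestamping and expose a PTP hardware clock device.
    return ("hardware-transmit" in output
            and "hardware-receive" in output
            and "PTP Hardware Clock" in output)

if __name__ == "__main__":
    iface = "eth0"  # assumed interface name
    print(f"{iface} hardware timestamping supported: {supports_hw_timestamping(iface)}")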
On top of this, small inefficiencies in internal data center networking, such as congestion or packet loss issues that delay PTP time polling, can also make PTP inaccurate. Thus, to use PTP effectively, data center operators must optimize their networks and deploy the necessary hardware components.
None of this is easy. But again, it’s increasingly necessary for data centers aiming to redefine the meaning of “real time” and support workloads that just can’t tolerate the syncing delays of a network timing protocol designed forty years ago.
The Future of Time Synchronization in Data Centers
As workloads demand ever-greater precision, PTP is becoming the new standard for data center time synchronization. While its implementation is complex and costly, the benefits of microsecond-level accuracy far outweigh the challenges for data centers supporting real-time applications.
By investing in PTP, operators can future-proof their infrastructure and meet the demands of next-generation technologies.