If you have seen old (and not-so-old) heist movies, you will be familiar with the scene in which the crew gathers before the job to synchronize their watches. And if you have ever owned a watch that was not smart, the reasons for that synchronization are clear: each watch is set by hand, probably against a different reference, and, above all, each one drifts at its own rate, accumulating an error that can reach several seconds or even minutes in a month.
That is a problem from another era. Today, if I want to know the time, I glance at my cell phone or smartwatch and have no doubt that it is exact. This was achieved by correcting the three problems above: clocks are no longer set by hand, we no longer use different references, and the natural drift of each device is now corrected constantly.
But despite these adjustments, and the precision we require as users when we check the time, there are functionalities that demand far greater precision. What are these applications? How have the various synchronization problems been solved? And how does the arrival of 5G change the requirements?
Basic accuracy for end users
The solution to the first problem (manual adjustment) is clear: the adjustment is done automatically through the network, eliminating human error at the moment of synchronization. But where does the network get the time? That is answered by the solution to the second and third problems: standardized protocols that take a reference clock and distribute its time through the packet network.
In the case of our devices, this standardization takes the form of NTP (Network Time Protocol). NTP uses primary servers to receive the reference time for synchronization. That reference can come from a cesium clock, from another NTP system, or from a global navigation satellite system (GNSS); the most common is GPS, operated by the United States, with counterparts such as GLONASS (Russia) and Galileo (European Union). This solves the problem of using a single reference, while the drift of each device is handled by querying that time periodically and adjusting for the difference.
Primary NTP servers send the signal to secondary servers through the telecommunications network, which then distribute it to the clients (the devices). Each hop between servers and nodes to reach the client adds a small delay to the clock signal, caused by both the propagation time of the clock signal (delay) and the difference between clocks (offset). These delays cause NTP clients to receive the time with an accuracy that can differ by as much as a few milliseconds.
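The correction NTP applies can be illustrated with its standard four-timestamp exchange, which yields both the round-trip delay and the clock offset. A minimal sketch in Python (the timestamp values below are invented for illustration):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic NTP calculation from one request/response exchange.

    t1: client transmit time, t2: server receive time,
    t3: server transmit time, t4: client receive time.
    All in seconds, each measured by its own local clock.
    """
    # Round-trip delay excludes the server's processing time (t3 - t2).
    delay = (t4 - t1) - (t3 - t2)
    # Offset: how far the client's clock lags (+) or leads (-) the server's,
    # assuming the network path is symmetric.
    offset = ((t2 - t1) + (t3 - t4)) / 2
    return offset, delay

# Invented example: the client clock is 5 ms behind the server,
# with 20 ms of network delay in each direction.
offset, delay = ntp_offset_delay(t1=100.000, t2=100.025, t3=100.026, t4=100.041)
print(offset, delay)  # offset ≈ 0.005 s, delay ≈ 0.040 s
```

Note that the offset formula assumes a symmetric path; any asymmetry between the upstream and downstream delays shows up directly as synchronization error, which is one source of the millisecond-level inaccuracy mentioned above.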
This accuracy is more than enough to know if I am on time for a meeting or if I have to take food out of the oven, so the NTP solution is suitable for use in end devices, but we cannot say the same for all applications.
Precision requirements for telecommunication networks
When it comes to telecommunications networks, there are functionalities that require accuracies much higher than milliseconds. Let's explore an example: wireless communication using the TDD (Time Division Duplex) technique, present in some LTE systems and in 5G.
To understand it, let's first look at its alternative: FDD (Frequency Division Duplex). The basic requirement is that, in the data exchange between your device and your provider's radio station (the base station), there must be a way to download data (from the base station to the device) and a way to upload data (in the reverse direction). With FDD, a downstream channel is reserved on one frequency and an upstream channel on a different frequency, so the two communications can occur simultaneously.
What TDD does to separate upstream and downstream is divide time into small windows and assign some of them to the uplink and others to the downlink. This improves the efficiency of the frequency spectrum, since only one channel is used, and it makes it possible to allocate resources dynamically to uplink or downlink according to demand. Its main challenge lies in the precision with which the base stations mark the start of these windows.
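As a toy illustration (the slot pattern and slot duration below are invented values, not taken from any 3GPP configuration), a base station can be modeled as cycling through a fixed slot pattern and deriving the current transmission direction from its local clock:

```python
# Hypothetical TDD pattern: D = downlink, U = uplink, S = guard/switch slot.
PATTERN = "DDDSU"
SLOT_US = 500  # slot duration in microseconds (illustrative value)

def direction(local_time_us):
    """Direction a base station assigns to the slot containing local_time_us."""
    slot_index = (local_time_us // SLOT_US) % len(PATTERN)
    return PATTERN[slot_index]

print(direction(0))     # 'D' (first slot of the cycle)
print(direction(2100))  # 'U' (fifth slot of the cycle)
```

The key point is that the direction is a pure function of the local clock: every station in the area must derive the same answer at the same physical instant, which is exactly what synchronization guarantees.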
Cellular coverage over an area is achieved by dividing it into smaller regions, each served by its own base station. Since base stations transmit at high power to reach every device in their region, the transmission from one base station is expected to be received by neighboring base stations as well.
What would happen if one of the base stations had its clock out of phase by, say, an entire TDD window? What it interprets as downlink could be received by its neighbor as uplink, generating significant interference in the network.
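This failure mode can be sketched with a small simulation (the pattern and slot duration are invented illustrative values): a station whose clock is shifted by one full slot ends up transmitting downlink exactly when its synchronized neighbor expects uplink.

```python
# Hypothetical TDD pattern: D = downlink, U = uplink, S = guard/switch slot.
PATTERN = "DDDSU"
SLOT_US = 500  # microseconds per slot (illustrative)

def direction(local_time_us):
    return PATTERN[(local_time_us // SLOT_US) % len(PATTERN)]

def conflicts(offset_us, horizon_slots=20):
    """Slot start times where a station whose clock is shifted by offset_us
    transmits downlink while its synchronized neighbor is receiving uplink."""
    bad = []
    for i in range(horizon_slots):
        t = i * SLOT_US
        if direction(t) == "U" and direction(t + offset_us) == "D":
            bad.append(t)
    return bad

print(conflicts(0))        # [] -- perfectly synchronized, no interference
print(conflicts(SLOT_US))  # every uplink slot collides with the neighbor's downlink
```

With zero offset the schedules agree everywhere; with a one-slot offset, every uplink window of the synchronized station is hit by the misaligned neighbor's high-power downlink transmission.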
How do we get more accurate clocks?
To remedy clock mismatches when the required accuracy is higher, the Precision Time Protocol (PTP) was standardized and began to be deployed for LTE systems. PTP enables high-accuracy clock synchronization in distributed systems using a master-slave approach, in which a master clock sends time signals to slave clocks over the network. The slave clocks adjust their local clocks to stay synchronized with the master.
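PTP's basic measurement resembles NTP's, but the exchange is split into a Sync message (master to slave) and a Delay_Req/Delay_Resp pair (slave to master). A sketch of the calculation, with invented timestamps:

```python
def ptp_offset(t1, t2, t3, t4):
    """One PTP delay-request exchange.

    t1: master sends Sync, t2: slave receives Sync,
    t3: slave sends Delay_Req, t4: master receives Delay_Req.
    Assumes a symmetric path; asymmetry appears directly as offset error.
    """
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    # Offset of the slave clock relative to the master (positive = slave ahead).
    offset = (t2 - t1) - mean_path_delay
    return offset, mean_path_delay

# Invented example: slave clock 3 µs ahead of the master, 10 µs path delay.
offset, delay = ptp_offset(t1=0.0, t2=13e-6, t3=20e-6, t4=27e-6)
print(offset, delay)  # ≈ 3e-06 s offset, 1e-05 s mean path delay
```

What pushes PTP's accuracy far beyond NTP's is not this arithmetic but hardware timestamping at the physical interface and on-path support (boundary and transparent clocks) that removes queuing delay from the measurement.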
The master, much like a primary NTP server, takes its reference time from a GNSS receiver. This raises a question: why use PTP to distribute time over the network if every base station could receive a GPS signal directly? The question is not only valid, but also describes an existing synchronization solution on the market, known as GNSS Everywhere.
Although this approach has the advantage of not losing accuracy through network propagation, its main disadvantage is the possibility of losing the GNSS satellite signal due to bad weather or poor line of sight between the base station and the satellites. If a base station loses the signal, it can start to interfere with all adjacent stations, so it must be taken out of service.
In contrast, PTP clocks have a high-precision internal oscillator that can maintain the clock signal for hours or even days after the GNSS signal is lost, making them robust to circumstantial reception problems. In addition, GNSS reception is centralized at a few points, from which time is then distributed throughout the network without requiring a receiver at each node, reducing costs.
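A back-of-the-envelope holdover calculation shows why this matters. The drift rate below is an assumed figure for a good oven-controlled oscillator, not a datasheet value; the 1.5 µs budget is the 5G requirement discussed in this article:

```python
def holdover_seconds(max_error_s, drift_rate):
    """Time until accumulated error exceeds max_error_s, assuming a constant
    fractional frequency error (dimensionless drift rate)."""
    return max_error_s / drift_rate

# Assumed oscillator holding a fractional frequency error of 1e-10 (0.1 ppb),
# against a total error budget of 1.5 microseconds.
seconds = holdover_seconds(1.5e-6, 1e-10)
print(seconds / 3600)  # ≈ 4.2 hours before the budget is consumed
```

In practice part of that budget is already consumed by the network itself, so the usable holdover window is shorter; the point is the order of magnitude — hours rather than seconds — which is enough to ride out a weather-related GNSS outage.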
What changes with 5G?
In NTP, we are talking about millisecond accuracy. In 3G, the requirements set a maximum error of 10 microseconds (two to three orders of magnitude tighter). In 4G or LTE, it is 5 microseconds. In 5G, the maximum error a base station can accumulate without generating interference is 1.5 microseconds, measured from the clock signal to the base station transmission and including all intermediate hops.
Conditions are more restrictive than ever because of the new technologies enabled by the 5G paradigm. The substantial reduction in the time each communication spends in the network (latency), highly directive per-user transmission techniques (beamforming), and the aggregation of frequency resources to increase a connection's bandwidth (carrier aggregation) are all examples that raise the bar for synchronization, and they translate into a better user experience and, more generally, a leap forward in telecommunications.
Accuracy requirements are becoming increasingly high, and this makes synchronization systems a fundamental part of the operation of networks, both telecommunications and other systems (including utilities). This is why it is necessary to resort to synchronization solutions that meet the requirements to ensure smooth and reliable connectivity. In addition, the vulnerabilities generated by the dependence on GNSS signals must be addressed, but that is a topic for another article.
Pablo is an Electrical Engineer with a specialization in telecommunications and a Master's student in Electrical Engineering at the Universidad de la República. He has more than 7 years of experience in the telecommunications field and works in pre-sales, implementation and project support.