TDD vs FDD again: low latency backhaul
In a recent blog I talked about the differences between TDD and FDD systems and how to compare system capacities correctly.
The other big difference between TDD and FDD systems is in the overall system latency.
Before going into the detailed differences, what do we mean by latency, and why is it important?
What is “good” latency, “ok” latency and “poor” latency in mobile backhaul terms?
Latency measures how long it takes a packet of data to travel from one point in the network to another.
It’s very common in mobile networks to talk about the round-trip latency between a node B or e-node B (at the edge of the operator’s RAN) and the packet core.
As ‘round-trip’ suggests, this is the time taken for a packet to transit from the node B to the core and for the response to come back, excluding any time spent processing the packet and generating the response. This is the same as the ‘ping time’ you often hear gamers talking about.
Round-trip latency is an important design parameter for modern mobile networks because it has a very large effect on the end user’s perceived quality of experience (QoE).
We’ve all experienced the ‘lag’ when our smartphone first tries to access the data network.
Reducing bearer access latency on the handset-to-network interface (the Uu interface) in order to improve QoE was a major design goal of LTE.
This has been so successful that backhaul latency is now under the spotlight.
The NGMN Alliance recommendation in its document NGMN Optimised Backhaul Requirements is that the total round-trip latency budget for the network between a node B and the packet core must be 10ms or less, and should be less than 5ms.
The share of this budget allocated to the tail-link backhaul therefore has to be small.
The recent 'Backhaul technologies for small cells' study from the Small Cell Forum classifies backhaul system latency as follows:
Latency (ms) | Category
< 1ms        | Good
1-5ms        | Ok
> 5ms        | Poor
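The bands above can be expressed as a simple classification function. This is just an illustrative sketch; the Small Cell Forum study lists the bands as <1ms, 1-5ms and >5ms, so the handling of the exact boundary values here is my assumption.

```python
def backhaul_latency_category(latency_ms: float) -> str:
    """Classify round-trip backhaul latency into the Small Cell Forum
    bands. Boundary handling at exactly 1 ms and 5 ms is assumed
    (the study only gives the bands <1, 1-5 and >5 ms)."""
    if latency_ms < 1.0:
        return "Good"
    if latency_ms <= 5.0:
        return "Ok"
    return "Poor"
```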
So what does this have to do with FDD and TDD systems?
A TDD system uses the same frequency for upstream and downstream transmissions.
So at either end of the link, a radio is essentially in “send” mode or “receive” mode.
What happens when a packet that we want to send arrives at the radio link, but the radio is in “receive” mode? Simply enough, it has to wait until the radio is back in “send” mode.
In a round-trip, the packet will have to wait for the radio to be in “send” mode twice!
In contrast, in an FDD system, we are simultaneously in “send” mode on one frequency and “receive” mode on another. So when a packet arrives at the radio link we can send it immediately.
For this reason, FDD systems in general have lower latency than TDD systems.
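The extra waiting time this introduces can be estimated with a quick Monte Carlo sketch. The frame length and send/receive split below are hypothetical illustration parameters, not figures from any particular TDD product: the model assumes the first half of each frame is the “send” slot, and that packets arrive at a uniformly random point in the frame.

```python
import random

def avg_tdd_wait_ms(frame_ms=5.0, send_fraction=0.5,
                    trials=100_000, seed=42):
    """Estimate the average time a packet waits for the radio to be in
    'send' mode on a TDD link. Assumes an illustrative frame layout:
    the first `send_fraction` of each frame is the send slot, the rest
    is the receive slot (hypothetical parameters for illustration)."""
    random.seed(seed)
    send_end = frame_ms * send_fraction
    total_wait = 0.0
    for _ in range(trials):
        t = random.uniform(0.0, frame_ms)  # arrival time within a frame
        # Arrive during the send slot: transmit immediately (zero wait).
        # Arrive during the receive slot: wait for the next frame's
        # send slot to begin.
        total_wait += 0.0 if t < send_end else frame_ms - t
    return total_wait / trials
```

With a 5ms frame split 50/50, half of all packets arrive in the receive slot and wait on average half its duration, so the expected extra delay is about 0.625ms per direction, and it is incurred twice per round trip. An FDD link, by contrast, adds none of this slot-waiting delay in either direction.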
VectaStar, the market leader in multipoint microwave, has an average round-trip latency of 0.7ms.
In comparison, TDD systems quote figures from 4ms to 12ms one-way.
Equally importantly, the amount of delay variation introduced by FDD backhaul is lower.
This is important when we use packet timing techniques for synchronisation, but that’s a topic for another day.