A GNSS simulator is an essential test instrument for dynamic platforms of all kinds. The simulator produces replica GNSS signals that are controllable and repeatable, allowing the assessment and qualification of PNT systems such as GNSS receivers.
For applications where PNT capabilities are safety- and liability-critical, one goal of the test regime should be to replicate the real-world GNSS signal environment as realistically as possible, accelerating development while maintaining the integrity of results.
As I noted in my previous blog, one factor that can affect test realism is latency in the test environment. Small delays in message transmission and simulator operations can reduce realism and introduce uncertainty into the results of tests conducted with hardware in the loop (HIL).
In this blog, I’ll look at where latency can occur in a HIL simulation environment, and provide some pointers on how to minimize it in tests. If you want to dive deeper, read our white paper.
Understanding latency in GNSS signal simulation
The main task of a GNSS signal simulator is to convert command inputs into analog RF outputs. The realism of the simulation can be influenced by latency occurring at any point between command creation and output. The more delay there is, the less realistic the test, since the output trajectory and signal information will not match the intended truth. And if the amount of latency differs between test runs, as can be the case with lower-quality systems, the tests are not truly like-for-like and the results become less reliable.
It is in HIL test environments that latency becomes especially important. Here, realism requires a high level of synchronization between all of the hardware and simulators in the environment to maintain data and time coherence and ensure consistency across multiple test runs. As this can be complex to achieve, a good first step is always to determine whether you need to have hardware in the loop at all, or whether you can test the performance of the device under test in another way.
Determining latency in GNSS signal simulation
In my previous blog, I discussed the different components of latency in a HIL environment and how understanding these is critical to optimizing your testbed.
As a recap, the five different latency values of the simulation are:
Network latency: The interval between a message being sent from an external piece of equipment and it being received by the simulator.
Sampling uncertainty: The interval between the message being received at the simulator and it being processed by the simulator.
Update latency: The interval between the message being processed by the simulator and the model being updated in the simulator’s software.
Output latency: The interval between the model being updated in the simulator software and the signal being realized as an analog radio frequency (RF) output.
System latency: The combination of sampling uncertainty, update latency, and output latency present in the simulator.
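To make the relationship between these components concrete, here is a minimal sketch of a latency budget. All of the millisecond values are hypothetical placeholders, not figures from any real simulator; only the way they combine follows the definitions above.

```python
# Illustrative HIL latency budget (all values hypothetical, in ms).
# System latency = sampling uncertainty + update latency + output latency;
# network latency is counted separately, as in the definitions above.

NETWORK_LATENCY_MS = 0.4        # message transit: external equipment -> simulator
SAMPLING_UNCERTAINTY_MS = 0.5   # wait for the simulator to pick up the message
UPDATE_LATENCY_MS = 0.5         # model update inside the simulator software
OUTPUT_LATENCY_MS = 0.5         # model update -> analog RF output

system_latency_ms = SAMPLING_UNCERTAINTY_MS + UPDATE_LATENCY_MS + OUTPUT_LATENCY_MS
end_to_end_ms = NETWORK_LATENCY_MS + system_latency_ms

print(f"system latency: {system_latency_ms:.1f} ms")
print(f"end to end:     {end_to_end_ms:.1f} ms")
```

The point of splitting the budget this way is that the first term is a property of your network, while the last three are properties of the simulator; the two sections below tackle them separately.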
Minimizing latency in the network
Once you understand where latency is occurring in the simulation, there are two key parts of the testbed where you can work to minimize it: the network and the simulator.
The first two types of latency shown above—network latency and sampling uncertainty—are dependent on the network configuration, rather than the signal simulator (though the update rate of the simulator is relevant in sampling uncertainty). When configuring the testbench, the choice of communications protocols, quality of cabling, and the performance of external units such as the Ethernet switch are important tools in minimizing overall latency.
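Before tuning cabling and switches, it helps to measure what you currently have. The sketch below estimates network latency with a timestamped echo over UDP; a local loopback socket stands in for the simulator's control port, so the endpoint and message contents are purely illustrative.

```python
# Hypothetical sketch: estimate one-way network latency by timing a
# round-trip echo. A loopback UDP socket stands in for the simulator here.
import socket
import time

# Stand-in "simulator" echo endpoint on loopback.
echo = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
echo.bind(("127.0.0.1", 0))
addr = echo.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

t0 = time.perf_counter()
client.sendto(b"remote-command", addr)
payload, sender = echo.recvfrom(1024)   # "simulator" receives the command
echo.sendto(payload, sender)            # ...and echoes it straight back
client.recvfrom(1024)
t1 = time.perf_counter()

one_way_ms = (t1 - t0) / 2 * 1000       # rough one-way estimate
print(f"approx. one-way network latency: {one_way_ms:.3f} ms")

client.close()
echo.close()
```

On loopback the result will be tiny; the same pattern run against real testbed hardware reveals how much of your budget the network alone consumes.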
However, there is one way the GNSS simulator can help in minimizing sampling uncertainty. If it provides an external one pulse per second (1 PPS) signal input/output aligned to the simulation update rate, you can use this to synchronize the simulator’s internal simulation cycles with the update rates in other systems and simulators in the testbench, ensuring that commands can be received as close to the next simulation cycle as possible.
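The scheduling idea behind that 1 PPS alignment can be sketched as follows. The update rate and timestamps are illustrative, and the helper function is hypothetical; the assumption is that cycle boundaries are phase-aligned to the shared 1 PPS epoch, so every second contains a whole number of cycles.

```python
# Hypothetical sketch: given a 1 PPS-aligned simulation cycle, work out how
# long to wait so a command lands just before the next cycle boundary.

UPDATE_RATE_HZ = 1000           # e.g. a 1 kHz simulation iteration rate
CYCLE_S = 1.0 / UPDATE_RATE_HZ

def time_to_next_cycle(now_s: float, pps_epoch_s: float = 0.0) -> float:
    """Seconds until the next simulation cycle boundary.

    Assumes cycle boundaries are aligned to the shared 1 PPS epoch, so
    every second contains an integer number of cycles.
    """
    elapsed = (now_s - pps_epoch_s) % CYCLE_S
    return (CYCLE_S - elapsed) % CYCLE_S

# A command arriving 0.3 ms into a 1 ms cycle waits roughly 0.7 ms.
print(f"{time_to_next_cycle(12.3003) * 1000:.3f} ms")
```

Sending commands just ahead of a known boundary, rather than at arbitrary times, is what squeezes sampling uncertainty toward zero.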
Minimizing system latency in the GNSS simulator
In contrast to network latency and sampling uncertainty, system latency is wholly dependent on the specifications of the simulator. This value must be known in advance to ensure the test set-up meets requirements.
System latency depends on several factors, including the type of hardware used and the quality of internal cabling and connections. In addition, software algorithms are key in determining the quality and speed of model calculations.
It’s important to note that update rates and latency are not the same across all systems, and are not always consistent within individual simulators. In Spirent simulators, for example, system latency is highly consistent and is expressed as a fixed integer number of iterations of the simulation iteration rate (SIR) and hardware update rate (HUR) of the system in use. A Spirent simulator that operates at 2 kHz, such as the GSS9000, will always have a system latency of <2 ms, but a different 1 kHz simulator could produce very different values.
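A quick calculation shows why the iteration rate matters so much when latency is a fixed number of iterations. The iteration count below is hypothetical; only the 2 kHz rate and the <2 ms bound come from the GSS9000 example above.

```python
# Illustrative latency-budget check. The 3-iteration figure is hypothetical;
# only the 2 kHz rate and <2 ms bound come from the GSS9000 example.

def max_system_latency_s(iteration_rate_hz: float, latency_iterations: int) -> float:
    """Worst-case system latency when it is a fixed whole number of iterations."""
    return latency_iterations / iteration_rate_hz

# At 2 kHz, a (hypothetical) 3-iteration latency is 1.5 ms -- inside 2 ms.
print(f"{max_system_latency_s(2000, 3) * 1000:.1f} ms")

# The same 3-iteration latency on a 1 kHz simulator would be 3 ms.
print(f"{max_system_latency_s(1000, 3) * 1000:.1f} ms")
```

The halved iteration period at 2 kHz halves the latency for the same iteration count, which is why the quoted rate alone doesn't tell you whether a simulator meets your budget.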
Minimizing latency across multiple simulators
One complicating factor is the fact that a HIL test setup is likely to include multiple simulators, each with its own HUR and SIR. Here, the key to minimizing latency is to synchronize all of the simulators and to align the intervals at which messages are sent, received, and processed. It’s wise to interrogate systems under the full range of operating conditions to understand variance, tolerances, and the level of uncertainty within the loop. With these factors optimized across the environment, a HIL test rig can deliver on the efficiency that it promises.
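One way to think about aligning those intervals: if the simulators' update rates share a common divisor, their cycle boundaries coincide at that rate, and message exchange can be scheduled on the shared boundaries. The rates below are illustrative, not taken from any particular product.

```python
# Hypothetical sketch: find how often the update cycles of several
# simulators coincide, so messaging can be scheduled on shared boundaries.
from math import gcd

update_rates_hz = [2000, 1000, 250]      # e.g. three simulators in the loop

common_rate_hz = gcd(*update_rates_hz)   # boundaries coincide at this rate
coincidence_ms = 1000 / common_rate_hz

print(f"cycles align every {coincidence_ms:.1f} ms (at {common_rate_hz} Hz)")
```

If the rates share no useful common divisor, cycle boundaries rarely line up, which is exactly the variance and uncertainty the paragraph above recommends interrogating under the full range of operating conditions.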
This blog has provided an introduction to minimizing latency in a HIL test setup, but for a deeper dive, download our new white paper.