
Generic Receiver Description

From Navipedia
Revision as of 18:44, 31 March 2011 by Rui.Sarnadas


Receivers
Title Generic Receiver Description
Author(s) GMV
Level Basic
Year of Publication 2011

GNSS receivers are responsible for processing the L-band Signals In Space (SIS) transmitted by the GNSS satellites. Each satellite broadcasts a continuous signal in the GHz range, modulated by a periodic digital code (called a pseudo-random noise code, or PRN code) and further modulated with a data message. User receivers search for the presence of these radio signals and try to synchronize with them. In this sense, a GNSS receiver can be seen as a radionavigation user device that tracks the GNSS signals in order to demodulate them correctly and extract measurements and navigation information; one example is decoding the transmitted navigation message and computing the user's position.

The following sections present an overview of a typical GNSS receiver structure and processing chain.

Receiver overview

Although receiver architectures are tailored to the different GNSS systems available and to different target applications, the basic building blocks of a generic GNSS receiver are as shown in Figure 1:

  • Antenna - L-band antenna, capturing GNSS signals, noise and possible interference.
  • Front End - The front-end typically down-converts, filters, amplifies and digitizes the incoming signals.
  • Applications Processing - Depending on the envisaged application, the receiver performs different tasks with the resulting GNSS information, and provides meaningful results to the user.
Figure 1: Generic Receiver Architecture.

In a typical receiver implementation[1], the Signals In Space (SIS) arriving at the antenna are down-converted, filtered, and digitized in the front end section. This process ultimately generates a baseband representation of the desired GNSS spectrum, yielding samples with I (In-Phase) and Q (Quadrature) components, i.e. the real and imaginary parts of the complex baseband signal.
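The final down-conversion step can be illustrated with a short sketch (Python/NumPy, using hypothetical sampling and intermediate-frequency values, not those of any particular receiver): a digitized carrier is mixed with local cosine and sine references, and a crude low-pass filter (here, a plain average) removes the double-frequency mixing products, leaving the baseband I and Q components from which the carrier phase can be recovered.

```python
import numpy as np

# Hypothetical front-end values: 4 MHz intermediate frequency, 16 MHz sampling.
fs = 16e6
f_if = 4e6
n = 1024
t = np.arange(n) / fs

# Simulated digitized front-end output: a carrier with an unknown phase offset.
phase = 0.7
signal = np.cos(2 * np.pi * f_if * t + phase)

# Mix with local cosine and sine references at the intermediate frequency.
i_arm = signal * np.cos(2 * np.pi * f_if * t)
q_arm = signal * -np.sin(2 * np.pi * f_if * t)

# A crude low-pass filter (averaging) removes the 2*f_if mixing products,
# leaving the baseband In-phase and Quadrature components.
i_bb = i_arm.mean()
q_bb = q_arm.mean()

# The I/Q pair preserves the carrier phase information.
recovered_phase = np.arctan2(q_bb, i_bb)
```

In a real front end the filtering, automatic gain control and quantization are considerably more involved; the sketch only shows why the I/Q representation preserves the phase of the incoming carrier.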

Baseband signal processing gathers all the algorithms needed to find and follow a visible GNSS signal, by synchronizing with its known PRN code and removing errors as far as possible. This process is built around the principle of signal correlation: the incoming signal is repeatedly correlated with a replica of the expected PRN code, which is known a priori. For the correlation to be meaningful, the local replica is generated in the receiver taking into account the signal's carrier frequency, code delay, Doppler frequency, and spreading code (which is unique to each satellite/signal).
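The correlation principle can be sketched as follows (Python/NumPy; a random ±1 sequence stands in for a real PRN code, and the delay and noise values are illustrative only): the received signal is correlated against every circular shift of the local replica, and the shift that maximizes the correlation reveals the code delay.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy +/-1 spreading sequence standing in for a real PRN code (illustrative length).
code = rng.choice([-1.0, 1.0], size=1023)

# Received signal: the code circularly delayed by 300 chips, buried in noise.
true_delay = 300
received = np.roll(code, true_delay) + rng.normal(0.0, 1.0, size=code.size)

# Correlate against every circular shift of the local replica.
correlations = np.array([np.dot(received, np.roll(code, d))
                         for d in range(code.size)])

# A sharp peak appears only where the replica lines up with the incoming code.
estimated_delay = int(np.argmax(correlations))
```

The sharpness of this peak is what makes spread-spectrum ranging work: at any misaligned shift the products average out to a small value, while at the correct shift they all add constructively.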

Figure 2: Example of an acquisition process, showing the Doppler / code delay search space and correlation peak that indicates presence of a signal with code delay of 650 chips and Doppler frequency of -1750 Hz.

The correlation principle is first used to search for the satellites in view. After a receiver starts operating, it first needs to know which satellites are visible and can be tracked to extract measurements. This process is known as acquisition, and is based on several correlations between the incoming signal and multiple replicas of the possible "expected" signals, generated for different code delays and Doppler frequencies. In fact, because the signal originates from moving satellites and travels through space at the speed of light, Doppler[2] shifts and code delays are observed in the received signals. Therefore, the first unknown in detecting GNSS signals is the amount of delay and relative motion between the transmitted signal and the receiver. To search for the signals, different local replicas (corresponding to different code delay / Doppler frequency pairs) are generated and correlated with the input signal. If a correlation peak is observed for a given replica (see Figure 2), there is a good chance that the signal with that spreading code is visible, and the code delay and Doppler frequency estimates are passed on to the tracking process as a first estimate of the signal's parameters.
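A minimal acquisition sketch (Python/NumPy, with illustrative parameters chosen to mirror Figure 2; a random ±1 code stands in for a real PRN sequence) searches the Doppler / code delay grid by wiping off each candidate Doppler frequency and then computing the circular correlation over all code delays at once via FFT:

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 1.023e6                            # 1 sample per chip over 1 ms (simplification)
n = 1023                                # code length in chips
t = np.arange(n) / fs

code = rng.choice([-1.0, 1.0], size=n)          # stand-in PRN code
true_delay, true_doppler = 650, -1750.0         # chips, Hz (as in Figure 2)

# Received signal: delayed code, rotated by a residual Doppler carrier, plus noise.
received = np.roll(code, true_delay) * np.exp(2j * np.pi * true_doppler * t)
received += rng.normal(0.0, 0.5, n) + 1j * rng.normal(0.0, 0.5, n)

# Search the Doppler / code-delay grid; the largest correlation marks the signal.
best_power, est_delay, est_doppler = 0.0, None, None
for fd in np.arange(-5000.0, 5000.0 + 1.0, 250.0):
    wiped = received * np.exp(-2j * np.pi * fd * t)      # Doppler wipe-off
    # Circular correlation over all code delays at once, via FFT.
    corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * np.conj(np.fft.fft(code))))
    d = int(corr.argmax())
    if corr[d] > best_power:
        best_power, est_delay, est_doppler = corr[d], d, float(fd)
```

A real acquisition engine adds detection thresholds (to decide whether the peak is statistically significant), non-coherent integration over several code periods, and finer Doppler bins; the grid-and-peak structure, however, is the same.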

In the tracking loops, correlations are also used to refine the local replica generation, so as to match the incoming signal as closely as possible. The correlation results are then used to drive the different tracking loops and to provide a measure of tracking quality. Typically, the receiver tracks each signal using dedicated channels running in parallel, where each channel tracks one signal (i.e. for single frequency users, each channel tracks one satellite), providing pseudorange and phase measurements, as well as navigation data and additional signal information, such as carrier-to-noise ratio (C/N0). For details on the signal processing blocks, see the signal processing section.
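As a toy illustration of how correlators support tracking (Python/NumPy; a random ±1 code and noiseless, integer-chip delays for clarity), a delay lock loop commonly compares Early, Prompt and Late correlator outputs: an early-minus-late power discriminator is zero when the replica is aligned, and its sign indicates the direction of the needed correction.

```python
import numpy as np

rng = np.random.default_rng(3)
code = rng.choice([-1.0, 1.0], size=1023)       # stand-in PRN code

def epl_correlate(received, replica_delay, spacing=1):
    """Early / Prompt / Late correlator outputs for a replica delay (in chips)."""
    early = np.dot(received, np.roll(code, replica_delay - spacing))
    prompt = np.dot(received, np.roll(code, replica_delay))
    late = np.dot(received, np.roll(code, replica_delay + spacing))
    return early, prompt, late

# Noiseless incoming signal with a 500-chip code delay, for clarity.
received = np.roll(code, 500)

# Aligned replica: circular autocorrelation is symmetric, so Early equals Late
# and the early-minus-late power discriminator is exactly zero.
e, p, l = epl_correlate(received, 500)
disc_aligned = e**2 - l**2

# Replica one chip early: Late now sits on the correlation peak, so the
# discriminator goes negative, indicating the replica delay must be increased.
e, p, l = epl_correlate(received, 499)
disc_early = e**2 - l**2
```

In a working loop this discriminator output is filtered and fed back to the replica code generator each integration period; noise, fractional-chip delays and carrier dynamics make the real design considerably richer.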

After correctly tracking the signals and returning the measurements and data to the application processing block, the receiver uses the information from the tracking loops for different purposes: from computing its own position and velocity, to performing time transfer, or simply collecting data to be post-processed at ground stations. In addition to processing the SIS, GNSS receivers may also use aiding information to enhance their solution performance, and there are various architectural solutions for incorporating it. In fact, this information can potentially be used at any block of the receiver: as an example, when using Inertial Navigation Systems (INS), the sensor information is commonly used in the application processing block, although it could also be fed back to the baseband processing block for improved performance. For a wider discussion on application specifics, see GNSS Applications.
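To illustrate the position computation step, the sketch below (Python/NumPy, with hypothetical satellite positions and a simulated receiver state; no real measurements) solves for receiver position and clock bias from four pseudoranges by iterative least squares:

```python
import numpy as np

# Hypothetical satellite positions (metres, ECEF) and a simulated receiver state.
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])
true_pos = np.array([-41.77e3, -16.79e3, 6370.0e3])  # near the Earth's surface
true_bias = 85e3                                     # clock bias, in metres

# Simulated pseudoranges: geometric range plus the common clock bias term.
pseudoranges = np.linalg.norm(sats - true_pos, axis=1) + true_bias

# Gauss-Newton iteration for the state [x, y, z, clock_bias].
state = np.zeros(4)
for _ in range(10):
    ranges = np.linalg.norm(sats - state[:3], axis=1)
    residuals = pseudoranges - (ranges + state[3])
    # Geometry matrix: unit vectors from satellites toward the user, clock column.
    H = np.hstack([-(sats - state[:3]) / ranges[:, None], np.ones((4, 1))])
    state += np.linalg.lstsq(H, residuals, rcond=None)[0]
```

With more than four satellites the same least-squares formulation simply becomes overdetermined; real solutions also model atmospheric delays, satellite clock corrections and measurement weighting.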

Trade-offs and limitations

The design and selection of a receiver type is tightly linked to the target user application: for example, a multi-constellation GNSS receiver will certainly improve solution availability (critical, for instance, in urban environments), whereas if the user application is focused on improved accuracy, the selected receiver will probably turn to carrier-based technologies, or to differential and augmented solutions. This choice is usually the result of trade-off analyses that take into account several (and often related) factors, such as target application, performance, accuracy, power consumption, and cost.

Owing to the characteristics of radionavigation systems, space-to-Earth electromagnetic wave propagation, and receiver architectures, different receivers yield different accuracies and performance levels, and their specifications can often be difficult to compare. As of today, there is still ongoing debate on the qualification and quantification of error measurements, receiver accuracy, and how they relate[3]. Nevertheless, pushed by the emergence of new services aimed at professional and safety-of-life users, standardization activities have already been launched at European level (CEN, CENELEC and ETSI), at global level (e.g. ICAO Standards and Recommended Practices) and at industry level (e.g. RTCA and EUROCAE MOPS/MASPS). One example is the EGNOS Safety-of-Life signal certification for aviation[4].

Related articles

References