7.2 Linear Predictive Techniques


7.2.1 Signal Models

We next refine the model of (7.1) to account more completely for the structure of the useful data signal {S(t)} and of the narrowband interference {I(t)}. It is the exploitation of such structure that has led to many of the improvements in NBI suppression that have been developed in the past decade.

Let us, then, reconsider the model (7.1) and examine its components in more detail. (These components are assumed throughout to be independent of one another.) We first consider the useful data signal {S(t)}. In this chapter we treat primarily the case in which this signal is a multiuser, linearly modulated, digital communications signal in the real baseband, which can be written more explicitly as (see also Chapter 2)

Equation 7.2

$$ S(t) = \sum_{k=1}^{K} A_k \sum_{i=0}^{M-1} b_k[i]\, s_k(t - iT - \tau_k) $$


where K is the number of active (wideband) users in the channel, M is the number of symbols per user in the data frame of interest, b_k[i] is the ith binary (±1) symbol transmitted by user k, A_k > 0 and τ_k are the respective amplitude and delay with which user k's signal is received, s_k(t) is user k's normalized ($\int_0^T s_k^2(t)\,dt = 1$) transmitted waveform, and 1/T is the per-user symbol rate. It is also assumed that the support of s_k(t) is completely within the interval [0, T]. The signaling waveforms are assumed to be direct-sequence spread-spectrum signals of the form

Equation 7.3

$$ s_k(t) = \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} c_{j,k}\, \psi(t - jT_c), \qquad 0 \le t \le T $$


where N, {c_{0,k}, c_{1,k}, ..., c_{N−1,k}}, and 1/T_c are the respective spreading ratio, binary (±1) spreading code, and chip rate of the spread-spectrum signal {s_k(t)}; and where ψ(t) is a unit-energy pulse of duration T_c.

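To make the signal model concrete, the following sketch (in Python with NumPy) generates chip-rate samples of the multiuser signal in (7.2)–(7.3) for chip-synchronous users (all delays τ_k = 0) with rectangular chip pulses. The number of users, frame length, spreading ratio, amplitudes, and random codes are arbitrary illustrative choices, not values from the text; the 1/√N scaling implements the unit-energy normalization of s_k(t).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the text)
K, M, N = 3, 100, 31                       # users, symbols per frame, spreading ratio
A = np.array([1.0, 0.5, 0.7])              # received amplitudes A_k

codes = rng.choice([-1.0, 1.0], size=(K, N)) / np.sqrt(N)   # normalized spreading codes c_{j,k}
bits = rng.choice([-1.0, 1.0], size=(K, M))                  # data symbols b_k[i]

# Chip-rate samples of S(t) in (7.2)-(7.3): each symbol b_k[i] modulates one
# period of user k's spreading code (rectangular chip pulse, tau_k = 0).
S = np.zeros(M * N)
for k in range(K):
    S += A[k] * np.kron(bits[k], codes[k])
```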
It should be noted that this model accounts for asynchrony and slow fading but not for other possible channel features and impairments, such as multipath, dispersion, carrier offsets, multiple antennas, aperiodic spreading codes, fast fading, higher-order signaling, and so on. All of these phenomena can be incorporated into a more general model for a linearly modulated signal in the complex baseband:

Equation 7.4

$$ S(t) = \sum_{k=1}^{K} \sum_{i=0}^{M-1} b_k[i]\, f_{i,k}(t) $$


in which the symbols {b_k[i]} are complex and f_{i,k}(t) is the (possibly vector-valued) waveform received from user k in the ith symbol interval. Here, the collection of waveforms {f_{i,k}(t): i = 0, 1, ..., M−1; k = 1, 2, ..., K} contains all information about the signaling waveforms transmitted by the users and all information about the channels intervening between the users and the receiver (see Chapter 1). Many of the results discussed in this chapter can be transferred directly to this more general model, although we will not always explicitly mention such generalizations.

It is also of interest to model more explicitly the narrowband interference signal {I(t)} appearing in (7.1). Here, we can consider three basic types of NBI: tonal signals, narrowband digital communication signals, and entropic narrowband stochastic processes. Tonal signals are those that consist of the sum of pure sinusoidal signals. These signals are useful for modeling tone jammers and other harmonic interference phenomena. Narrowband digital communication signals generalize tonal signals to include digitally modulated carriers. This leads to signals with nonzero-bandwidth components, and as we will see in the sequel, the digital signaling structure can be exploited to improve the NBI suppression capability. Less structure can be assumed by modeling the NBI as entropic narrowband stochastic processes such as narrowband autoregressions. Such processes do not have specific deterministic structure. Typical models that can be used in this framework are ideal narrowband processes (with brick-wall spectra) or processes generated by linear stochastic models. Further discussion of the details of these models is deferred until they arise in the following sections. Finally, for convenience, we will assume almost exclusively that the ambient noise {N(t)} is a white Gaussian process, although in the following section we mention briefly the situation in which this noise may have impulsive components.

As noted in Section 7.1, narrowband signals can be suppressed from wideband signals by exploiting the difference in predictability between these two types of signals. In this section we develop this idea in more detail. To focus on this issue, we consider the specific situation of (7.1)–(7.2), in which there is only a single spread-spectrum signal in the channel (i.e., K = 1). It is also useful to convert the continuous-time signal of (7.1) to discrete time by passing it through an arrangement of a filter matched to the chip waveform ψ(t), followed by a chip-rate sampler. That is, we convert the signal (7.1) to a discrete-time signal

Equation 7.5

$$ r_n \triangleq \int_{nT_c}^{(n+1)T_c} r(t)\, \psi(t - nT_c)\, dt = c_n + i_n + u_n, \qquad n = 0, 1, \ldots, NM - 1 $$


where {c_n}, {i_n}, and {u_n} represent the converted spread-spectrum data signal, narrowband interferer, and white Gaussian noise, respectively. Note that for the single-user channel (K = 1) and in the absence of NBI, a sufficient statistic for detecting the data bit b_1[i] is the signaling-waveform matched-filter output

Equation 7.6

$$ Z_1[i] = \int_{iT}^{(i+1)T} r(t)\, s_1(t - iT)\, dt $$


which can be written in terms of this sampled signal as

Equation 7.7

$$ Z_1[i] = \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} c_{j,1}\, r_{iN+j} $$


Thus, this conversion to discrete time can be thought of as an intermediate step in the calculation of the sufficient statistic vector and is therefore lossless in the absence of NBI.
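As an illustration of (7.5)–(7.7), the sketch below (Python/NumPy) emulates the chip-matched filter and chip-rate sampler on an oversampled received waveform containing one spread-spectrum user, a tonal narrowband interferer, and white noise, and then despreads the resulting samples; every numerical value (spreading ratio, interferer amplitude and frequency, noise level) is an arbitrary illustrative choice. Without any NBI suppression, the interferer visibly degrades detection.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative single-user (K = 1) setup; all values are arbitrary.
N, M, ns, Tc = 31, 200, 8, 1.0             # spreading ratio, symbols, samples/chip, chip duration
code = rng.choice([-1.0, 1.0], size=N) / np.sqrt(N)
bits = rng.choice([-1.0, 1.0], size=M)

dt = Tc / ns
psi = np.ones(ns) / np.sqrt(Tc)            # unit-energy rectangular chip pulse

# Emulated received waveform r(t) = S(t) + I(t) + N(t)
chips = np.kron(bits, code)                # A_1 b_1[i] c_{j,1} with A_1 = 1
S = np.kron(chips, psi)
t = np.arange(S.size) * dt
I = 2.0 * np.cos(2 * np.pi * 0.02 * t / Tc)     # narrowband (tonal) interferer
r = S + I + 0.5 * rng.standard_normal(S.size)

# Chip-matched filter + chip-rate sampler, cf. (7.5)
r_n = np.array([np.dot(r[n * ns:(n + 1) * ns], psi) * dt for n in range(M * N)])

# Despreading, cf. (7.7), and detection without NBI suppression
Z = r_n.reshape(M, N) @ code
print("bit error rate without NBI suppression:", np.mean(np.sign(Z) != bits))
```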

Narrowband interference suppression in this type of signal can be based on the following idea. Since the spread-spectrum signal has a nearly flat spectrum, it cannot be predicted accurately from its past values (unless, of course, we were to make use of our knowledge of the spreading code, as discussed in Section 7.4). On the other hand, the interfering signal, being narrowband, can be predicted accurately. Hence, a prediction of the received signal based on previously received values is, in effect, an estimate of the narrowband interfering signal. Thus, by subtracting from each received sample the prediction formed from previously received samples and using the resulting prediction error as the input to the matched filter (7.7), the effect of the interfering signal can be reduced. In such a scheme the signal {r_n} is replaced in the matched filter (7.7) by the prediction residual $\{r_n - \hat{r}_n\}$, where $\hat{r}_n$ denotes the prediction of the received signal at time n, and the data detection scheme becomes

Equation 7.8

$$ \hat{b}_1[i] = \operatorname{sgn}\!\left( \sum_{j=0}^{N-1} c_{j,1}\, \bigl( r_{iN+j} - \hat{r}_{iN+j} \bigr) \right) $$


7.2.2 Linear Predictive Methods

This technique for narrowband interference suppression has been explored in detail through the use of fixed and adaptive linear predictors (e.g., [17, 19, 20, 37, 199, 209, 210, 221, 228, 235, 253, 255, 310, 312, 327, 346, 366, 451, 480]; see [6, 244, 327, 332] for reviews). Two basic architectures for fixed linear predictors are Kalman–Bucy predictors, based on a state-space model for the interference, and finite-impulse-response (FIR) linear predictors, based on a tapped-delay-line structure.

Kalman–Bucy Predictors

To use Kalman–Bucy prediction (cf. [235]) in this application, it is useful to model the narrowband interference as a pth-order Gaussian autoregressive [AR(p)] process:

Equation 7.9

$$ i_n = \sum_{j=1}^{p} \phi_j\, i_{n-j} + e_n $$


where {e_n} is a white Gaussian sequence with $e_n \sim \mathcal{N}(0, \sigma_e^2)$, and where the AR parameters φ_1, φ_2, ..., φ_p are assumed to be constant or slowly varying.

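A brief numerical sketch (Python/NumPy) may help illustrate why such a model is natural here: an AR(2) process with poles near the unit circle is narrowband and can be predicted from its own past almost perfectly, whereas an i.i.d. ±1 chip sequence cannot. The pole radius, center frequency, and driving-noise level below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(2) interference model, cf. (7.9): poles at radius rho and normalized
# frequency f0 (values chosen only for illustration).
rho, f0, sigma_e = 0.99, 0.05, 0.1
phi = np.array([2 * rho * np.cos(2 * np.pi * f0), -rho ** 2])   # [phi_1, phi_2]

n_samp = 5000
nbi = np.zeros(n_samp)
for n in range(2, n_samp):
    nbi[n] = phi[0] * nbi[n - 1] + phi[1] * nbi[n - 2] + sigma_e * rng.standard_normal()

# One-step prediction of the narrowband process from its own past is accurate...
nbi_hat = phi[0] * nbi[1:-1] + phi[1] * nbi[:-2]
print("NBI prediction-error power / NBI power:",
      np.mean((nbi[2:] - nbi_hat) ** 2) / np.mean(nbi ** 2))

# ...whereas a +/-1 chip sequence gains nothing from linear prediction
# (the best linear predictor of an i.i.d. sequence is simply zero).
c = rng.choice([-1.0, 1.0], size=n_samp)
c_hat = phi[0] * c[1:-1] + phi[1] * c[:-2]
print("chip prediction-error power / chip power:",
      np.mean((c[2:] - c_hat) ** 2) / np.mean(c ** 2))
```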
Under this model, the received discrete-time signal (7.5) has a state-space representation as follows (assuming one spread-spectrum user, i.e., K = 1):

Equation 7.10

$$ \mathbf{x}_n = \boldsymbol{\Phi}\, \mathbf{x}_{n-1} + \mathbf{w}_n $$

Equation 7.11

$$ r_n = \mathbf{H}\, \mathbf{x}_n + v_n, \qquad v_n \triangleq c_n + u_n $$

where

Equation 7.12

$$ \mathbf{x}_n \triangleq \begin{bmatrix} i_n \\ i_{n-1} \\ \vdots \\ i_{n-p+1} \end{bmatrix}, \qquad \boldsymbol{\Phi} \triangleq \begin{bmatrix} \phi_1 & \phi_2 & \cdots & \phi_{p-1} & \phi_p \\ 1 & 0 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}, \qquad \mathbf{w}_n \triangleq e_n\, \mathbf{e}_1 $$

with

$$ \mathbf{H} \triangleq [\, 1 \;\; 0 \;\; \cdots \;\; 0 \,] $$

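The quantities in (7.10)–(7.12) can be assembled mechanically from the AR parameters. A minimal sketch (Python/NumPy; the coefficients in the example call are illustrative only):

```python
import numpy as np

def ar_state_space(phi, sigma_e):
    """Companion-form state-space model (7.10)-(7.12) for an AR(p) interferer.

    phi     : AR coefficients [phi_1, ..., phi_p]
    sigma_e : standard deviation of the driving noise e_n
    Returns Phi (p x p), H (length-p observation row), Q (p x p).
    """
    p = len(phi)
    Phi = np.zeros((p, p))
    Phi[0, :] = phi                  # first row carries the AR coefficients
    Phi[1:, :-1] = np.eye(p - 1)     # sub-diagonal shifts the state down by one
    H = np.zeros(p)
    H[0] = 1.0                       # r_n observes only the current i_n (plus c_n + u_n)
    Q = np.zeros((p, p))
    Q[0, 0] = sigma_e ** 2           # covariance of w_n = e_n e_1
    return Phi, H, Q

# Example with illustrative AR(2) coefficients
Phi, H, Q = ar_state_space(np.array([2 * 0.99 * np.cos(2 * np.pi * 0.05), -0.99 ** 2]), 0.1)
```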

Given this state-space formalism, the linear minimum mean-square-error (MMSE) prediction of the received signal (and hence of the interference) can be computed recursively via the Kalman–Bucy filtering equations (e.g., [377]), which predict the nth observation r_n as $\hat{r}_n = \mathbf{H}\hat{\mathbf{x}}_{n|n-1}$, where $\hat{\mathbf{x}}_{n|n-1}$ denotes the one-step prediction of the state in (7.10)–(7.11), given recursively through the update equations

Equation 7.13

$$ \hat{\mathbf{x}}_{n|n} = \hat{\mathbf{x}}_{n|n-1} + \frac{\mathbf{M}_n \mathbf{H}^T}{\sigma_n^2} \bigl( r_n - \mathbf{H}\, \hat{\mathbf{x}}_{n|n-1} \bigr) $$

Equation 7.14

$$ \hat{\mathbf{x}}_{n+1|n} = \boldsymbol{\Phi}\, \hat{\mathbf{x}}_{n|n} $$


with $\sigma_n^2 \triangleq \mathbf{H}\mathbf{M}_n\mathbf{H}^T + \sigma_v^2$ denoting the variance of the prediction residual $r_n - \mathbf{H}\hat{\mathbf{x}}_{n|n-1}$ (where $\sigma_v^2$ is the variance of the observation term $v_n$), and where the matrix $\mathbf{M}_n$ (which is the covariance of the state prediction error $\mathbf{x}_n - \hat{\mathbf{x}}_{n|n-1}$) is computed via the recursion

Equation 7.15

$$ \boldsymbol{\Sigma}_n = \mathbf{M}_n - \frac{1}{\sigma_n^2}\, \mathbf{M}_n \mathbf{H}^T \mathbf{H} \mathbf{M}_n $$

Equation 7.16

$$ \mathbf{M}_{n+1} = \boldsymbol{\Phi}\, \boldsymbol{\Sigma}_n\, \boldsymbol{\Phi}^T + \mathbf{Q} $$


with

Equation 7.17

$$ \mathbf{Q} \triangleq E\{ \mathbf{w}_n \mathbf{w}_n^T \} = \sigma_e^2\, \mathbf{e}_1 \mathbf{e}_1^T $$


where $\mathbf{e}_1$ denotes a p-vector whose entries are all zero except for the first, which is 1. The Kalman–Bucy prediction-based NBI suppression algorithm based on the state-space model (7.10)–(7.11) is summarized as follows; a code sketch is given after the algorithm. (Note that it is assumed that the model parameters are known.)

Algorithm 7.1: [Kalman–Bucy prediction-based NBI suppression] At time i, N received samples {r_{iN}, r_{iN+1}, ..., r_{iN+N−1}} are obtained at the chip-matched filter output (7.5).

  • For n = iN, iN + 1, ..., iN + N - 1 perform the following steps:

    Equation 7.18

    $$ \sigma_n^2 = \mathbf{H} \mathbf{M}_n \mathbf{H}^T + \sigma_v^2 $$

    Equation 7.19

    $$ \epsilon_n = r_n - \mathbf{H}\, \hat{\mathbf{x}}_{n|n-1} $$

    Equation 7.20

    $$ \hat{\mathbf{x}}_{n|n} = \hat{\mathbf{x}}_{n|n-1} + \frac{\mathbf{M}_n \mathbf{H}^T}{\sigma_n^2}\, \epsilon_n $$

    Equation 7.21

    $$ \hat{\mathbf{x}}_{n+1|n} = \boldsymbol{\Phi}\, \hat{\mathbf{x}}_{n|n} $$

    Equation 7.22

    $$ \boldsymbol{\Sigma}_n = \mathbf{M}_n - \frac{1}{\sigma_n^2}\, \mathbf{M}_n \mathbf{H}^T \mathbf{H} \mathbf{M}_n $$

    Equation 7.23

    $$ \mathbf{M}_{n+1} = \boldsymbol{\Phi}\, \boldsymbol{\Sigma}_n\, \boldsymbol{\Phi}^T + \mathbf{Q} $$


  • Detect the ith bit b_1[i] according to

    Equation 7.24

    $$ \hat{b}_1[i] = \operatorname{sgn}\!\left( \sum_{j=0}^{N-1} c_{j,1}\, \epsilon_{iN+j} \right) $$

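The following is a compact sketch of Algorithm 7.1 in Python/NumPy. As in the algorithm statement, the AR model (and hence Φ and Q) and the variance σ_v² of the observation term c_n + u_n are assumed known; the zero initial state prediction and identity initial covariance are arbitrary illustrative choices. It could be applied directly to the chip-rate samples produced by the earlier signal-generation sketch.

```python
import numpy as np

def kalman_nbi_detect(r, code, Phi, Q, sigma_v2):
    """Kalman-Bucy prediction-based NBI suppression (sketch of Algorithm 7.1).

    r        : chip-rate received samples r_n, cf. (7.5), length M*N
    code     : normalized spreading code of user 1, length N
    Phi, Q   : state-space matrices of the AR interference model (7.10)-(7.12)
    sigma_v2 : variance of the observation term v_n = c_n + u_n
    Returns detected bits and the prediction residuals.
    """
    p = Phi.shape[0]
    N = code.size
    M = r.size // N
    H = np.zeros(p); H[0] = 1.0
    x_pred = np.zeros(p)                 # state prediction
    Mn = np.eye(p)                       # prediction-error covariance M_n
    eps = np.zeros_like(r)
    for n in range(M * N):
        sigma_n2 = H @ Mn @ H + sigma_v2                  # residual variance, cf. (7.18)
        eps[n] = r[n] - H @ x_pred                        # prediction residual, cf. (7.19)
        x_filt = x_pred + (Mn @ H) / sigma_n2 * eps[n]    # measurement update, cf. (7.20)
        x_pred = Phi @ x_filt                             # time update, cf. (7.21)
        Sigma = Mn - np.outer(Mn @ H, H @ Mn) / sigma_n2  # cf. (7.22)
        Mn = Phi @ Sigma @ Phi.T + Q                      # cf. (7.23)
    # Despread the residuals rather than the raw samples, cf. (7.24)
    bits_hat = np.sign(eps.reshape(M, N) @ code)
    return bits_hat, eps
```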

Linear FIR Predictor

The Kalman–Bucy filter is, of course, an infinite-impulse-response (IIR) filter. A simpler linear structure is a tapped-delay-line (TDL) configuration, which makes one-step predictions via the FIR filter

Equation 7.25

$$ \hat{r}_n = \sum_{j=1}^{L} a_j\, r_{n-j} $$


where L is the data length used by the predictor and a_1, a_2, ..., a_L are tap weights. In the stationary case, the tap weights can be chosen optimally via the Levinson algorithm (see, e.g., [377]). More important, though, the FIR structure (7.25) can easily be adapted using, for example, the least-mean-squares (LMS) algorithm (e.g., [463]). Denote by $\mathbf{r}_n \triangleq [r_n, r_{n-1}, \ldots, r_{n-L+1}]^T$ the vector of the L most recent samples. Let $\mathbf{a}[n]$ denote the tap-weight vector to be applied at the nth chip sample (i.e., to predict r_{n+1}), and let $\epsilon_n \triangleq r_n - \mathbf{a}[n-1]^T \mathbf{r}_{n-1}$ denote the corresponding prediction error. Then the predictor coefficients can be updated according to

Equation 7.26

$$ \mathbf{a}[n] = \mathbf{a}[n-1] + \mu\, \epsilon_n\, \mathbf{r}_{n-1} $$


where μ is a tuning constant. Although the Kalman–Bucy filter can also be adapted, the ease and stability with which the FIR structure can be adapted make it a useful choice for this application. To make the choice of tuning constant invariant to changes in the input signal levels, the LMS algorithm (7.26) can be normalized as follows:

Equation 7.27

$$ \mathbf{a}[n] = \mathbf{a}[n-1] + \frac{\mu}{p_n}\, \epsilon_n\, \mathbf{r}_{n-1} $$


where p_n is an estimate of the input power obtained by

Equation 7.28

$$ p_n = (1 - \mu)\, p_{n-1} + \mu\, r_n^2 $$


The estimate p_n of the signal power is thus an exponentially weighted estimate. The constant μ is chosen small enough to ensure convergence, and the initial condition p_0 should be large enough that the denominator in (7.27) never becomes so small that the resulting step size destabilizes the adaptation.

A block diagram of a TDL-based linear predictor is shown in Fig. 7.4. The LMS linear prediction-based NBI suppression algorithm is summarized as follows; a code sketch is given after the algorithm.

Figure 7.4. Tapped-delay-line linear predictor.

Algorithm 7.2: [LMS linear prediction-based NBI suppression] At time i, N received samples {r_{iN}, r_{iN+1}, ..., r_{iN+N−1}} are obtained at the chip-matched filter output (7.5).

  • For n = iN, iN + 1, ... , iN + N - 1 perform the following steps:

    Equation 7.29

    $$ \epsilon_n = r_n - \mathbf{a}[n-1]^T\, \mathbf{r}_{n-1} $$

    Equation 7.30

    $$ p_n = (1 - \mu)\, p_{n-1} + \mu\, r_n^2 $$

    Equation 7.31

    $$ \mathbf{a}[n] = \mathbf{a}[n-1] + \frac{\mu}{p_n}\, \epsilon_n\, \mathbf{r}_{n-1} $$


  • Detect the ith bit b_1[i] according to

    Equation 7.32

    $$ \hat{b}_1[i] = \operatorname{sgn}\!\left( \sum_{j=0}^{N-1} c_{j,1}\, \epsilon_{iN+j} \right) $$

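A corresponding sketch of Algorithm 7.2 in Python/NumPy, using the normalized update of (7.27); the predictor length, step size, and initial power estimate are illustrative tuning choices rather than values from the text.

```python
import numpy as np

def lms_nbi_detect(r, code, L=10, mu=0.05, p0=10.0):
    """Normalized-LMS linear prediction-based NBI suppression (sketch of Algorithm 7.2).

    r    : chip-rate received samples r_n, cf. (7.5)
    code : normalized spreading code of user 1, length N
    L    : predictor length; mu : step size; p0 : initial power estimate
    """
    N = code.size
    M = r.size // N
    a = np.zeros(L)                      # tap-weight vector a[n]
    p = p0                               # exponentially weighted input-power estimate
    eps = np.zeros_like(r)
    for n in range(r.size):
        past = r[max(0, n - L):n][::-1]              # [r_{n-1}, ..., r_{n-L}]
        past = np.pad(past, (0, L - past.size))      # zero-pad before the first samples
        eps[n] = r[n] - a @ past                     # prediction residual, cf. (7.25)
        p = (1 - mu) * p + mu * r[n] ** 2            # power estimate (assumed form of (7.28))
        a = a + (mu / p) * eps[n] * past             # normalized LMS update, cf. (7.27)
    # Despread the residuals rather than the raw samples, cf. (7.32)
    return np.sign(eps.reshape(M, N) @ code), eps
```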

Performance and convergence analyses of these types of linear predictor–subtractor systems have shown that considerable signal-to-interference-plus-noise ratio (SINR) improvement can be obtained by these methods. (See the above-cited references and the results in Section 7.4.) Linear interpolation filters can also be used in this context, leading to further improvements in SINR and to better phase characteristics compared with linear prediction filters (e.g., [311]). For example, a simple linear interpolator of order L_1 + L_2 for estimating r_n is given by

Equation 7.33

$$ \hat{r}_n = \sum_{\substack{j=-L_1 \\ j \neq 0}}^{L_2} a_j\, r_{n-j} $$


where a_{−L_1}, ..., a_{L_2} (excluding a_0) are tap weights. Such an interpolator can be adapted similarly via the LMS algorithm.

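Along the same lines, a sketch of an LMS-adapted two-sided interpolator as in (7.33) is given below; the filter orders, step size, and initial power estimate are again arbitrary illustrative choices.

```python
import numpy as np

def lms_interpolation_residual(r, L1=5, L2=5, mu=0.02, p0=10.0):
    """Residuals of an LMS-adapted interpolator of order L1 + L2, cf. (7.33) (sketch).

    Each r_n is estimated from L1 past and L2 future samples (the n-th tap is
    excluded), and the interpolation residuals r_n - r_hat_n are returned.
    """
    a = np.zeros(L1 + L2)
    p = p0
    eps = np.zeros_like(r)
    for n in range(L1, r.size - L2):
        # neighbors [r_{n-L1}, ..., r_{n-1}, r_{n+1}, ..., r_{n+L2}]
        x = np.concatenate((r[n - L1:n], r[n + 1:n + 1 + L2]))
        eps[n] = r[n] - a @ x                    # interpolation residual
        p = (1 - mu) * p + mu * r[n] ** 2        # exponentially weighted power estimate
        a = a + (mu / p) * eps[n] * x            # LMS update of the interpolator taps
    return eps
```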

