9.5 Adaptive SMC Receivers for Flat-Fading Channels


In this section we describe an adaptive receiver technique for signal reception and decoding in flat-fading channels based on a Bayesian formulation of the problem and the sequential Monte Carlo filtering technique outlined in Chapter 8. The techniques presented in this section were first developed in [76]. The basic idea is to treat the transmitted signals as missing data and to impute multiple samples of them sequentially based on the current observation. The importance weight for each of the imputed signal sequences is computed according to its relative ability in predicting the future observation. Then the imputed signal sequences, together with their importance weights, can be used to approximate the Bayesian estimates of the transmitted signals and the fading coefficients of the channel. The novel features of such an approach include the following:

  • The algorithm is self-adaptive and no training/pilot symbols or decision feedback is needed.

  • The tracking of fading channels and the estimation of data symbols are naturally integrated.

  • The ambient channel noise can be either Gaussian or impulsive.

  • If the system employs channel coding, the coded signal structure can easily be exploited to substantially improve the accuracy of both channel and data estimation.

  • The resulting receiver structure exhibits massive parallelism and is ideally suited for high-speed parallel implementation using VLSI systolic array technology.

9.5.1 System Description

We consider a channel-coded communication system signaling through a flat-fading channel with additive ambient noise. The block diagram of such a system is shown in Fig. 9.12. The input binary information bits { d t } are encoded using a channel code, resulting in a code-bit stream { b t }. The code bits are passed to a symbol mapper, yielding complex data symbols { s t } which take values from a finite alphabet A = { a 1 , . . ., a A }. Each symbol is then transmitted through a flat-fading channel whose input-output relationship is given by

Figure 9.12. Coded communication system signaling through a flat-fading channel.

graphics/09fig12.gif

Equation 9.81

graphics/09equ081.gif


where y t , a t , s t , and n t are the received signal, fading channel coefficient, transmitted symbol, and ambient additive noise at time t , respectively. The processes { a t }, { s t }, and { n t } are assumed to be mutually independent. Note that, although (9.81) is exactly the same flat-fading channel model as (9.19), here we use a different notation for the sake of notational clarity in describing the sequential Monte Carlo algorithms later.

It is assumed that the additive noise { n t } is a sequence of i.i.d. zero-mean complex random variables . In this section we consider two types of noise distributions. In the first type, n t assumes a complex Gaussian distribution,

Equation 9.82

graphics/09equ082.gif


whereas in the second type, n t follows a two-term mixture Gaussian distribution,

Equation 9.83

graphics/09equ083.gif


where 0 < ε < 1 and graphics/529fig01.gif . Here, as in Chapters 4 and 8, the term graphics/529fig02.gif represents the nominal ambient noise and the term graphics/529fig03.gif represents an impulsive component. The probability that impulses occur is ε . Note that the overall variance of the noise is graphics/529fig04.gif .
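Since the displayed equations are images, the following sketch of the two-term mixture model (9.83) uses generic parameter names: eps for the mixing probability ε, nu2 for the nominal noise variance, and kappa for the variance ratio of the impulsive term. These names are placeholders rather than the book's notation.

```python
import math
import random

def mixture_noise(n, eps, nu2, kappa, rng):
    """Draw n samples of two-term mixture complex Gaussian noise, cf. (9.83):
    with probability 1 - eps the sample is CN(0, nu2) (nominal ambient noise),
    with probability eps it is CN(0, kappa * nu2) (impulsive component).
    A CN(0, v) variate has independent real/imaginary parts of variance v/2."""
    samples = []
    for _ in range(n):
        v = kappa * nu2 if rng.random() < eps else nu2
        s = math.sqrt(v / 2.0)
        samples.append(complex(rng.gauss(0.0, s), rng.gauss(0.0, s)))
    return samples
```

The overall noise variance is then (1 - eps) * nu2 + eps * kappa * nu2, which can be checked empirically against a large sample.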

It is further assumed that the channel-fading process is Rayleigh; that is, the fading coefficients { a t } form a complex zero-mean Gaussian process. Moreover, to model the temporal behavior of the fading, we assume that it can be represented by the output of a lowpass Butterworth filter of order r driven by white Gaussian noise,

Equation 9.84

graphics/09equ084.gif


where D is the back-shift operator graphics/529fig05.gif ;

Equation 9.85

graphics/09equ085.gif


Equation 9.86

graphics/09equ086.gif


and { u t } is a white complex Gaussian noise sequence with unit variance and independent real and imaginary components. The coefficients { f i } and { q i }, as well as the order r of the Butterworth filter, are chosen so that the transfer function of the filter matches the power spectral density of the fading process, which in turn is determined by the channel Doppler frequency. In this section we assume that the statistical properties of the fading process are known a priori . Consequently, the order and coefficients of the Butterworth filter in (9.84) are known.
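As an illustration, an ARMA(3,3) process of the form (9.84) can be simulated as below. The AR and MA coefficients here are hypothetical stand-ins chosen to be stable by construction (AR roots 0.5 and 0.65 ± 0.42j); they are not the Butterworth coefficients of the text, which appear only as images.

```python
import math
import random

# Illustrative ARMA(3,3) fading model in the spirit of (9.84). These
# coefficients are hypothetical placeholders, NOT the text's Butterworth design.
PHI = (1.8, -1.25, 0.3)       # a_t regressed on a_{t-1}, a_{t-2}, a_{t-3}
THETA = (1.0, 3.0, 3.0, 1.0)  # lowpass-like MA part acting on u_t, ..., u_{t-3}
GAIN = 0.01                   # scales the driving noise

def simulate_fading(n, rng):
    """Generate n fading coefficients a_t driven by unit-variance complex
    white Gaussian noise u_t (independent real/imaginary parts)."""
    a = [0j, 0j, 0j]          # zero initial conditions
    u = [0j, 0j, 0j]
    for _ in range(n):
        u.append(complex(rng.gauss(0, math.sqrt(0.5)),
                         rng.gauss(0, math.sqrt(0.5))))
        new = sum(p * x for p, x in zip(PHI, a[-1:-4:-1])) \
            + GAIN * sum(q * x for q, x in zip(THETA, u[-1:-5:-1]))
        a.append(new)
    return a[3:]
```

Because all AR roots lie inside the unit circle, the generated process settles into a stationary, temporally correlated (lowpass) fading sequence.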

We next write system (9.81) and (9.84) in state-space form, which is instrumental in developing the adaptive Bayesian receiver. Define

Equation 9.87

graphics/09equ087.gif


Denote graphics/529fig06.gif . By (9.84) we then have

Equation 9.88

graphics/09equ088.gif


where

graphics/529equ01.gif

Because of (9.87), the fading coefficient sequence { a t } can be written as

Equation 9.89

graphics/09equ089.gif


If the additive noise in (9.81) is Gaussian [i.e., n t ~ N c (0, s 2 )], we have the following state-space model for the system defined by (9.81) and (9.84):

Equation 9.90

graphics/09equ090.gif


Equation 9.91

graphics/09equ091.gif


where { n t } in (9.91) is a white complex Gaussian noise sequence with unit variance and independent real and imaginary components.

On the other hand, if the additive noise in (9.81) is impulsive and is modeled by (9.83) as in Chapter 8, we introduce an indicator random variable I t , t = 0, 1, . . . :

Equation 9.92

graphics/09equ092.gif


with P ( I t = 1) = ε and P ( I t = 2) = 1 - ε . Because n t is an i.i.d. sequence, so is I t . We then have the state-space signal model for this case given by

Equation 9.93

graphics/09equ093.gif


Equation 9.94

graphics/09equ094.gif


We now look at the problem of online estimation of the symbol s t and the channel coefficient a t based on the received signals up to time graphics/530fig01.gif . Consider the simple case when the ambient channel noise is Gaussian and the symbols are independent and uniformly distributed a priori [i.e., P ( s t = a i ) = 1/ A ]. Then the problem becomes one of making Bayesian inferences with respect to the posterior distribution

Equation 9.95

graphics/09equ095.gif


For example, an online symbol estimation can be obtained from the marginal posterior distribution p ( s t | y 0 , . . ., y t ), and an online channel state estimation can be obtained from the marginal posterior distribution p ( x t | y 0 , . . ., y t ). Although the joint distribution (9.95) can be written out explicitly up to a normalizing constant, computation of the corresponding marginal distributions involves very high dimensional integration and is infeasible in practice. An effective approach to this problem is the sequential Monte Carlo filtering technique discussed in Chapter 8.

9.5.2 Adaptive Receiver in Fading Gaussian Noise Channels: Uncoded Case

MKF-Based Sequential Monte Carlo Receiver

Consider the flat-fading channel with additive Gaussian noise given by (9.90) and (9.91). Denote graphics/531fig01.gif and graphics/531fig02.gif . We first consider the case of an uncoded system, where the transmitted symbols are assumed to be independent:

Equation 9.96

graphics/09equ096.gif


When no prior information about the symbols is available, the symbols are assumed to take each possible value in A with equal probability [i.e., P ( s t = a i ) = 1/ A for i = 1, . . ., A ]. We are interested in estimating the symbol s t and the channel coefficient a t = h H x t at time t based on the observation Y t . The Bayesian solution to this problem requires the posterior distribution

Equation 9.97

graphics/09equ097.gif


Note that with a given S t , the state-space model (9.90)-(9.91) becomes a linear Gaussian system. Hence,

Equation 9.98

graphics/09equ098.gif


where the mean m t ( S t ) and covariance matrix S t ( S t ) can be obtained by a Kalman filter with the given S t .

To implement the MKF, we need to obtain a set of Monte Carlo samples of the transmitted symbols, graphics/531fig03.gif , properly weighted with respect to the distribution p ( S t | Y t ). Then for any integrable function h ( x t , s t ), we can approximate the quantity of interest E { h ( x t , s t ) | Y t } as follows:

Equation 9.99

graphics/09equ099.gif


Equation 9.100

graphics/09equ100.gif


Equation 9.101

graphics/09equ101.gif


with

Equation 9.102

graphics/09equ102.gif


where (9.99) follows from (9.97); (9.100) follows from (9.98); and in (9.100), f ( ·; m , S ) denotes a complex Gaussian density function with mean m and covariance matrix S . In particular, the MMSE channel estimate is given by

Equation 9.103

graphics/09equ103.gif


In other words, we can let graphics/532fig01.gif , implying that graphics/532fig02.gif in (9.100). Moreover, the a posteriori symbol probability can be estimated as

Equation 9.104

graphics/09equ104.gif


where δ ( ·) is an indicator function such that δ ( s t = a i ) = 1 if s t = a i , and δ ( s t = a i ) = 0 otherwise. This corresponds to having graphics/532fig03.gif and graphics/532fig04.gif δ ( s t = a i ).

Note that a hard decision on the symbol s t is obtained by

Equation 9.105

graphics/09equ105.gif


When MPSK signals are transmitted [i.e., a i = exp ( j 2 π i / A ) for i = 0, . . ., A - 1, where graphics/532fig05.gif ], the estimated symbol ŝ t may have a phase ambiguity. For instance, for BPSK signals, s t ∈ {+1, -1}. It is easily seen from (9.81) that if both the symbol sequence { s t } and the channel value sequence { a t } are phase-shifted by π (resulting in {- s t } and {- a t }, respectively), no change is incurred in the observed signal { y t }. Alternatively, in the state-space model (9.90)-(9.91), a phase shift of π on both the symbol sequence { s t } and the state sequence { x t } yields the same model for the observations. Hence such a phase ambiguity necessitates the use of differential encoding and decoding.
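A minimal illustration of how differential encoding and decoding resolve the BPSK sign ambiguity: the decoded bits depend only on products of consecutive symbols, which are invariant under a global sign flip of the transmitted sequence.

```python
def diff_encode(bits):
    """Map information bits in {+1, -1} to differentially encoded BPSK symbols;
    an arbitrary reference symbol +1 is prepended."""
    s = [1]
    for b in bits:
        s.append(s[-1] * b)
    return s

def diff_decode(symbols):
    """Recover the bits from products of consecutive symbols."""
    return [symbols[t] * symbols[t - 1] for t in range(1, len(symbols))]
```

Flipping the sign of every symbol (the phase shift by π above) leaves diff_decode unchanged, which is exactly why the ambiguity is harmless after differential encoding.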

Hereafter, we let graphics/532fig06.gif , and graphics/532fig07.gif . By applying the MKF techniques outlined in Section 8.3 to the flat-fading channel system, we describe the following algorithm for generating properly weighted Monte Carlo samples graphics/532fig08.gif .

Algorithm 9.5: [SMC for adaptive detection in flat-fading channels: Gaussian noise]

  • Initialization: Each Kalman filter is initialized as graphics/532fig09.gif , with graphics/532fig10.gif , j = 1, . . ., m, where S is the stationary covariance of x t and is computed analytically from (9.87). (The factor 2 is to accommodate the initial uncertainty.) All importance weights are initialized as graphics/533fig01.gif , j = 1, ..., m. Since the data symbols are assumed to be independent, initial symbols are not needed.

    Based on the state-space model (9.90)-(9.91), the following steps are implemented at time t to update each weighted sample. For j = 1, ..., m:

  • Compute the one-step predictive update of each Kalman filter graphics/533fig02.gif :

    Equation 9.106

    graphics/09equ106.gif


    Equation 9.107

    graphics/09equ107.gif


    Equation 9.108

    graphics/09equ108.gif


  • Compute the trial sampling density: For each a i ∈ A compute

    Equation 9.109

    graphics/09equ109.gif


    where (9.109) holds because s t is independent of S t -1 and Y t -1 . Furthermore, we observe that

    Equation 9.110

    graphics/09equ110.gif


  • Impute the symbol s t : Draw graphics/533fig03.gif from the set A with probability

    Equation 9.111

    graphics/09equ111.gif


    Append graphics/533fig03.gif to graphics/533fig04.gif and obtain graphics/533fig05.gif .

  • Compute the importance weight:

    Equation 9.112

    graphics/09equ112.gif


    where (9.112) follows from (9.109).

  • Compute the one-step filtering update of the Kalman filter graphics/533fig02.gif : Based on the imputed symbol graphics/533fig03.gif and the observation y t , complete the Kalman filter update to obtain graphics/534fig01.gif , as follows:

    Equation 9.113

    graphics/09equ113.gif


    Equation 9.114

    graphics/09equ114.gif


  • Perform resampling according to Algorithm 8.9 when graphics/534fig02.gif in (8.103) is below a threshold.
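The steps above can be sketched in Python for the special case of a first-order (scalar) fading state, so that each mixture-Kalman filter reduces to scalar recursions. The model parameters PHI, G, and SIGMA2, the BPSK alphabet, and the resampling threshold of m/2 are illustrative assumptions, not values from the text.

```python
import math
import random

PHI = 0.999                    # hypothetical scalar (AR(1)) fading coefficient
G = math.sqrt(1.0 - PHI ** 2)  # driving-noise gain, so that Var{a_t} = 1
SIGMA2 = 0.01                  # hypothetical ambient noise variance
ALPHABET = (1, -1)             # BPSK symbols

def cgauss_pdf(y, mean, var):
    """Complex Gaussian CN(mean, var) density at y."""
    return math.exp(-abs(y - mean) ** 2 / var) / (math.pi * var)

def smc_step(particles, y, rng):
    """One time step of Algorithm 9.5. Each particle is a list [mu, P, w, s]:
    filtered channel mean and variance (the scalar Kalman statistics),
    importance weight, and the last imputed symbol."""
    for p in particles:
        mu, P, w, _ = p
        mu_pred = PHI * mu                   # predictive update, cf. (9.106)-(9.108)
        P_pred = PHI ** 2 * P + G ** 2
        var_y = P_pred + SIGMA2              # predictive variance of y_t given s_t
        rho = [cgauss_pdf(y, a * mu_pred, var_y) for a in ALPHABET]  # cf. (9.109)
        tot = sum(rho)
        if tot <= 0.0:                       # numerical underflow guard
            rho, tot = [1.0] * len(ALPHABET), float(len(ALPHABET))
        s = rng.choices(ALPHABET, weights=rho)[0]   # impute s_t, cf. (9.111)
        K = P_pred / var_y
        p[0] = mu_pred + K * s * (y - s * mu_pred)  # filtering update, cf. (9.113)
        p[1] = P_pred * SIGMA2 / var_y              # cf. (9.114)
        p[2] = w * tot                              # weight update, cf. (9.112)
        p[3] = s
    wmax = max(p[2] for p in particles)
    for p in particles:                      # rescale weights to avoid overflow
        p[2] /= wmax
    tw = sum(p[2] for p in particles)
    ess = tw ** 2 / sum(p[2] ** 2 for p in particles)  # effective sample size
    if ess < 0.5 * len(particles):           # resample, in the spirit of Algorithm 8.9
        chosen = rng.choices(particles, weights=[p[2] for p in particles],
                             k=len(particles))
        particles[:] = [[q[0], q[1], 1.0, q[3]] for q in chosen]
```

A concurrent symbol estimate then follows from (9.104): weight each particle's imputed s_t by its importance weight and normalize.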

The correctness of Algorithm 9.5 is stated by the following result, whose proof is given in the Appendix (Section 9.6.1).

Proposition 9.1: The samples graphics/534fig03.gif drawn by Algorithm 9.5 are properly weighted with respect to p ( S t | Y t ), provided that graphics/534fig04.gif are proper at time ( t - 1).

Algorithm 9.5 is depicted in Fig. 9.13. It is seen that at any time t , the only quantities that need to be stored are graphics/534fig05.gif . At each time t , the dominant computation in this receiver involves the m one-step Kalman filter updates. Since the m samplers operate independently and in parallel, such a sequential Monte Carlo receiver is well suited for massively parallel implementation using VLSI systolic array technology [241].

Figure 9.13. Adaptive Bayesian receiver for flat-fading Gaussian noise channels based on mixture Kalman filtering.

graphics/09fig13.gif

9.5.3 Delayed Estimation

Since the fading process is highly correlated, the future received signals contain information about current data and channel state. A delayed estimate is usually more accurate than the concurrent estimate. This is true for any channel with memory and is especially prominent when the transmitted symbols are coded, in which case not only the channel states but also the data symbols are highly correlated. In delayed estimation, instead of making an inference on ( x t , s t ) instantaneously with the posterior distribution p ( x t , s t | Y t ), we delay this inference to a later time ( t + D ), D ≥ 0, with the distribution p ( x t , s t | Y t + D ). Here we discuss two methods for delayed estimation: the delayed-weight method and the delayed-sample method.

Delayed-Weight Method

From Algorithm 9.5 we note by induction that if the set graphics/534fig06.gif is properly weighted with respect to p ( S t | Y t ), the set graphics/534fig07.gif is properly weighted with respect to p ( S t + d | Y t + d ), d > 0. Hence, if we focus our attention on S t at time ( t + d ) and let h ( x t , s t ) = δ ( s t = a i ) as in (9.104), we obtain a delayed estimate of the symbol:

Equation 9.115

graphics/09equ115.gif


Since the weights graphics/536fig01.gif contain information about the signals ( y t+1 , ..., y t + d ), the estimate in (9.115) is usually more accurate. Note that such a delayed estimation method incurs no additional computation, but it requires extra memory for storing graphics/536fig02.gif . As will be seen in the simulation examples, for uncoded systems this simple delayed-weight method is quite effective for improving the detection performance over the concurrent method. However, for coded systems, this method is not sufficient for exploiting the constraint structures of both the channel and the symbols, and we must resort to the delayed-sample method, which is described next.
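The delayed-weight estimate (9.115) amounts to rescoring the stored time-t symbol draws with the importance weights available at time t + d. A sketch, with generic argument names:

```python
def delayed_weight_estimate(symbols_at_t, weights_at_t_plus_d, alphabet):
    """Approximate P(s_t = a | Y_{t+d}) as in (9.115): the j-th stored sample
    of s_t is scored with the j-th importance weight computed at time t + d,
    then the scores are normalized over the alphabet."""
    total = sum(weights_at_t_plus_d)
    return {a: sum(w for s, w in zip(symbols_at_t, weights_at_t_plus_d) if s == a) / total
            for a in alphabet}
```

This is why the method costs no extra computation: only the time-t samples must be kept in memory until the weights at time t + d become available.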

Delayed-Sample Method

An alternative method is to generate both the delayed samples and the weights graphics/536fig03.gif based on the signals Y t + D , hence making p ( S t | Y t + D ) the target distribution at time ( t + D ). This procedure will provide better Monte Carlo samples since it uses future information ( y t+1 , ..., y t + D ) in generating the current sample of s t . But the algorithm is also more demanding both analytically and computationally because of the need to marginalize out s t+d for d = 1, ..., D .

For each possible "future" symbol sequence at time t + D - 1 [i.e., ( s t , s t +1 , ..., s t + D -1 ) ∈ A D (a total of A D possibilities)], we keep the value of a D -step Kalman filter graphics/536fig04.gif , where

graphics/536equ01.gif

with graphics/536fig05.gif . Denote

graphics/536equ02.gif

The following is the delayed-sample algorithm for adaptive detection in flat-fading channels with Gaussian noise.

Algorithm 9.6: [Delayed-sample SMC algorithm for adaptive detection in flat-fading channels: Gaussian noise]

  • Initialization: Each Kalman filter is initialized as graphics/536fig06.gif , with graphics/536fig07.gif and graphics/536fig08.gif where S is the stationary covariance of x t . All importance weights are initialized as graphics/537fig01.gif , j = 1, ..., m. Since the data symbols are assumed to be independent, initial symbols are not needed.

    At time ( t + D ), we perform the following updates for j = 1, ..., m to propagate from the sample graphics/537fig02.gif , properly weighted for p ( S t -1 | Y t + D -1 ), to that for p ( S t | Y t + D ).

  • Compute the one-step predictive update for each of the A D Kalman filters: For each graphics/537fig03.gif , perform the update on the Kalman filter graphics/537fig04.gif , according to equations (9.106)-(9.108) to obtain graphics/537fig05.gif and graphics/537fig06.gif . (Here we make it explicit that these quantities are functions of graphics/537fig07.gif .)

  • Compute the trial sampling density: For each a i ∈ A , compute

    Equation 9.116

    graphics/09equ116.gif


  • Impute the symbol s t : Draw graphics/537fig08.gif with probability

    Equation 9.117

    graphics/09equ117.gif


    Append graphics/537fig08.gif to graphics/537fig09.gif and obtain graphics/537fig08.gif .

  • Compute the importance weight:

    Equation 9.118

    graphics/09equ118.gif


    Equation 9.119

    graphics/09equ119.gif


    where

    Equation 9.120

    graphics/09equ120.gif


  • Compute the one-step filtering update for each of the A D Kalman filters: Using the values of graphics/537fig08.gif and y t + D , for each graphics/538fig01.gif perform a one-step filtering update on the Kalman filter graphics/538fig02.gif according to (9.113)-(9.114) to obtain

    graphics/538equ01.gif

    With this and the subset of graphics/538fig03.gif corresponding to the sample graphics/537fig08.gif , which has been obtained in the previous iteration, form the new filter class graphics/538fig04.gif .

  • Perform resampling according to Algorithm 8.9 when graphics/538fig05.gif in (8.103) is below a threshold.

The dominant computation of the delayed-sample method above at each time t involves the ( m A D ) one-step Kalman filter updates, which, as before, can be carried out in parallel. Finally, we note that we can use the delayed-sample method in conjunction with the delayed-weight method. For example, using the delayed-sample method, we generate delayed samples and weights graphics/538fig06.gif based on the signals Y t + D . Then with an additional delay d , we can use the following delayed-weight method to estimate the symbol a posteriori probability:

Equation 9.121

graphics/09equ121.gif


Simulation Examples

The fading process is modeled as the output of a Butterworth filter of order r = 3 driven by a complex white Gaussian noise process. The cutoff frequency of this filter is 0.05, corresponding to a normalized Doppler frequency (with respect to the symbol rate 1/ T ) f d T = 0.05, which is a fast-fading scenario. Specifically, the sequence of fading coefficients { a t } is modeled by the following ARMA(3,3) process:

Equation 9.122

graphics/09equ122.gif


where u t ~ N c (0, 1). The filter coefficients in (9.122) are chosen such that Var{ a t } = 1. It is assumed that BPSK modulation is employed (i.e., the transmitted symbols s t ∈ {+1, -1}).

To demonstrate the high performance of the Monte Carlo adaptive receiver, in the following simulation examples we compare the performance (in terms of bit-error rate) of the Monte Carlo receivers with that of the following three receiver schemes:

  • Known channel lower bound: In this case we assume that the fading coefficients { a t } are known to the receiver. Then by (9.81), the optimal coherent detection rule is given by graphics/539fig01.gif for both the Gaussian noise case (9.82) and the impulsive noise case (9.83).

  • Genie-aided lower bound: In this case we assume that a genie provides the receiver with an observation of the modulation-free channel coefficient corrupted by additive noise with the same variance [i.e., graphics/539fig02.gif , where graphics/539fig03.gif for the Gaussian noise case and graphics/539fig04.gif for the impulsive noise case]. In the case of impulsive noise, the genie also provides the receiver with the noise indicator I t . The receiver then uses a Kalman filter to track the fading process based on the information provided by the genie (i.e., it computes graphics/539fig05.gif ). The transmitted symbols are then demodulated according to graphics/539fig01.gif . It is clear that such a genie-aided bound is lower bounded by the known channel bound. It should also be noted that the genie is used only for calculating the lower bound. The SMC algorithms estimate the channel and the symbols simultaneously with no help from the genie.

  • Differential detector: In this case no attempt is made to estimate the fading channel. Instead, the receiver detects the phase difference in two consecutively transmitted bits by using the simple rule of differential detection: graphics/539fig06.gif .
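The differential detection rule referenced above (its exact expression is an image) is, for BPSK, the standard decision on the sign of Re{ y t y* t-1 }:

```python
def differential_detect(y):
    """Standard BPSK differential detection: the t-th decision is the sign of
    Re{y_t * conj(y_{t-1})}; no channel estimate is formed at any point."""
    return [1 if (y[t] * y[t - 1].conjugate()).real >= 0 else -1
            for t in range(1, len(y))]
```

Because the decision statistic involves only the phase difference of consecutive samples, a slowly rotating channel largely cancels out, but the noise enters twice, which is the source of the error floor noted in the simulations.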

We consider the performance of the sequential Monte Carlo receiver in a fading Gaussian noise channel without coding. In this case differential encoding and decoding are employed to resolve the phase ambiguity. The adaptive receiver implements Algorithm 9.5, described in Section 9.5.2. The number of Monte Carlo samples drawn at each time was empirically set as m = 50. Simulation results have shown that the performance did not improve much when m was increased to 100, whereas it degraded notably when m was reduced to 20. Algorithm 8.9 for resampling was employed to maintain the efficiency of the algorithm, in which the effective sample size threshold is graphics/540fig01.gif . The delayed-weight method discussed in Section 9.5.3 was used to extract further information from future received signals, which resulted in improved performance compared with concurrent estimation. In each simulation, the sequential Monte Carlo algorithm was run on 10,000 symbols (i.e., t = 1, ...,10,000). In counting the symbol detection errors, the first 50 symbols were discarded to allow the algorithm to reach steady state. In Fig. 9.14, the bit-error rate performance versus the signal-to-noise ratio (defined as Var{ a t }/Var{ n t }) corresponding to delay values d = 0 (concurrent estimate), d = 1, and d = 2 is plotted. In the figure we also plot the known channel lower bound, the genie-aided lower bound, and the BER curve of the differential detector. From this figure it is seen that for the uncoded case, with only a small amount of delay, the performance of the sequential Monte Carlo receiver can be improved significantly by the delayed-weight method compared with the concurrent estimate. Even with the concurrent estimate, the proposed adaptive receiver does not exhibit an error floor as does the differential detector. Moreover, with a delay d = 2, the proposed adaptive receiver essentially achieves the genie-aided lower bound.
We have also implemented the delayed-sample method for this case and found that it offers little improvement over the delayed-weight method.

Figure 9.14. BER performance of a sequential Monte Carlo receiver in a fading channel with Gaussian noise and without coding. The delayed-weight method is used. The BER curves corresponding to delays d = 0, d = 1, and d = 2 are shown. Also shown are BER curves for the known channel lower bound, genie-aided lower bound, and differential detector.

graphics/09fig14.gif

9.5.4 Adaptive Receiver in Fading Gaussian Noise Channels: Coded Case

So far we have considered the problem of detecting uncoded independent symbols in flat-fading channels. In what follows we extend the adaptive receiver technique presented in Section 9.5.2 and address the problem of sequential decoding of information bits in a convolutionally coded system signaling through a flat-fading channel.

Consider a binary rate k / n convolutional encoder of overall constraint length v . Suppose that the encoder starts with an all-zero state at time t = 0. The input to the encoder at time t is a block of information bits d t = ( d t ,1 , ..., d t,k ); the encoder output at time t is a block of code bits b t = ( b t ,1 , ..., b t,n ). For simplicity here we assume that BPSK modulation is employed. Then the transmitted symbols at time t are s t = ( s t ,1 , ..., s t,n ), where s t,l = 2 b t,l - 1, l = 1, ..., n . (That is, s t,l = 1 if b t,l = 1, and s t,l = -1 if b t,l = 0.) Since b t is determined by ( d t , d t-1 , ..., d t-v ), so is s t . Hence we can write

Equation 9.123

graphics/09equ123.gif


for some function ψ ( ·) which is determined by the structure of the encoder.
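As a concrete instance of the map (9.123), the sketch below implements a rate-1/2 feedforward encoder with the generators used in the simulations of this section (23 and 25 in octal, constraint length 5); the register ordering (newest input bit in the high-order position) is an assumed convention.

```python
GENERATORS = (0o23, 0o25)  # rate-1/2, constraint-length-5 generators (octal)

def encode_step(d, state):
    """One step of the feedforward encoder: d is the new input bit, state holds
    the previous 4 input bits (newest in the high-order position). Returns the
    code bits b_t, the BPSK symbols s_t = 2*b_t - 1, and the next state."""
    reg = (d << 4) | state
    bits = [bin(reg & g).count("1") % 2 for g in GENERATORS]  # parity taps
    symbols = [2 * b - 1 for b in bits]                       # BPSK map
    return bits, symbols, reg >> 1

def encode(d_seq):
    """Encode a bit sequence starting from the all-zero state, as in the text."""
    state, out = 0, []
    for d in d_seq:
        bits, _, state = encode_step(d, state)
        out.extend(bits)
    return out
```

Since the encoder is feedforward, the map is linear over GF(2): the encoding of the XOR of two input sequences equals the XOR of their encodings.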

Let y t = ( y t ,1 , ..., y t,n ) be the received signals at time t and let a t = ( a t ,1 , ..., a t,n ) be the channel states corresponding to b t and d t . Recall that a t -1, n = h H x t -1, n . Denote also graphics/541fig01.gif , and graphics/541fig02.gif . The Monte Carlo samples recorded at time ( t -1) are graphics/541fig03.gif , where graphics/541fig04.gif contains the mean and covariance matrix of the state vector channel x t -1, n conditioned on graphics/541fig05.gif and Y t -1 . That is,

Equation 9.124

graphics/09equ124.gif


As before, given the information bit sequence graphics/541fig05.gif , the corresponding graphics/541fig06.gif is obtained by a Kalman filter. Our algorithm is as follows.

Algorithm 9.7: [SMC algorithm for adaptive decoding in flat-fading channels: Gaussian noise]

  • Initialization: Each Kalman filter is initialized as graphics/541fig07.gif , with graphics/541fig08.gif and graphics/541fig09.gif , where S is the stationary covariance of x t . All importance weights are initialized as graphics/542fig01.gif , j = 1, ..., m. The initial graphics/542fig02.gif are randomly generated from the set {0,1} k , j = 1, ..., m.

    At time t we implement the following steps to update each sample j, j = 1, ..., m .

  • Compute the n -step update of the Kalman filter: For each possible code vector d t = a i ∈ {0,1} k , compute the corresponding symbol vector s t using (9.123) to obtain

    Equation 9.125

    graphics/09equ125.gif


    Let graphics/542fig03.gif and graphics/542fig04.gif . Perform n steps of the Kalman filter update, using graphics/542fig05.gif and y t , as follows: For l = 1, ..., n , compute

    Equation 9.126

    graphics/09equ126.gif


    Equation 9.127

    graphics/09equ127.gif


    Equation 9.128

    graphics/09equ128.gif


    Equation 9.129

    graphics/09equ129.gif


    Equation 9.130

    graphics/09equ130.gif


    In (9.125)-(9.130) it is made explicit that the quantity on the left side of each equation is a function of the code-bit vector a i . This yields graphics/542fig06.gif and graphics/542fig07.gif for each a i ∈ {0,1} k .

  • Compute the trial sampling density: For each a i ∈ {0,1} k , compute

    Equation 9.131

    graphics/09equ131.gif


    Equation 9.132

    graphics/09equ132.gif


    Equation 9.133

    graphics/09equ133.gif


    where (9.132) follows from the fact that d t is independent of D t -1 and Y t -1 .

  • Impute the code bit vector d t : Draw graphics/543fig01.gif from the set {0,1} k with probability

    Equation 9.134

    graphics/09equ134.gif


    Append graphics/543fig01.gif to graphics/543fig02.gif and obtain graphics/543fig03.gif . Pick the updated Kalman filter values graphics/543fig04.gif and graphics/543fig05.gif from the results in the first step, according to the value of the sample graphics/543fig01.gif . This yields graphics/543fig06.gif .

  • Compute the importance weight:

    Equation 9.135

    graphics/09equ135.gif


    where (9.135) follows from (9.131).

  • Perform resampling according to Algorithm 8.9 when graphics/543fig07.gif in (8.103) is below a threshold.

Following the same line of proof as in Section 9.6.1, it can be shown that graphics/543fig08.gif drawn by the procedure above are properly weighted with respect to p ( D t | Y t ) provided that the samples graphics/543fig09.gif are properly weighted with respect to p ( D t -1 | Y t -1 ). Note that in the coded case, phase ambiguity is prevented by the code constraint (9.123), and differential encoding is not needed.

At each time t , the major computation involved in the adaptive decoding algorithm above is the ( m n 2 k ) one-step Kalman filter updates, which can be carried out by ( m 2 k ) processing units, each computing an n -step update. (Note that d t contains k bits of information.) Furthermore, if the delayed-sample method outlined in Section 9.5.3 is employed for delayed estimation, then for a delay of D time units, a total of ( m n 2 k ( D +1) ) one-step Kalman filter updates are needed at each time t , which can be distributed among ( m 2 k ( D +1) ) processing units, each computing an n -step update.

Simulation Examples

We next show the performance of the proposed sequential Monte Carlo receiver in a coded system. The information bits are encoded using a rate- graphics/601fig01.gif constraint-length-5 convolutional code (with generators 23 and 25 in octal notation). The receiver implements Algorithm 9.7 with a combination of delayed-sample and delayed-weight methods. That is, the information bit samples graphics/544fig01.gif are drawn by using the delayed-sample method with delay D , whereas the importance weights graphics/544fig02.gif are obtained after a further delay of d . The coded BER performance of this adaptive receiver with different delays, together with that of the known channel lower bound, the genie-aided lower bound, and the differential detector, is plotted in Fig. 9.15. It is seen that unlike the uncoded case, for coded systems the delayed-sample method is very effective in improving receiver performance. With a sample delay of D = 5 and weight delay d = 10, the receiver performance is close to the genie-aided lower bound.

Figure 9.15. BER performance of a sequential Monte Carlo receiver in a fading channel with Gaussian noise for a convolutionally coded system. The convolutional code has rate graphics/601fig01.gif and constraint length 5. A combination delayed-sample (with delay D ), delayed-weight (with delay d ) method is used. The BER curves corresponding to delays D = 1, D = 3, and D = 5 are shown. Also shown are BER curves for the known channel lower bound, genie-aided lower bound, and differential detector.

graphics/09fig15.jpg

9.5.5 Adaptive Receivers in Fading Impulsive Noise Channels

As noted in Chapter 4, the ambient noise in many mobile communication channels is impulsive, due to various natural and human-made impulsive sources. In [546], a technique is developed for signal detection in fading channels with impulsive noise based on the Masreliez nonlinear filter [309] (cf. Section 7.3.1), making use of pilot symbols and decision feedback. In this section we discuss an adaptive receiver for flat-fading channels with impulsive ambient noise using the sequential Monte Carlo technique.

As in the case of Gaussian noise fading channels, we first develop adaptive receivers for uncoded systems. Consider the state-space system given by (9.93)-(9.94). Note that given both the symbol sequence graphics/544fig03.gif and the noise indicator sequence graphics/544fig04.gif , this system is linear and Gaussian. Hence,

Equation 9.136

graphics/09equ136.gif


where the mean m t ( S t , I t ) and the covariance matrix S t ( S t , I t ) can be obtained by a Kalman filter with given S t and I t . As before, we seek to obtain properly weighted samples graphics/544fig05.gif with respect to the distribution p ( S t , I t | Y t ). These samples are then used to estimate the transmitted symbols and channel parameters.

Algorithm 9.8: [SMC algorithm for adaptive detection in flat-fading channels: impulsive noise]

  • Initialization: This step is the same as that in the Gaussian case. Note that no initial values for graphics/544fig06.gif are needed due to independence.

    At time t, the following updates are implemented for each sample j, j = 1, ..., m.

  • Compute the one-step predictive update of the Kalman filter graphics/546fig01.gif :

    Equation 9.137

    graphics/09equ137.gif


    Equation 9.138

    graphics/09equ138.gif


    Equation 9.139

    graphics/09equ139.gif


    Conditioned on graphics/546fig02.gif and graphics/546fig03.gif , the predictive distribution is then given by

    Equation 9.140

    graphics/09equ140.gif


  • Compute the trial sampling density: For each (a, d) ∈ A × {1, 2}, compute

    Equation 9.141

    graphics/09equ141.gif


  • Impute the symbol and the noise indicator (s_t, I_t): Draw graphics/546fig04.gif from the set A × {1, 2} with probability

    Equation 9.142

    graphics/09equ142.gif


    Append (s_t^(j), I_t^(j)) to (S_{t-1}^(j), I_{t-1}^(j)) to obtain (S_t^(j), I_t^(j)).

  • Compute the importance weight:

    Equation 9.143

    graphics/09equ143.gif


    where (9.143) follows from (9.141).

  • Compute the one-step filtering update of the Kalman filter: Based on the imputed symbol and indicator graphics/546fig05.gif and the observation y t , complete the Kalman filter update to obtain graphics/546fig07.gif according to (9.113) and (9.114) with graphics/546fig08.gif .

  • Perform resampling according to Algorithm 8.9 when the effective sample size graphics/547fig01.gif in (8.103) is below a threshold.
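The resampling test in the last step can be sketched as follows. The effective sample size estimate 1/Σ_j (w^(j))² (for normalized weights) is the standard quantity behind (8.103); the function names and the plain multinomial resampling scheme used here are illustrative assumptions rather than the exact Algorithm 8.9.

```python
import numpy as np

def effective_sample_size(w):
    """Estimate the effective sample size of a set of importance weights."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                 # normalize weights
    return 1.0 / np.sum(w ** 2)

def resample_if_needed(particles, w, threshold, seed=None):
    """Multinomial resampling, triggered only when ESS drops below threshold."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    if effective_sample_size(w) >= threshold:
        return particles, w         # weights still healthy; leave unchanged
    rng = np.random.default_rng(seed)
    m = len(w)
    idx = rng.choice(m, size=m, p=w)  # draw m streams proportional to weight
    return [particles[i] for i in idx], np.full(m, 1.0 / m)
```

After resampling, all streams carry equal weight 1/m, which counteracts the weight degeneracy that otherwise concentrates all mass on a few trajectories.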

The proof that Algorithm 9.8 gives properly weighted samples is similar to that for the Gaussian fading channels in Section 9.6.1. The dominant computation involved in the algorithm at each time t consists of m one-step Kalman filter updates. If the delayed-sample method is employed for delayed estimation with a delay of D time units, then at each time t, m(2|A|)^D one-step Kalman filter updates are needed, because |A × {1, 2}| = 2|A|; these updates can be implemented in parallel.

Moreover, we can also develop the adaptive receiver algorithm for coded systems in impulsive noise flat-fading channels, similar to the one discussed in Section 9.5.4. For a rate-k/n convolutional code, if the delayed-sample method is used with a delay of D time units, then at each time t a total of mn·2^((k+n)(D+1)) one-step Kalman filter updates are needed, which can be distributed among m·2^((k+n)(D+1)) processors, each computing one n-step update. (With a delay of D units, there are 2^(k(D+1)) possible code vectors and 2^(n(D+1)) possible noise indicator vectors.)
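To make these operation counts concrete, the following sketch evaluates the complexity formulas above for hypothetical parameters (QPSK with |A| = 4, m = 50 samples, delay D = 1, and a rate-1/2 code with k = 1, n = 2); the numbers are illustrative only.

```python
# Hypothetical parameters for illustration: QPSK (|A| = 4), m = 50 samples,
# delayed-sample delay D = 1, rate-1/2 convolutional code (k = 1, n = 2).
A, m, D, k, n = 4, 50, 1, 1, 2

uncoded_gaussian  = m * A ** D                        # m |A|^D
uncoded_impulsive = m * (2 * A) ** D                  # m (2|A|)^D
coded_impulsive   = m * n * 2 ** ((k + n) * (D + 1))  # mn 2^((k+n)(D+1))
parallel_units    = m * 2 ** ((k + n) * (D + 1))      # m 2^((k+n)(D+1))

print(uncoded_gaussian, uncoded_impulsive, coded_impulsive, parallel_units)
# -> 200 400 6400 3200
```

Even for these modest parameters, the coded impulsive-noise receiver needs 6400 one-step Kalman updates per symbol interval, which is why the massive parallelism of the algorithm matters in practice.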

Simulation Examples

The uncoded BER performance of the proposed adaptive receiver, together with that of the other three receiver schemes, in a fading channel with impulsive ambient noise is shown in Fig. 9.16. The noise distribution is given by the two-term Gaussian mixture model (9.83) with κ = 100 and ε = 0.1. As mentioned earlier, in this case the genie-aided bound assumes that the genie provides not only the observation of the noise-corrupted modulation-free channel coefficients but also the true noise indicator sequence {I_t} to the channel estimator. It is seen from this figure that, again, the delayed-weight method offers significant improvement over the concurrent estimate, although in this case the BER curve for d = 2 is slightly off the genie-aided lower bound. Furthermore, the proposed adaptive receiver does not exhibit the error floor of the simple differential detector.

Figure 9.16. BER performance of a sequential Monte Carlo receiver in a fading channel with impulsive noise and without coding. ε = 0.1, κ = 100. The delayed-weight method is used. BER curves corresponding to delays d = 0, d = 1, and d = 2 are shown. Also shown are BER curves for the known-channel lower bound, the genie-aided lower bound, and the differential detector.

graphics/09fig16.gif
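The impulsive ambient noise in this simulation can be generated directly from the two-term mixture. The sketch below assumes the common parameterization of (9.83), in which a sample is drawn from N(0, ν²) with probability 1 - ε and from the heavy-tailed term N(0, κν²) with probability ε; the function name and defaults are illustrative, with ε = 0.1 and κ = 100 matching the simulation above.

```python
import numpy as np

def impulsive_noise(size, v2=1.0, eps=0.1, kappa=100.0, seed=None):
    """Draw samples from a two-term Gaussian mixture impulsive noise model.

    Assumed parameterization: (1 - eps) N(0, v2) + eps N(0, kappa * v2).
    Returns the noise samples and the noise indicator (True = impulsive term).
    """
    rng = np.random.default_rng(seed)
    impulse = rng.random(size) < eps               # noise indicator I_t
    var = np.where(impulse, kappa * v2, v2)        # per-sample variance
    return rng.normal(0.0, np.sqrt(var)), impulse
```

Note that the total noise variance is (1 - ε)ν² + εκν² (here 10.9 ν²), so even a 10% impulsive component dominates the noise power, which is what defeats receivers designed under a purely Gaussian assumption.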

In summary, in this section we have discussed adaptive receiver algorithms for both uncoded and coded systems, where the delayed-weight method, the delayed-sample method, and a combination of both are employed to improve estimation accuracy. The Monte Carlo receiver techniques can also handle impulsive ambient noise. The computational complexities of the various algorithms discussed in this chapter are summarized in Table 9.1. Note that although the delayed-sample SMC estimation method offers a significant performance gain over the simple SMC method, it has a higher computational complexity. In [542], a number of alternative delayed estimation methods based on SMC are developed which trade off performance and complexity. Finally, note that the adaptive SMC receivers developed in this section require knowledge of the second-order statistics of the fading process. In [165], a nonparametric SMC receiver was developed that is based on wavelet modeling of the fading process and does not require knowledge of channel statistics.

Table 9.1. Computational Complexities of the Proposed Sequential Monte Carlo Receiver Algorithms under Different Conditions, in Terms of the Number of One-Step Kalman Filter Updates Needed at Each Time t. [a]

                    Uncoded System                       Coded System
  Noise       Complexity    Parallelism       Complexity           Parallelism
  Gaussian    m|A|^D        m|A|^D            mn·2^(k(D+1))        m·2^(k(D+1))
  Impulsive   m(2|A|)^D     m(2|A|)^D         mn·2^((k+n)(D+1))    m·2^((k+n)(D+1))

[a] The degree of parallelism refers to the maximum number of computing units that can be employed to implement the algorithm in parallel. It is assumed that the delayed-sample method is used with a delay of D time units. The number of samples drawn at each time is m. For an uncoded system, the cardinality of the symbol alphabet is |A|. For a coded system, a rate-k/n convolutional code is used. The impulsive noise is modeled by a two-term Gaussian mixture.



Wireless Communication Systems: Advanced Techniques for Signal Reception
ISBN: 0137020805
Year: 2003