In this section we describe an adaptive receiver technique for signal reception and decoding in flat-fading channels, based on a Bayesian formulation of the problem and the sequential Monte Carlo filtering technique outlined in Chapter 8. The techniques presented in this section were first developed in [76]. The basic idea is to treat the transmitted signals as missing data and to impute multiple samples of them sequentially based on the current observation. The importance weight for each imputed signal sequence is computed according to its relative ability to predict the future observation. The imputed signal sequences, together with their importance weights, can then be used to approximate the Bayesian estimates of the transmitted signals and the fading coefficients of the channel. The novel features of such an approach include the following: it applies to both Gaussian and impulsive ambient noise; it admits delayed estimation methods that substantially improve performance; and its computations can be carried out in parallel.
9.5.1 System Description

We consider a channel-coded communication system signaling through a flat-fading channel with additive ambient noise. The block diagram of such a system is shown in Fig. 9.12. The input binary information bits {d_t} are encoded using a channel code, resulting in a code-bit stream {b_t}. The code bits are passed to a symbol mapper, yielding complex data symbols {s_t} that take values from a finite alphabet A = {a_1, ..., a_|A|}. Each symbol is then transmitted through a flat-fading channel whose input-output relationship is given by

Figure 9.12. Coded communication system signaling through a flat-fading channel.
y_t = a_t s_t + n_t,    (9.81)
where y_t, a_t, s_t, and n_t are the received signal, fading channel coefficient, transmitted symbol, and ambient additive noise at time t, respectively. The processes {a_t}, {s_t}, and {n_t} are assumed to be mutually independent. Note that although (9.81) is exactly the same flat-fading channel model as (9.19), here we use a different notation for clarity in describing the sequential Monte Carlo algorithms later. It is assumed that the additive noise {n_t} is a sequence of i.i.d. zero-mean complex random variables. In this section we consider two types of noise distributions. In the first type, n_t has a complex Gaussian distribution,

n_t ~ N_c(0, σ²),    (9.82)
whereas in the second type, n_t follows a two-term mixture Gaussian distribution,

n_t ~ (1 − ε) N_c(0, σ²) + ε N_c(0, κσ²),    (9.83)
where 0 < ε < 1 and κ > 1. Here, as in Chapters 4 and 8, the term N_c(0, σ²) represents the nominal ambient noise and the term N_c(0, κσ²) represents an impulsive component. The probability that impulses occur is ε. Note that the overall variance of the noise is (1 − ε)σ² + εκσ². It is further assumed that the channel-fading process is Rayleigh; that is, the fading coefficients {a_t} form a complex zero-mean Gaussian process. Moreover, to model the temporal behavior of the fading, we assume that it can be represented by the output of a lowpass Butterworth filter of order r driven by white Gaussian noise,

a_t = [θ(D)/φ(D)] u_t,    (9.84)
where D is the back-shift operator (i.e., D^k u_t ≜ u_{t−k});

φ(D) ≜ 1 + φ_1 D + ... + φ_r D^r;    (9.85)
θ(D) ≜ θ_0 + θ_1 D + ... + θ_r D^r;    (9.86)
and {u_t} is a white complex Gaussian noise sequence with unit variance and independent real and imaginary components. The coefficients {φ_i} and {θ_i}, as well as the order r of the Butterworth filter, are chosen so that the transfer function of the filter matches the power spectral density of the fading process, which in turn is determined by the channel Doppler frequency. In this section we assume that the statistical properties of the fading process are known a priori. Consequently, the order and coefficients of the Butterworth filter in (9.84) are known. We next write the system (9.81) and (9.84) in state-space form, which is instrumental in developing the adaptive Bayesian receiver. Define

z_t ≜ [1/φ(D)] u_t.    (9.87)
Denote x_t ≜ (z_t, z_{t−1}, ..., z_{t−r})^T. By (9.84) we then have

x_t = F x_{t−1} + g u_t,    (9.88)
where g ≜ (1, 0, ..., 0)^T and F is the (r + 1) × (r + 1) companion matrix of the autoregressive part, with first row (−φ_1, ..., −φ_r, 0) and ones on the subdiagonal, so that the first component of x_t obeys φ(D) z_t = u_t and the remaining components are delayed copies.
Because of (9.87), the fading coefficient sequence {a_t} can be written as

a_t = θ(D) z_t = h^H x_t,    (9.89)

where h ≜ (θ_0, θ_1, ..., θ_r)^T (the filter coefficients being real).
If the additive noise in (9.81) is Gaussian [i.e., n_t ~ N_c(0, σ²)], we have the following state-space model for the system defined by (9.81) and (9.84):

x_t = F x_{t−1} + g u_t,    (9.90)
y_t = s_t h^H x_t + σ n_t,    (9.91)
where {n_t} in (9.91) is a white complex Gaussian noise sequence with unit variance and independent real and imaginary components. On the other hand, if the additive noise in (9.81) is impulsive and is modeled by (9.83) as in Chapter 8, we introduce an indicator random variable I_t, t = 0, 1, ...:

I_t = 1 if n_t is drawn from the impulsive term N_c(0, κσ²), and I_t = 2 if n_t is drawn from the nominal term N_c(0, σ²),    (9.92)
with P(I_t = 1) = ε and P(I_t = 2) = 1 − ε. Because {n_t} is an i.i.d. sequence, so is {I_t}. We then have the state-space signal model for this case given by

x_t = F x_{t−1} + g u_t,    (9.93)
y_t = s_t h^H x_t + ν_{I_t} n_t,    (9.94)

where ν_1² ≜ κσ², ν_2² ≜ σ², and {n_t} is again a unit-variance white complex Gaussian sequence.
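Conditioned on the symbols (and, in the impulsive case, the noise indicators), the state-space models above are linear and Gaussian, so the channel state can be tracked exactly by a Kalman filter. The following sketch shows one such conditional update for the Gaussian model (9.90)-(9.91); the matrices F, g, h used at the bottom are small hypothetical placeholders for illustration, not coefficients of any particular Butterworth design.

```python
import numpy as np

def kalman_step(mu, Sigma, y_t, s_t, F, g, h, sigma2):
    """One step of the Kalman filter for the conditionally linear Gaussian model
        x_t = F x_{t-1} + g u_t,       u_t ~ N_c(0, 1),
        y_t = s_t h^H x_t + sigma n_t, n_t ~ N_c(0, 1),
    conditioned on the symbol s_t.  Returns the updated mean and covariance of
    p(x_t | S_t, Y_t), plus the one-step predictive mean and variance of y_t
    (the quantities from which SMC importance weights are computed)."""
    # Time update (prediction).
    mu_pred = F @ mu
    Sigma_pred = F @ Sigma @ F.conj().T + np.outer(g, g.conj())
    # Predictive distribution of y_t given s_t and the past.
    c = s_t * h.conj()                                   # observation row vector
    y_mean = c @ mu_pred
    y_var = np.real(c @ Sigma_pred @ c.conj()) + sigma2
    # Measurement update.
    K = Sigma_pred @ c.conj() / y_var                    # Kalman gain
    mu_new = mu_pred + K * (y_t - y_mean)
    Sigma_new = Sigma_pred - np.outer(K, c @ Sigma_pred)
    return mu_new, Sigma_new, y_mean, y_var

# Hypothetical 2-dimensional state-space parameters (illustration only).
F = np.array([[0.95, 0.10], [0.00, 0.80]], dtype=complex)
g = np.array([1.0, 0.5], dtype=complex)
h = np.array([1.0, 0.2], dtype=complex)
mu, Sigma = np.zeros(2, dtype=complex), np.eye(2, dtype=complex)
mu, Sigma, y_mean, y_var = kalman_step(mu, Sigma, 0.3 + 0.1j, 1.0, F, g, h, 0.1)
```

Note that the predictive variance `y_var` depends on the hypothesized symbol only through its magnitude for PSK constellations; the predictive mean carries the symbol's phase.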
We now look at the problem of online estimation of the symbol s_t and the channel coefficient a_t based on the received signals up to time t. Consider the simple case when the ambient channel noise is Gaussian and the symbols are independent and identically distributed uniformly a priori [i.e., P(s_t = a_i) = 1/|A|]. Then the problem becomes one of making Bayesian inferences with respect to the posterior distribution

p(x_0, ..., x_t, s_0, ..., s_t | y_0, ..., y_t).    (9.95)
For example, an online symbol estimate can be obtained from the marginal posterior distribution p(s_t | y_0, ..., y_t), and an online channel state estimate can be obtained from the marginal posterior distribution p(x_t | y_0, ..., y_t). Although the joint distribution (9.95) can be written out explicitly up to a normalizing constant, computation of the corresponding marginal distributions involves very high dimensional integration and is infeasible in practice. An effective approach to this problem is the sequential Monte Carlo filtering technique discussed in Chapter 8.

9.5.2 Adaptive Receiver in Fading Gaussian Noise Channels: Uncoded Case

MKF-Based Sequential Monte Carlo Receiver

Consider the flat-fading channel with additive Gaussian noise given by (9.90) and (9.91). Denote S_t ≜ (s_0, s_1, ..., s_t) and Y_t ≜ (y_0, y_1, ..., y_t). We first consider the case of an uncoded system, where the transmitted symbols are assumed to be independent:

P(S_t) = ∏_{i=0}^{t} P(s_i).    (9.96)
When no prior information about the symbols is available, the symbols are assumed to take each possible value in A with equal probability [i.e., P(s_t = a_i) = 1/|A| for i = 1, ..., |A|]. We are interested in estimating the symbol s_t and the channel coefficient a_t = h^H x_t at time t based on the observation Y_t. The Bayesian solution to this problem requires the posterior distribution

p(x_t, S_t | Y_t) = p(x_t | S_t, Y_t) p(S_t | Y_t).    (9.97)
Note that with a given S_t, the state-space model (9.90)-(9.91) becomes a linear Gaussian system. Hence,

p(x_t | S_t, Y_t) = N_c(μ_t(S_t), Σ_t(S_t)),    (9.98)
where the mean μ_t(S_t) and covariance matrix Σ_t(S_t) can be obtained by a Kalman filter run with the given S_t. To implement the MKF, we need to obtain a set of Monte Carlo samples of the transmitted symbols, {(S_t^{(j)}, w_t^{(j)})}_{j=1}^{m}, properly weighted with respect to the distribution p(S_t | Y_t). Then for any integrable function h(x_t, s_t), we can approximate the quantity of interest E{h(x_t, s_t) | Y_t} as follows:

E{h(x_t, s_t) | Y_t} = E{ E[h(x_t, s_t) | S_t, Y_t] | Y_t }    (9.99)
= E{ ∫ h(x_t, s_t) f(x_t; μ_t(S_t), Σ_t(S_t)) dx_t | Y_t }    (9.100)
≈ (1/W_t) Σ_{j=1}^{m} w_t^{(j)} ∫ h(x_t, s_t^{(j)}) f(x_t; μ_t(S_t^{(j)}), Σ_t(S_t^{(j)})) dx_t,    (9.101)
with

W_t ≜ Σ_{j=1}^{m} w_t^{(j)},    (9.102)
where (9.99) follows from (9.97); (9.100) follows from (9.98); and in (9.100), f(·; μ, Σ) denotes a complex Gaussian density function with mean μ and covariance matrix Σ. In particular, the MMSE channel estimate is given by

â_t = E{a_t | Y_t} ≈ (1/W_t) Σ_{j=1}^{m} w_t^{(j)} h^H μ_t(S_t^{(j)}).    (9.103)
In other words, we can let h(x_t, s_t) = a_t = h^H x_t, implying that the inner integral in (9.100) equals h^H μ_t(S_t). Moreover, the a posteriori symbol probability can be estimated as

P(s_t = a_i | Y_t) ≈ (1/W_t) Σ_{j=1}^{m} w_t^{(j)} δ(s_t^{(j)} = a_i) ≜ P̂(s_t = a_i | Y_t), a_i ∈ A,    (9.104)
where δ(·) is an indicator function such that δ(s_t = a_i) = 1 if s_t = a_i, and δ(s_t = a_i) = 0 otherwise. This corresponds to choosing h(x_t, s_t) = δ(s_t = a_i). Note that a hard decision on the symbol s_t is obtained by

ŝ_t = arg max_{a_i ∈ A} P̂(s_t = a_i | Y_t).    (9.105)
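The estimates (9.104) and (9.105) are simple weighted tallies over the Monte Carlo samples. A minimal sketch (the sample values and weights below are made up for illustration):

```python
import numpy as np

def symbol_posterior(samples, weights, alphabet):
    """Approximate P(s_t = a_i | Y_t) as in (9.104) from m properly weighted
    Monte Carlo samples of s_t, and return the hard decision (9.105)."""
    weights = np.asarray(weights, dtype=float)
    W = weights.sum()                                   # normalizing constant W_t
    probs = np.array([weights[np.asarray(samples) == a].sum() / W
                      for a in alphabet])
    return probs, alphabet[int(np.argmax(probs))]

# Toy example with hypothetical BPSK samples and importance weights.
samples = [+1, -1, +1, +1, -1]
weights = [0.5, 0.1, 0.2, 0.1, 0.1]
probs, s_hat = symbol_posterior(samples, weights, np.array([+1, -1]))
print(probs, s_hat)   # posterior probabilities and the hard decision
```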
When MPSK signals are transmitted [i.e., a_i = exp(j2πi/|A|) for i = 0, ..., |A| − 1, where j ≜ √(−1)], the estimated symbol ŝ_t may have a phase ambiguity. For instance, for BPSK signals, s_t ∈ {+1, −1}. It is easily seen from (9.81) that if both the symbol sequence {s_t} and the channel value sequence {a_t} are phase-shifted by π (resulting in {−s_t} and {−a_t}, respectively), no change is incurred in the observed signal {y_t}. Alternatively, in the state-space model (9.90)-(9.91), a phase shift of π on both the symbol sequence {s_t} and the state sequence {x_t} yields the same model for the observations. Hence such a phase ambiguity necessitates the use of differential encoding and decoding. Hereafter, we write μ_t^{(j)} ≜ μ_t(S_t^{(j)}) and Σ_t^{(j)} ≜ Σ_t(S_t^{(j)}). By applying the MKF techniques outlined in Section 8.3 to the flat-fading channel system, we obtain the following algorithm for generating properly weighted Monte Carlo samples {(S_t^{(j)}, w_t^{(j)})}_{j=1}^{m}. Algorithm 9.5: [SMC for adaptive detection in flat-fading channels: Gaussian noise]
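One recursion of a mixture-Kalman-filter update in the spirit of Algorithm 9.5 can be sketched as follows. This is an illustrative sketch rather than the book's exact pseudocode: for each stream, the predictive likelihood of y_t is evaluated under every candidate symbol via a Kalman prediction, the symbol is sampled from these normalized likelihoods, the stream's Kalman statistics are updated, and the importance weight is scaled by the total predictive likelihood. The parameters F, g, h, sigma2 at the bottom are hypothetical placeholders.

```python
import numpy as np

def mkf_step(streams, weights, y_t, F, g, h, sigma2, alphabet, rng):
    """One MKF recursion (sketch): streams[j] = (mu, Sigma) are the Kalman
    statistics of stream j; returns sampled symbols, updated streams, and
    updated importance weights."""
    new_streams, new_weights, symbols = [], [], []
    for (mu, Sigma), w in zip(streams, weights):
        mu_p = F @ mu
        Sigma_p = F @ Sigma @ F.conj().T + np.outer(g, g.conj())
        like, stats = np.empty(len(alphabet)), []
        for i, a in enumerate(alphabet):
            c = a * h.conj()
            y_mean = c @ mu_p
            y_var = np.real(c @ Sigma_p @ c.conj()) + sigma2
            # Complex Gaussian predictive density of y_t under symbol a.
            like[i] = np.exp(-np.abs(y_t - y_mean) ** 2 / y_var) / (np.pi * y_var)
            K = Sigma_p @ c.conj() / y_var
            stats.append((mu_p + K * (y_t - y_mean),
                          Sigma_p - np.outer(K, c @ Sigma_p)))
        i = rng.choice(len(alphabet), p=like / like.sum())
        symbols.append(alphabet[i])
        new_streams.append(stats[i])
        # Weight update: predictive likelihood under the uniform symbol prior.
        new_weights.append(w * like.sum() / len(alphabet))
    return symbols, new_streams, new_weights

# Scalar-state toy example (hypothetical parameters), m = 5 streams, BPSK.
rng = np.random.default_rng(2)
F = np.array([[0.9 + 0j]]); g = np.array([0.4 + 0j]); h = np.array([1.0 + 0j])
streams = [(np.zeros(1, dtype=complex), np.eye(1, dtype=complex)) for _ in range(5)]
weights = [1.0] * 5
syms, streams, weights = mkf_step(streams, weights, 0.7 - 0.2j,
                                  F, g, h, 0.2, [1, -1], rng)
```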
The correctness of Algorithm 9.5 is stated by the following result, whose proof is given in the Appendix (Section 9.6.1). Proposition 9.1: The samples {(S_t^{(j)}, w_t^{(j)})}_{j=1}^{m} drawn by Algorithm 9.5 are properly weighted with respect to p(S_t | Y_t), provided that {(S_{t−1}^{(j)}, w_{t−1}^{(j)})}_{j=1}^{m} are proper at time (t − 1). Algorithm 9.5 is depicted in Fig. 9.13. It is seen that at any time t, the only quantities that need to be stored are {(s_t^{(j)}, μ_t^{(j)}, Σ_t^{(j)}, w_t^{(j)})}_{j=1}^{m}. At each time t, the dominant computation in this receiver involves the m one-step Kalman filter updates. Since the m samplers operate independently and in parallel, such a sequential Monte Carlo receiver is well suited for massively parallel implementation using VLSI systolic array technology [241]. Figure 9.13. Adaptive Bayesian receiver for flat-fading Gaussian noise channels based on mixture Kalman filtering.
9.5.3 Delayed Estimation

Since the fading process is highly correlated, future received signals contain information about the current data and channel state, and a delayed estimate is usually more accurate than the concurrent estimate. This is true for any channel with memory, and it is especially prominent when the transmitted symbols are coded, in which case not only the channel states but also the data symbols are highly correlated. In delayed estimation, instead of making an inference on (x_t, s_t) instantaneously with the posterior distribution p(x_t, s_t | Y_t), we delay this inference to a later time (t + Δ), Δ ≥ 0, using the distribution p(x_t, s_t | Y_{t+Δ}). Here we discuss two methods for delayed estimation: the delayed-weight method and the delayed-sample method.

Delayed-Weight Method

From Algorithm 9.5 we note by induction that if the set {(S_t^{(j)}, w_t^{(j)})}_{j=1}^{m} is properly weighted with respect to p(S_t | Y_t), then the set {(S_{t+δ}^{(j)}, w_{t+δ}^{(j)})}_{j=1}^{m} is properly weighted with respect to p(S_{t+δ} | Y_{t+δ}), δ > 0. Hence, if we focus our attention on S_t at time (t + δ) and let h(x_t, s_t) = δ(s_t = a_i) as in (9.104), we obtain a delayed estimate of the symbol:

P(s_t = a_i | Y_{t+δ}) ≈ (1/W_{t+δ}) Σ_{j=1}^{m} w_{t+δ}^{(j)} δ(s_t^{(j)} = a_i), a_i ∈ A.    (9.115)
Since the weights {w_{t+δ}^{(j)}} contain information about the signals (y_{t+1}, ..., y_{t+δ}), the estimate in (9.115) is usually more accurate than the concurrent one. Note that such delayed estimation incurs no additional computation, but it requires extra memory for storing the recent portions of the sample trajectories. As will be seen in the simulation examples, for uncoded systems this simple delayed-weight method is quite effective in improving detection performance over the concurrent method. For coded systems, however, this method is not sufficient to exploit the constraint structures of both the channel and the symbols, and we must resort to the delayed-sample method, described next.

Delayed-Sample Method

An alternative method is to generate both the delayed samples and the weights based on the signals Y_{t+Δ}, hence making p(S_t | Y_{t+Δ}) the target distribution at time (t + Δ). This procedure provides better Monte Carlo samples, since it uses the future information (y_{t+1}, ..., y_{t+Δ}) in generating the current sample of s_t. But the algorithm is also more demanding, both analytically and computationally, because of the need to marginalize out s_{t+δ} for δ = 1, ..., Δ. For each possible "future" symbol sequence at time t + Δ − 1 [i.e., (s_t, s_{t+1}, ..., s_{t+Δ−1}) ∈ A^Δ, a total of |A|^Δ possibilities], we keep the value of a Δ-step Kalman filter, where
the filter statistics are carried forward from time t − 1 and updated along each hypothesized future symbol sequence.
The following is the delayed-sample algorithm for adaptive detection in flat-fading channels with Gaussian noise. Algorithm 9.6: [Delayed-sample SMC algorithm for adaptive detection in flat-fading channels: Gaussian noise]
The dominant computation of the delayed-sample method above at each time t involves the (m |A|^Δ) one-step Kalman filter updates, which, as before, can be carried out in parallel. Finally, we note that the delayed-sample method can be used in conjunction with the delayed-weight method. For example, using the delayed-sample method, we generate delayed samples and weights based on the signals Y_{t+Δ}. Then, with an additional delay δ, we can use the following delayed-weight estimate of the a posteriori symbol probability:

P(s_t = a_i | Y_{t+Δ+δ}) ≈ (1/W_{t+Δ+δ}) Σ_{j=1}^{m} w_{t+Δ+δ}^{(j)} δ(s_t^{(j)} = a_i), a_i ∈ A.    (9.121)
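Estimates of the form (9.115) and (9.121) simply re-tally the symbol drawn at an earlier time using the current weights. A minimal sketch, with made-up stream trajectories and weights:

```python
import numpy as np

def delayed_weight_estimate(symbol_histories, current_weights, delay, alphabet):
    """Delayed-weight estimate: re-estimate the symbol transmitted `delay`
    steps ago using the *current* importance weights, which already reflect
    the observations received since then.  symbol_histories[j] is the stored
    symbol trajectory of stream j (most recent symbol last)."""
    w = np.asarray(current_weights, dtype=float)
    past = np.array([hist[-1 - delay] for hist in symbol_histories])
    probs = np.array([w[past == a].sum() for a in alphabet]) / w.sum()
    return alphabet[int(np.argmax(probs))], probs

# Hypothetical trajectories of m = 4 streams over 3 time steps (BPSK).
hists = [[+1, -1, +1], [+1, -1, -1], [-1, +1, +1], [+1, -1, +1]]
s_hat, probs = delayed_weight_estimate(hists, [0.4, 0.3, 0.1, 0.2],
                                       delay=1, alphabet=[+1, -1])
print(s_hat, probs)
```

The design point here is exactly the one made in the text: no extra Kalman updates are required, only the memory to keep the last `delay + 1` symbols of every stream.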
Simulation Examples

The fading process is modeled as the output of a Butterworth filter of order r = 3 driven by a complex white Gaussian noise process. The normalized cutoff frequency of this filter is 0.05, corresponding to a normalized Doppler frequency (with respect to the symbol rate 1/T) of f_d T = 0.05, which is a fast-fading scenario. Specifically, the sequence of fading coefficients {a_t} is modeled by an ARMA(3,3) process,

a_t = −φ_1 a_{t−1} − φ_2 a_{t−2} − φ_3 a_{t−3} + θ_0 u_t + θ_1 u_{t−1} + θ_2 u_{t−2} + θ_3 u_{t−3},    (9.122)
where u_t ~ N_c(0, 1). The filter coefficients in (9.122) are chosen such that Var{a_t} = 1. It is assumed that BPSK modulation is employed (i.e., the transmitted symbols s_t ∈ {+1, −1}). To demonstrate the high performance of the Monte Carlo adaptive receiver, in the following simulation examples we compare the performance (in terms of bit-error rate) of the Monte Carlo receivers with that of three other receiver schemes: the receiver with known channel states (yielding the known channel lower bound), a genie-aided receiver (yielding the genie-aided lower bound), and the conventional differential detector.
We consider the performance of the sequential Monte Carlo receiver in a fading Gaussian noise channel without coding. In this case, differential encoding and decoding are employed to resolve the phase ambiguity. The adaptive receiver implements Algorithm 9.5, described in Section 9.5.2. The number of Monte Carlo samples drawn at each time was empirically set to m = 50. Simulation results have shown that the performance did not improve much when m was increased to 100, whereas it degraded notably when m was reduced to 20. Algorithm 8.9 was employed to maintain the efficiency of the algorithm, with resampling performed whenever the effective sample size dropped below a threshold. The delayed-weight method discussed in Section 9.5.3 was used to extract further information from future received signals, which resulted in improved performance compared with concurrent estimation. In each simulation, the sequential Monte Carlo algorithm was run on 10,000 symbols (i.e., t = 1, ..., 10,000). In counting the symbol detection errors, the first 50 symbols were discarded to allow the algorithm to reach steady state. In Fig. 9.14, the bit-error rate performance versus the signal-to-noise ratio (defined as Var{a_t}/Var{n_t}) is plotted for delay values δ = 0 (concurrent estimate), δ = 1, and δ = 2. In the figure we also plot the known channel lower bound, the genie-aided lower bound, and the BER curve of the differential detector. From this figure it is seen that for the uncoded case, with only a small amount of delay, the performance of the sequential Monte Carlo receiver is improved significantly by the delayed-weight method compared with the concurrent estimate. Even with the concurrent estimate, the proposed adaptive receiver does not exhibit an error floor, as the differential detector does. Moreover, with a delay of δ = 2, the proposed adaptive receiver essentially achieves the genie-aided lower bound.
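The effective-sample-size criterion used to trigger resampling can be sketched in a few lines. The book's Algorithm 8.9 is a specific resampling scheme; plain multinomial resampling is shown below as a stand-in with the same purpose, and the weights used are made up to force a low effective sample size.

```python
import numpy as np

def ess(weights):
    """Effective sample size of a set of importance weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def resample_if_needed(streams, weights, threshold, rng):
    """Multinomial resampling when the effective sample size drops below a
    threshold; resampled streams restart with equal weights."""
    m = len(weights)
    if ess(weights) >= threshold:
        return streams, weights
    w = np.asarray(weights, dtype=float)
    idx = rng.choice(m, size=m, p=w / w.sum())
    return [streams[i] for i in idx], [1.0] * m

rng = np.random.default_rng(0)
streams = list(range(50))                  # stand-ins for the m = 50 sample streams
weights = [1.0] * 49 + [100.0]             # one dominant weight: low ESS
print(ess(weights))                        # well below m
streams, weights = resample_if_needed(streams, weights, threshold=25.0, rng=rng)
```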
We have also implemented the delayed-sample method for this case and found that it offers little improvement over the delayed-weight method. Figure 9.14. BER performance of a sequential Monte Carlo receiver in a fading channel with Gaussian noise and without coding. The delayed-weight method is used. The BER curves corresponding to delays δ = 0, δ = 1, and δ = 2 are shown. Also shown are the BER curves for the known channel lower bound, the genie-aided lower bound, and the differential detector.
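A fading simulator in the spirit of the setup above can be sketched with a standard filter-design routine. Note that `scipy.signal.butter` returns whatever coefficients its design produces, not necessarily the exact ARMA(3,3) values of (9.122); the output is simply rescaled so that Var{a_t} = 1, as in the text.

```python
import numpy as np
from scipy.signal import butter, lfilter

# 3rd-order lowpass Butterworth filter with cutoff 0.05 cycles/sample
# (scipy normalizes to the Nyquist frequency 0.5, hence 2 * 0.05 here),
# driven by unit-variance complex white Gaussian noise u_t.
b, a = butter(3, 2 * 0.05)                  # Doppler f_d * T = 0.05
rng = np.random.default_rng(0)
n = 100_000
u = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
alpha = lfilter(b, a, u)                    # correlated fading coefficients a_t
alpha /= np.sqrt(np.mean(np.abs(alpha) ** 2))   # normalize so Var{a_t} = 1
```

Because the filter is narrowband, adjacent fading coefficients are strongly correlated, which is precisely what delayed estimation exploits.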
9.5.4 Adaptive Receiver in Fading Gaussian Noise Channels: Coded Case

So far we have considered the problem of detecting uncoded independent symbols in flat-fading channels. In what follows we extend the adaptive receiver technique presented in Section 9.5.2 to the problem of sequential decoding of information bits in a convolutionally coded system signaling through a flat-fading channel. Consider a binary rate-k/n convolutional encoder of overall constraint length ν. Suppose that the encoder starts in the all-zero state at time t = 0. The input to the encoder at time t is a block of information bits d_t = (d_{t,1}, ..., d_{t,k}); the encoder output at time t is a block of code bits b_t = (b_{t,1}, ..., b_{t,n}). For simplicity we assume here that BPSK modulation is employed. Then the transmitted symbols at time t are s_t = (s_{t,1}, ..., s_{t,n}), where s_{t,l} = 2 b_{t,l} − 1, l = 1, ..., n. (That is, s_{t,l} = 1 if b_{t,l} = 1, and s_{t,l} = −1 if b_{t,l} = 0.) Since b_t is determined by (d_t, d_{t−1}, ..., d_{t−ν}), so is s_t. Hence we can write

s_t = ψ(d_t, d_{t−1}, ..., d_{t−ν})    (9.123)
for some function ψ(·) that is determined by the structure of the encoder. Let y_t = (y_{t,1}, ..., y_{t,n}) be the received signals at time t and let a_t = (a_{t,1}, ..., a_{t,n}) be the channel states corresponding to b_t and d_t. Recall that a_{t−1,n} = h^H x_{t−1,n}. Denote also D_t ≜ (d_0, d_1, ..., d_t) and Y_t ≜ (y_0, y_1, ..., y_t). The Monte Carlo samples recorded at time (t − 1) are {(D_{t−1}^{(j)}, ρ_{t−1}^{(j)}, w_{t−1}^{(j)})}_{j=1}^{m}, where ρ_{t−1}^{(j)} contains the mean and covariance matrix of the channel state vector x_{t−1,n} conditioned on D_{t−1}^{(j)} and Y_{t−1}. That is,

ρ_{t−1}^{(j)} ≜ [μ_{t−1,n}^{(j)}, Σ_{t−1,n}^{(j)}], with p(x_{t−1,n} | D_{t−1}^{(j)}, Y_{t−1}) = N_c(μ_{t−1,n}^{(j)}, Σ_{t−1,n}^{(j)}).    (9.124)
As before, given the information bit sequence, the corresponding Kalman filter statistics (the conditional mean and covariance of the channel state) are obtained by a Kalman filter. Our algorithm is as follows. Algorithm 9.7: [SMC algorithm for adaptive decoding in flat-fading channels: Gaussian noise]
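The code-constraint map ψ of (9.123), which the decoding algorithm must evaluate for every candidate information block, can be sketched for the rate-1/2 code used in the simulation example below (generators 23 and 25 in octal, constraint length 5). The shift-register bit ordering is one reasonable convention, not necessarily the book's.

```python
def conv_encode_bpsk(bits, gens=(0o23, 0o25), K=5):
    """Map information bits to BPSK symbol blocks through a rate-1/2
    convolutional encoder, illustrating s_t = psi(d_t, ..., d_{t-nu}) in
    (9.123).  The encoder starts in the all-zero state."""
    state = 0                                # the K-1 most recent input bits
    symbols = []
    for d in bits:
        reg = (d << (K - 1)) | state         # current input bit + memory
        out = [bin(reg & g).count("1") % 2 for g in gens]   # code bits b_{t,l}
        symbols.append([2 * b - 1 for b in out])            # BPSK: 0 -> -1, 1 -> +1
        state = reg >> 1                     # shift the register
    return symbols

print(conv_encode_bpsk([1, 0, 1, 1]))
```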
Following the same line of proof as in Section 9.6.1, it can be shown that the samples {(D_t^{(j)}, w_t^{(j)})}_{j=1}^{m} drawn by the procedure above are properly weighted with respect to p(D_t | Y_t), provided that the samples at time (t − 1) are properly weighted with respect to p(D_{t−1} | Y_{t−1}). Note that in the coded case, phase ambiguity is prevented by the code constraint (9.123), and differential encoding is not needed. At each time t, the major computation involved in the adaptive decoding algorithm above is the (m n 2^k) one-step Kalman filter updates, which can be carried out by (m 2^k) processing units, each computing an n-step update. (Note that d_t contains k bits of information.) Furthermore, if the delayed-sample method outlined in Section 9.5.3 is employed for delayed estimation, then for a delay of Δ time units, a total of (m n 2^{k(Δ+1)}) one-step Kalman filter updates are needed at each time t, which can be distributed among (m 2^{k(Δ+1)}) processing units, each computing an n-step update.

Simulation Examples

We next show the performance of the proposed sequential Monte Carlo receiver in a coded system. The information bits are encoded using a rate-1/2, constraint-length-5 convolutional code (with generators 23 and 25 in octal notation). The receiver implements Algorithm 9.7 with a combination of the delayed-sample and delayed-weight methods. That is, the information bit samples are drawn by using the delayed-sample method with delay Δ, whereas the importance weights are obtained after a further delay of δ. The coded BER performance of this adaptive receiver with different delays, together with that of the known channel lower bound, the genie-aided lower bound, and the differential detector, is plotted in Fig. 9.15. It is seen that, unlike the uncoded case, for coded systems the delayed-sample method is very effective in improving receiver performance. With a sample delay of Δ = 5 and weight delay of δ = 10, the receiver performance is close to the genie-aided lower bound. Figure 9.15.
BER performance of a sequential Monte Carlo receiver in a fading channel with Gaussian noise for a convolutionally coded system. The convolutional code has rate 1/2 and constraint length 5. A combination of the delayed-sample (with delay Δ) and delayed-weight (with delay δ) methods is used. The BER curves corresponding to delays Δ = 1, Δ = 3, and Δ = 5 are shown. Also shown are the BER curves for the known channel lower bound, the genie-aided lower bound, and the differential detector.
9.5.5 Adaptive Receivers in Fading Impulsive Noise Channels

As noted in Chapter 4, the ambient noise in many mobile communication channels is impulsive, due to various natural and human-made impulsive sources. In [546], a technique is developed for signal detection in fading channels with impulsive noise based on the Masreliez nonlinear filter [309] (cf. Section 7.3.1), making use of pilot symbols and decision feedback. In this section we discuss an adaptive receiver for flat-fading channels with impulsive ambient noise using the sequential Monte Carlo technique. As in the case of Gaussian noise fading channels, we first develop adaptive receivers for uncoded systems. Consider the state-space system given by (9.93)-(9.94). Note that given both the symbol sequence S_t and the noise indicator sequence I_t ≜ (I_0, I_1, ..., I_t), this system is linear and Gaussian. Hence,

p(x_t | S_t, I_t, Y_t) = N_c(μ_t(S_t, I_t), Σ_t(S_t, I_t)),    (9.136)
where the mean μ_t(S_t, I_t) and the covariance matrix Σ_t(S_t, I_t) can be obtained by a Kalman filter run with the given S_t and I_t. As before, we seek to obtain samples {(S_t^{(j)}, I_t^{(j)}, w_t^{(j)})}_{j=1}^{m} properly weighted with respect to the distribution p(S_t, I_t | Y_t). These samples are then used to estimate the transmitted symbols and channel parameters. Algorithm 9.8: [SMC algorithm for adaptive detection in flat-fading channels: impulsive noise]
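A single-stream update in the spirit of Algorithm 9.8 can be sketched as follows. This is an illustrative sketch rather than the book's exact pseudocode: the symbol s_t and noise indicator I_t are drawn jointly by enumerating all |A| x 2 combinations of candidate symbol and noise state, each weighted by its Kalman predictive likelihood and the mixture prior. The model parameters at the bottom are hypothetical placeholders.

```python
import numpy as np

def mkf_step_impulsive(mu, Sigma, w, y_t, F, g, h, sigma2, kappa, eps,
                       alphabet, rng):
    """One stream update (sketch): jointly sample (s_t, I_t) from their
    posterior given the past, update the Kalman statistics, and rescale the
    importance weight by the total predictive likelihood."""
    mu_p = F @ mu
    Sigma_p = F @ Sigma @ F.conj().T + np.outer(g, g.conj())
    cand, post = [], []
    for a in alphabet:
        c = a * h.conj()
        y_mean = c @ mu_p
        p_x = np.real(c @ Sigma_p @ c.conj())       # signal-part predictive variance
        # Two noise branches: nominal (prob 1 - eps) and impulsive (prob eps).
        for prior, var in ((1.0 - eps, sigma2), (eps, kappa * sigma2)):
            y_var = p_x + var
            like = prior * np.exp(-np.abs(y_t - y_mean) ** 2 / y_var) / (np.pi * y_var)
            K = Sigma_p @ c.conj() / y_var
            cand.append((a, var, mu_p + K * (y_t - y_mean),
                         Sigma_p - np.outer(K, c @ Sigma_p)))
            post.append(like)
    post = np.asarray(post)
    i = rng.choice(len(cand), p=post / post.sum())
    a, var, mu_new, Sigma_new = cand[i]
    return a, var, mu_new, Sigma_new, w * post.sum()

# Scalar-state toy example with hypothetical parameters (BPSK, eps = 0.1).
rng = np.random.default_rng(3)
F = np.array([[0.9 + 0j]]); g = np.array([0.4 + 0j]); h_vec = np.array([1.0 + 0j])
mu, Sigma = np.zeros(1, dtype=complex), np.eye(1, dtype=complex)
s, var, mu, Sigma, w = mkf_step_impulsive(mu, Sigma, 1.0, 2.5 + 0.5j,
                                          F, g, h_vec, 0.1, 100.0, 0.1,
                                          [1, -1], rng)
```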
The proof that Algorithm 9.8 gives properly weighted samples is similar to that for the Gaussian fading channels in Section 9.6.1. The dominant computation involved in the algorithm at each time t consists of m one-step Kalman filter updates. If the delayed-sample method is employed for delayed estimation with a delay of Δ time units, then at each time t, (m (2|A|)^Δ) one-step Kalman filter updates are needed [because |A × {1, 2}| = 2|A|], which can be implemented in parallel. Moreover, we can also develop the adaptive receiver algorithm for coded systems in impulsive noise flat-fading channels, similar to the one discussed in Section 9.5.4. For a rate-k/n convolutional code, if the delayed-sample method is used with a delay of Δ time units, then at each time t a total of (m n 2^{(k+n)(Δ+1)}) one-step Kalman filter updates are needed, which can be distributed among (m 2^{(k+n)(Δ+1)}) processors, each computing one n-step update. (With a delay of Δ units, there are 2^{k(Δ+1)} possible code vectors and 2^{n(Δ+1)} possible noise indicator vectors.)

Simulation Examples

The uncoded BER performance of the proposed adaptive receiver, together with that of the other three receiver schemes, in a fading channel with impulsive ambient noise is shown in Fig. 9.16. The noise distribution is given by the two-term Gaussian mixture model (9.83) with κ = 100 and ε = 0.1. As mentioned earlier, in this case the genie provides not only the observation of the noise-corrupted, modulation-free channel coefficients, but also the true noise indicator {I_t}, to the channel estimator. It is seen from this figure that, again, the delayed-weight method offers significant improvement over the concurrent estimate, although in this case the BER curve for δ = 2 remains slightly above the genie-aided lower bound. Furthermore, the proposed adaptive receiver does not have the error floor exhibited by the simple differential detector. Figure 9.16.
BER performance of a sequential Monte Carlo receiver in a fading channel with impulsive noise and without coding; ε = 0.1, κ = 100. The delayed-weight method is used. BER curves corresponding to delays δ = 0, δ = 1, and δ = 2 are shown. Also shown are the BER curves for the known channel lower bound, the genie-aided lower bound, and the differential detector.
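The impulsive-noise setting used in this example (ε = 0.1, κ = 100) is easy to reproduce; the following sketch draws samples from the mixture (9.83) and checks the overall-variance formula (1 − ε)σ² + εκσ² stated earlier.

```python
import numpy as np

def sample_mixture_noise(n, eps=0.1, kappa=100.0, sigma2=1.0, rng=None):
    """Draw n samples of the two-term Gaussian mixture noise of (9.83):
    nominal N_c(0, sigma2) with probability 1 - eps, impulsive
    N_c(0, kappa * sigma2) with probability eps."""
    rng = np.random.default_rng(rng)
    impulsive = rng.random(n) < eps                 # noise indicator
    var = np.where(impulsive, kappa * sigma2, sigma2)
    # Circularly symmetric complex Gaussian: independent real/imaginary
    # parts, each carrying half the total variance.
    std = np.sqrt(var / 2.0)
    return std * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

n_t = sample_mixture_noise(200_000, eps=0.1, kappa=100.0, sigma2=1.0, rng=0)
total_var = (1 - 0.1) * 1.0 + 0.1 * 100.0 * 1.0    # (1-eps)*sigma2 + eps*kappa*sigma2
print(np.mean(np.abs(n_t) ** 2), total_var)        # empirical vs. theoretical variance
```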
In summary, in this section we have discussed adaptive receiver algorithms for both uncoded and coded systems, where the delayed-weight method, the delayed-sample method, and a combination of the two are employed to improve estimation accuracy. The Monte Carlo receiver techniques can also handle impulsive ambient noise. The computational complexities of the various algorithms discussed in this chapter are summarized in Table 9.1. Note that although the delayed-sample SMC estimation method offers a significant performance gain over the simple SMC method, it has a higher computational complexity. In [542], a number of alternative delayed estimation methods based on SMC are developed that trade off performance and complexity. Finally, note that the adaptive SMC receivers developed in this section require knowledge of the second-order statistics of the fading process. In [165], a nonparametric SMC receiver was developed that is based on wavelet modeling of the fading process and does not require knowledge of the channel statistics. Table 9.1. Computational Complexities of the Proposed Sequential Monte Carlo Receiver Algorithms under Different Conditions, in Terms of the Number of One-Step Kalman Filter Updates Needed at Each Time t.