8.2 Bayesian Signal Processing


8.2.1 Bayesian Framework

A typical statistical signal processing problem can be stated as follows: given a set of observations $\mathbf{Y} = [y_1, y_2, \ldots, y_m]$, we would like to make statistical inferences about some unknown quantities $\mathbf{X} = [x_1, x_2, \ldots, x_n]$. Typically, the observations $\mathbf{Y}$ are functions of the desired unknown quantities $\mathbf{X}$ and some unknown "nuisance" parameters $\boldsymbol{\Theta} = [\theta_1, \theta_2, \ldots, \theta_l]$. To illustrate this, consider the following classical signal processing example of equalization.

Example 1: Equalization  Suppose that we want to transmit binary symbols $b[i] \in \{+1, -1\}$, $i = 1, 2, \ldots$, through an intersymbol interference (ISI) channel whose input-output relationship is given by

Equation 8.1

$$ y[i] = \sum_{j=0}^{L-1} g_j\, b[i-j] + n[i], \qquad i = 1, 2, \ldots $$


where $\mathbf{g} = (g_0, g_1, \ldots, g_{L-1})$ represents the unknown complex channel response and $\{n[i]\}$ is a sequence of i.i.d. complex Gaussian noise samples with $n[i] \sim \mathcal{N}_c(0, \sigma^2)$. The inference problem of interest is to estimate the transmitted symbols $\mathbf{X} = \{b[i]\}$ based on the received signal $\mathbf{Y} = \{y[i]\}$. The nuisance parameters are $\boldsymbol{\Theta} = \{g_0, \ldots, g_{L-1}, \sigma^2\}$.
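
To make the channel model (8.1) concrete, here is a minimal simulation sketch (not from the original text); the channel length $L = 3$, the tap values, and the noise level are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

M, L = 1000, 3                # block length and channel memory (arbitrary)
sigma2 = 0.2                  # noise variance sigma^2 (arbitrary)
g = np.array([1.0, 0.5 - 0.3j, 0.2 + 0.1j])   # complex taps g_0, ..., g_{L-1}

b = rng.choice([+1.0, -1.0], size=M)          # i.i.d. binary symbols b[i]

# Complex white Gaussian noise n[i] with E|n[i]|^2 = sigma^2.
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# y[i] = sum_{j=0}^{L-1} g_j b[i-j] + n[i]; the convolution implements the
# ISI sum (symbols before the start of the block are taken to be zero).
y = np.convolve(b, g)[:M] + n
```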

In the Bayesian approach to statistical signal processing, all unknown quantities [i.e., $(\mathbf{X}, \boldsymbol{\Theta})$] are treated as random variables with some prior distribution described by a probability density $p(\mathbf{X}, \boldsymbol{\Theta})$. Bayesian inference is made based on the joint posterior distribution of these unknowns, described by the density

Equation 8.2

$$ p(\mathbf{X}, \boldsymbol{\Theta} \mid \mathbf{Y}) = \frac{p(\mathbf{Y} \mid \mathbf{X}, \boldsymbol{\Theta})\, p(\mathbf{X}, \boldsymbol{\Theta})}{\int p(\mathbf{Y} \mid \mathbf{X}, \boldsymbol{\Theta})\, p(\mathbf{X}, \boldsymbol{\Theta})\, d\mathbf{X}\, d\boldsymbol{\Theta}} \;\propto\; p(\mathbf{Y} \mid \mathbf{X}, \boldsymbol{\Theta})\, p(\mathbf{X}, \boldsymbol{\Theta}). $$


Note that, typically, the joint posterior distribution is completely known up to a normalizing constant. If we are interested in making an inference about the $i$th component $x_i$ of $\mathbf{X}$ [i.e., we wish to compute $E\{h(x_i) \mid \mathbf{Y}\}$ for some function $h(\cdot)$], this quantity is given by

Equation 8.3

$$ E\{h(x_i) \mid \mathbf{Y}\} = \int h(x_i)\, p(\mathbf{X}, \boldsymbol{\Theta} \mid \mathbf{Y})\, d\mathbf{X}\, d\boldsymbol{\Theta} $$

Equation 8.4

$$ = \int h(x_i)\, p(x_i \mid \mathbf{Y})\, dx_i, $$


where $p(x_i \mid \mathbf{Y}) = \int p(\mathbf{X}, \boldsymbol{\Theta} \mid \mathbf{Y})\, d\boldsymbol{\Theta}\, d\mathbf{X}_{-i}$ and $\mathbf{X}_{-i} \triangleq [x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n]$ denotes the components of $\mathbf{X}$ other than $x_i$. These computations are usually not easy in practice.
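
As a toy illustration of the marginalization in (8.3)-(8.4), suppose $\mathbf{X} = (x_1, x_2)$ with $x_k \in \{-1, +1\}$ and a scalar nuisance parameter $\Theta$ taking one of three values, so that the integrals reduce to finite sums. The sketch below is not from the original text; the unnormalized posterior table is invented purely for demonstration.

```python
import numpy as np

# Unnormalized joint posterior p(x1, x2, theta | Y), known only up to a
# normalizing constant. Axis 0: x1 in {-1,+1}; axis 1: x2 in {-1,+1};
# axis 2: three possible values of theta. The numbers are arbitrary.
post = np.array([[[0.20, 0.10, 0.05],
                  [0.30, 0.20, 0.10]],
                 [[0.60, 0.40, 0.20],
                  [0.10, 0.30, 0.15]]])

x1_values = np.array([-1.0, +1.0])

# Marginalize out theta and x2, then normalize to get p(x1 | Y).
p_x1 = post.sum(axis=(1, 2))
p_x1 /= p_x1.sum()

# E{h(x1) | Y} as in (8.4), with h the identity function.
print("p(x1 | Y) =", p_x1)
print("E{x1 | Y} =", np.dot(x1_values, p_x1))
```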

For an introductory treatment of the Bayesian philosophy, including the selection of prior distributions, see the textbooks [51, 136, 248]. An account of criticism of the Bayesian approach to data analysis can be found in [33, 422], and a defense of the Bayesian choice can be found in [419].

8.2.2 Batch Processing versus Adaptive Processing

Depending on how the data are processed and on how the inference is made, most signal processing methods fall into one of two categories: batch processing and adaptive (i.e., sequential) processing. In batch signal processing, the entire data block $\mathbf{Y}$ is received and stored before it is processed, and the inference about $\mathbf{X}$ is made based on the entire block. In adaptive processing, on the other hand, inference is made sequentially (i.e., online) as the data are received: at time $t$, after a new sample $y_t$ is received, the inference about some or all elements of $\mathbf{X}$ is updated. In this chapter we focus on optimal signal processing under the Bayesian framework for both batch and adaptive processing. We next illustrate batch and adaptive Bayesian signal processing, respectively, using the equalization example above.

Example 2: Batch Equalization  Consider the equalization problem in Example 1. Let $\mathbf{Y} = [y[1], y[2], \ldots, y[M]]$ be the received signal and $\mathbf{X} = [b[1], b[2], \ldots, b[M]]$, $b[i] \in \{+1, -1\}$, be the transmitted symbols. Denote $\mathbf{g} = [g_0, g_1, \ldots, g_{L-1}]^T$. An optimal batch processing procedure for this problem is as follows. Assume that the unknown quantities $\mathbf{g}$, $\sigma^2$, and $\mathbf{X}$ are independent of one another and have prior densities $p(\mathbf{g})$, $p(\sigma^2)$, and $p(\mathbf{X})$, respectively. Since $\{n[i]\}$ is a sequence of i.i.d. Gaussian samples, the joint posterior density of these unknown quantities $(\mathbf{g}, \sigma^2, \mathbf{X})$ given the received signal $\mathbf{Y}$ takes the form

Equation 8.5

$$ p(\mathbf{g}, \sigma^2, \mathbf{X} \mid \mathbf{Y}) \propto p(\mathbf{Y} \mid \mathbf{g}, \sigma^2, \mathbf{X})\, p(\mathbf{g})\, p(\sigma^2)\, p(\mathbf{X}) $$

Equation 8.6

$$ = \left(\frac{1}{\pi \sigma^2}\right)^{\! M} \exp\left\{ -\frac{1}{\sigma^2} \sum_{i=1}^{M} \left| y[i] - \sum_{j=0}^{L-1} g_j\, b[i-j] \right|^2 \right\} p(\mathbf{g})\, p(\sigma^2)\, p(\mathbf{X}). $$


The a posteriori probabilities of the transmitted symbols can then be calculated from the joint posterior distribution (8.6) according to

Equation 8.7

$$ P(b[i] = +1 \mid \mathbf{Y}) = \sum_{\mathbf{X}:\, b[i] = +1} P(\mathbf{X} \mid \mathbf{Y}) $$

Equation 8.8

$$ = \sum_{\mathbf{X}:\, b[i] = +1} \int\!\!\int p(\mathbf{g}, \sigma^2, \mathbf{X} \mid \mathbf{Y})\, d\mathbf{g}\, d\sigma^2, \qquad i = 1, \ldots, M. $$


Clearly, the direct computation in (8.8) involves $2^{M-1}$ multidimensional integrals, which is certainly infeasible for most practical implementations, in which $M$ might be on the order of hundreds.
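
The cost of (8.8) is easy to demonstrate numerically. In the sketch below (an illustration, not the book's algorithm), the channel $\mathbf{g}$ and noise variance $\sigma^2$ are assumed known, so the integrals over the nuisance parameters drop out and only the sum over the $2^M$ candidate symbol sequences remains; with unknown $(\mathbf{g}, \sigma^2)$, each term would additionally be integrated against the priors. Even this simplified version becomes infeasible beyond $M$ of a few tens.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

M, L = 10, 2                     # small M: the sum below has 2^M terms
sigma2 = 0.2
g = np.array([1.0, 0.5 - 0.3j])  # channel taps, assumed known here

b_true = rng.choice([+1.0, -1.0], size=M)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = np.convolve(b_true, g)[:M] + n

# Enumerate all 2^M candidate sequences X and weigh each by its likelihood
# p(Y | g, sigma^2, X); the uniform prior p(X) cancels in the normalization.
num = np.zeros(M)                # accumulates weights of sequences with b[i] = +1
den = 0.0
for X in itertools.product([+1.0, -1.0], repeat=M):
    X = np.asarray(X)
    resid = y - np.convolve(X, g)[:M]
    w = np.exp(-np.sum(np.abs(resid) ** 2) / sigma2)
    den += w
    num += w * (X == +1.0)

post_plus = num / den            # P(b[i] = +1 | Y), i.e., (8.8) with known g, sigma^2
b_map = np.where(post_plus > 0.5, +1.0, -1.0)
print("symbol errors:", int(np.sum(b_map != b_true)))
```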

Example 3: Adaptive Equalization  Again consider the equalization problem in Example 1. Define $\mathbf{Y}_i \triangleq [y[1], \ldots, y[i]]$ and $\mathbf{B}_i \triangleq [b[1], \ldots, b[i]]$ for any $i$. We now look at the problem of online estimation of the symbol $b[i]$ based on the received signals up to time $i + \delta$, for some fixed delay $\delta \geq 0$. This is a problem of making Bayesian inference with respect to the posterior density

Equation 8.9

$$ p\!\left(\mathbf{g}, \sigma^2, \mathbf{B}_{i+\delta} \mid \mathbf{Y}_{i+\delta}\right). $$


An online symbol estimate can then be obtained from the marginal posterior distribution

Equation 8.10

$$ P\!\left(b[i] = +1 \mid \mathbf{Y}_{i+\delta}\right) = \sum_{\mathbf{B}_{i+\delta}:\, b[i] = +1} \int\!\!\int p\!\left(\mathbf{g}, \sigma^2, \mathbf{B}_{i+\delta} \mid \mathbf{Y}_{i+\delta}\right) d\mathbf{g}\, d\sigma^2. $$


Again we see that direct implementation of the optimal sequential Bayesian equalization above involves $2^{i+\delta-1}$ multidimensional integrals at time $i$, a number that grows exponentially with time.
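
This blow-up can be seen directly by tracking the exact sequential posterior over symbol sequences. The sketch below (again with $\mathbf{g}$ and $\sigma^2$ assumed known, and with delay $\delta = 0$) keeps one weight per hypothesized sequence $\mathbf{B}_i$ and must double the hypothesis set with every new observation; the sequential Monte Carlo methods discussed later replace this ever-growing set with a fixed number of weighted samples.

```python
import numpy as np

rng = np.random.default_rng(2)

T, L = 12, 2
sigma2 = 0.2
g = np.array([1.0, 0.5 - 0.3j])   # channel taps, assumed known here

b_true = rng.choice([+1.0, -1.0], size=T)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
y = np.convolve(b_true, g)[:T] + n

# Exact sequential update: one unnormalized weight per sequence hypothesis B_i.
hyps = [((), 1.0)]                # (sequence so far, weight)
for i in range(T):
    new_hyps = []
    for seq, w in hyps:
        for b_new in (+1.0, -1.0):
            ext = seq + (b_new,)
            # Predicted y[i] = sum_j g_j b[i-j] under this hypothesis.
            pred = sum(g[j] * ext[i - j] for j in range(L) if i - j >= 0)
            new_hyps.append((ext, w * np.exp(-abs(y[i] - pred) ** 2 / sigma2)))
    hyps = new_hyps
    print(f"time {i + 1}: tracking {len(hyps)} sequence hypotheses")

# Marginal posterior of the most recent symbol from the accumulated weights.
wsum = sum(w for _, w in hyps)
p_plus = sum(w for seq, w in hyps if seq[-1] == +1.0) / wsum
print("P(b[T] = +1 | Y_T) =", p_plus)
```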

It is seen from the discussion above that although the optimal (i.e., Bayesian) signal processing procedures achieve the best performance (i.e., the Bayesian solutions attain the minimum probability of error in symbol detection), they exhibit prohibitively high computational complexity and thus are generally not implementable in practice. The recently developed Monte Carlo methods for Bayesian computation provide a viable approach to solving many such optimal signal processing problems at reasonable computational cost.

8.2.3 Monte Carlo Methods

In a typical Bayesian analysis, the computations involved in eliminating nuisance parameters and other unknown quantities are so difficult that one must resort to numerical approaches to carry out the required summations and integrations. Among such numerical approaches, Monte Carlo methods are perhaps the most versatile, flexible, and powerful [275].

Suppose that we can generate random samples (either independent or dependent)

$$ \left(\mathbf{X}^{(1)}, \boldsymbol{\Theta}^{(1)}\right), \left(\mathbf{X}^{(2)}, \boldsymbol{\Theta}^{(2)}\right), \ldots, \left(\mathbf{X}^{(n)}, \boldsymbol{\Theta}^{(n)}\right) $$

from the joint posterior distribution (8.2), or random samples

$$ \mathbf{X}^{(1)}, \mathbf{X}^{(2)}, \ldots, \mathbf{X}^{(n)} $$

from the marginal posterior distribution $p(\mathbf{X} \mid \mathbf{Y})$. Then we can approximate the marginal posterior $p(x_i \mid \mathbf{Y})$ by the empirical distribution (i.e., the histogram) of the corresponding component of the Monte Carlo samples, $x_i^{(1)}, x_i^{(2)}, \ldots, x_i^{(n)}$, and approximate the inference (8.4) by

Equation 8.11

$$ E\{h(x_i) \mid \mathbf{Y}\} \approx \frac{1}{n} \sum_{j=1}^{n} h\!\left(x_i^{(j)}\right). $$
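
The sketch below illustrates (8.11) in a setting contrived so that the answer can be checked: the posterior $p(x \mid \mathbf{Y})$ is taken to be a Gaussian with known mean and variance (a stand-in for a posterior one can sample from; the values and the choice $h = \tanh$ are arbitrary), and the sample average of $h(x^{(j)})$ is compared with a deterministic numerical integral. In realistic problems the samples would instead come from a Markov chain Monte Carlo or sequential Monte Carlo procedure, as discussed below.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in posterior p(x | Y): Gaussian with known mean and variance, so the
# Monte Carlo estimate can be checked against a near-exact numerical answer.
mu, tau2 = 0.7, 0.5
n_samples = 100_000
x = rng.normal(mu, np.sqrt(tau2), size=n_samples)   # samples x^(1), ..., x^(n)

h = np.tanh                                         # some function h(.) of interest

mc_estimate = h(x).mean()                           # (1/n) sum_j h(x^(j)), Eq. (8.11)

# Reference value of E{h(x) | Y} by brute-force numerical integration.
grid = np.linspace(mu - 10 * np.sqrt(tau2), mu + 10 * np.sqrt(tau2), 20_001)
pdf = np.exp(-((grid - mu) ** 2) / (2 * tau2)) / np.sqrt(2 * np.pi * tau2)
reference = np.sum(h(grid) * pdf) * (grid[1] - grid[0])

print(f"Monte Carlo: {mc_estimate:.4f}   numerical integration: {reference:.4f}")

# The histogram of the samples approximates the marginal posterior p(x | Y).
counts, edges = np.histogram(x, bins=50, density=True)
```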


As noted in Section 8.1, most Monte Carlo techniques fall into one of the following two categories: Markov chain Monte Carlo methods, corresponding to batch processing, and sequential Monte Carlo methods, corresponding to adaptive processing. These are discussed in the remainder of this chapter.


