7.9 Appendix: Convergence of the RLS Linear MMSE Detector


7.9.1 Linear MMSE Detector and RLS Blind Adaptation Rule

Consider the following received signal model:

Equation 7.151

graphics/07equ151.gif


where A k , b k , and s k denote, respectively, the received amplitude, data bit, and spreading waveform of the k th user; i denotes the NBI signal; and graphics/435equ01.gif is the Gaussian noise. Assume that user 1 is the user of interest, and for convenience we use the notation graphics/435equ02.gif and graphics/435equ03.gif . The weight vector of the linear MMSE detector is given by

Equation 7.152

graphics/07equ152.gif


where R r is the autocorrelation matrix of the received discrete signal r :

Equation 7.153

graphics/07equ153.gif
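
For orientation, the anchored linear MMSE detector constrained to satisfy s T w = 1 (the constraint used in Section 7.9.2) conventionally takes the form sketched below in LaTeX; this is a sketch of the standard result, and the precise expressions in (7.152) and (7.153) may differ in notation.

    \[
      \mathbf{w} \;=\; \frac{\mathbf{R}_r^{-1}\,\mathbf{s}}{\mathbf{s}^{T}\mathbf{R}_r^{-1}\,\mathbf{s}} ,
      \qquad
      \mathbf{R}_r \;\triangleq\; E\{\mathbf{r}\,\mathbf{r}^{T}\} ,
    \]
    % so that s^T w = 1 and w minimizes the mean output energy w^T R_r w
    % subject to that constraint (a sketch of the standard result, not a
    % transcription of (7.152)-(7.153)).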


The output SINR is given by

Equation 7.154

graphics/07equ154.gif


where

Equation 7.155

graphics/07equ155.gif


The mean output energy associated with w , defined as the mean-square value of the output of w applied to r , is

Equation 7.156

graphics/07equ156.gif


where the last equality follows from (7.155) and the matrix inversion lemma. The mean-square error (MSE) at the output of w is

Equation 7.157

graphics/07equ157.gif
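
For reference, the rank-one (Sherman-Morrison) form of the matrix inversion lemma invoked above states, for an invertible matrix A and column vectors u and v:

    \[
      \left(\mathbf{A} + \mathbf{u}\,\mathbf{v}^{T}\right)^{-1}
      \;=\; \mathbf{A}^{-1}
      \;-\; \frac{\mathbf{A}^{-1}\mathbf{u}\,\mathbf{v}^{T}\mathbf{A}^{-1}}
                 {1 + \mathbf{v}^{T}\mathbf{A}^{-1}\mathbf{u}} ,
      \qquad 1 + \mathbf{v}^{T}\mathbf{A}^{-1}\mathbf{u} \neq 0 .
    \]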


The exponentially windowed RLS algorithm selects the weight vector w [ i ] to minimize the sum of exponentially weighted output energies:

graphics/436equ01.gif


where 0 < λ < 1 is a forgetting factor (1 - λ << 1). The purpose of λ is to ensure that data in the distant past are forgotten, so that the algorithm can track nonstationary environments. The solution to this constrained optimization problem is given by

Equation 7.158

graphics/07equ158.gif


where

Equation 7.159

graphics/07equ159.gif


A recursive procedure for updating w [ i ] is as follows:

Equation 7.160

graphics/07equ160.gif


Equation 7.161

graphics/07equ161.gif


Equation 7.162

graphics/07equ162.gif


Equation 7.163

graphics/07equ163.gif
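
As a concrete illustration, the following Python sketch implements one common realization of this exponentially windowed blind RLS recursion: a Kalman gain, a rank-one update of the inverse correlation estimate via the matrix inversion lemma, and a renormalization that keeps s T w [ i ] = 1 at every step. The function and variable names (blind_rls, Phi_inv, delta) are illustrative, and the update ordering may differ from that of (7.160)-(7.163).

    import numpy as np

    def blind_rls(r_seq, s, lam=0.995, delta=100.0):
        """Sketch of an exponentially windowed blind RLS update for the
        anchored linear MMSE detector (constraint s^T w = 1).

        r_seq : (num_symbols, N) array of received vectors r[i]
        s     : length-N spreading waveform of the user of interest
        lam   : forgetting factor lambda, with 1 - lam << 1
        delta : scale of the initial inverse correlation estimate
        """
        N = s.size
        Phi_inv = delta * np.eye(N)        # inverse of the windowed correlation estimate
        w = s / (s @ s)                    # initial weight satisfying s^T w = 1
        outputs = np.empty(len(r_seq))
        for i, r in enumerate(r_seq):
            outputs[i] = w @ r             # a priori filter output
            Pr = Phi_inv @ r
            k = Pr / (lam + r @ Pr)        # Kalman gain vector k[i]
            # rank-one update of the inverse estimate (matrix inversion lemma)
            Phi_inv = (Phi_inv - np.outer(k, Pr)) / lam
            h = Phi_inv @ s                # unnormalized weight direction
            w = h / (s @ h)                # renormalize so that s^T w[i] = 1
        return w, outputs

Re-deriving w [ i ] from the updated inverse correlation estimate at each step is one simple way to enforce the anchoring constraint exactly.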


In what follows we provide a convergence analysis for the algorithm above. In this analysis we make use of three approximations/assumptions: (a) for large i , R r [ i ] is approximated by its expected value [111, 301]; (b) the input data r [ i ] and the previous weight vector w [ i - 1] are assumed to be independent [175]; (c) certain fourth-order statistics can be approximated in terms of second-order statistics [175].

7.9.2 Convergence of the Mean Weight Vector

We start by deriving an explicit recursive relationship between w [ i ] and w [ i - 1]. Denote

Equation 7.164

graphics/07equ164.gif


Premultiplying both sides of (7.161) by s T , we have

Equation 7.165

graphics/07equ165.gif


From (7.165) we obtain

Equation 7.166

graphics/07equ166.gif


where

Equation 7.167

graphics/07equ167.gif


Substituting (7.161) and (7.166) into (7.162), we can write

Equation 7.168

graphics/07equ168.gif


where

Equation 7.169

graphics/07equ169.gif


is the a priori least-squares estimate at time i . It is shown below that

Equation 7.170

graphics/07equ170.gif


Equation 7.171

graphics/07equ171.gif


Substituting (7.161) and (7.170) into (7.168), we have

Equation 7.172

graphics/07equ172.gif


Premultiplying both sides of (7.172) by R r [ i ], we get

Equation 7.173

graphics/07equ173.gif


where we have used (7.159) and (7.169). Let q [ i ] be the weight error vector between the weight vector w [ i ] at time i and the optimal weight vector w :

Equation 7.174

graphics/07equ174.gif


Then from (7.173) we can deduce that

Equation 7.175

graphics/07equ175.gif


Therefore,

Equation 7.176

graphics/07equ176.gif


where

Equation 7.177

graphics/07equ177.gif


in which we have used (7.171) and (7.169).

It has been shown [111, 301] that for large i , the inverse autocorrelation estimate graphics/438equ01.gif behaves like a quasi-deterministic quantity when N (1 - λ) << 1. Therefore, for large i , we can replace graphics/438equ01.gif by its expected value, which is given by [7, 111, 301]

Equation 7.178

graphics/07equ178.gif
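
In the cited references, this expectation is typically approximated for the exponentially windowed correlation estimate as sketched below; (7.178) may carry an additional correction factor.

    \[
      E\!\left\{ \mathbf{R}_r^{-1}[i] \right\}
      \;\approx\; \frac{1-\lambda}{1-\lambda^{\,i+1}}\,\mathbf{R}_r^{-1}
      \;\longrightarrow\; (1-\lambda)\,\mathbf{R}_r^{-1}
      \quad (i \to \infty) ,
    \]
    % a sketch of the usual approximation, valid when N(1 - \lambda) << 1.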


Using this approximation , we have

Equation 7.179

graphics/07equ179.gif


Therefore, for large i ,

Equation 7.180

graphics/07equ180.gif


where we have used (7.170) and (7.179). For large i , R r [ i ] and R r [ i - 1] can be assumed to be almost equal, and thus approximately [111, 301]

Equation 7.181

graphics/07equ181.gif


Substituting (7.181) and (7.180) into (7.176), we then have

Equation 7.182

graphics/07equ182.gif


Equation (7.182) is a recursive equation that the weight error vector q [ i ] satisfies for large i .

In what follows we assume that the present input r [ i ] and the previous weight error q [ i - 1] are independent. In this interference suppression application, the assumption holds when the interference consists only of MAI and white noise. If NBI is also present, the assumption does not hold exactly, but it is nevertheless adopted, as is common practice in the analysis of adaptive algorithms [111, 175, 301]. Taking expectations on both sides of (7.182), we have

graphics/439equ01.gif


where we have used the facts that s T w = s T w [ i ] = 1, s T q [ i ] = s T w [ i ] - s T w = 0, and

Equation 7.183

graphics/07equ183.gif


Therefore, the expected weight error vector always converges to zero, and this convergence is independent of the eigenvalue distribution.

Finally, we verify (7.170) and (7.171). Postmultiplying both sides of (7.163) by r [ i ], we have

Equation 7.184

graphics/07equ184.gif


On the other hand, (7.160) can be rewritten as

Equation 7.185

graphics/07equ185.gif


Equation (7.170) is obtained by comparing (7.184) and (7.185).

Multiplying both sides of (7.166) by s T k [ i ], we can write

Equation 7.186

graphics/07equ186.gif


and (7.167) can be rewritten as

Equation 7.187

graphics/07equ187.gif


Equation (7.171) is obtained by comparing (7.186) and (7.187).

7.9.3 Weight Error Correlation Matrix

We proceed to derive a recursive relationship for the time evolution of the correlation matrix of the weight error vector q [ i ], which is the key to analyzing the convergence of the MSE. Let K [ i ] be the weight error correlation matrix at time i . Taking the expectation of the outer product of the weight error vector q [ i ], we get

Equation 7.188

graphics/07equ188.gif


We next compute the four expectations appearing on the right-hand side of (7.188).

First term

Equation 7.189

graphics/07equ189.gif


Equation 7.190

graphics/07equ190.gif


Equation 7.191

graphics/07equ191.gif


Equation 7.192

graphics/07equ192.gif


Equation 7.193

graphics/07equ193.gif


where in (7.189) we have used (7.183); in (7.193) we have used (7.152); in (7.190) and (7.192) we have used the fact that graphics/441equ01.gif ; and in (7.191) we have used the following fact, which is derived below:

Equation 7.194

graphics/07equ194.gif


Second term

Equation 7.195

graphics/07equ195.gif


where we have used (7.183) and the following fact, which is shown below:

Equation 7.196

graphics/07equ196.gif


Therefore, the second term is a transient term.

Third term

The third term is the transpose of the second term, and therefore it is also a transient term.

Fourth term

Equation 7.197

graphics/07equ197.gif


Equation 7.198

graphics/07equ198.gif


where in (7.198) we have used (7.152), and in (7.197) we have used the following fact, which is derived below:

Equation 7.199

graphics/07equ199.gif


where graphics/441equ02.gif is the mean output energy defined in (7.156).

Now combining these four terms in (7.188), we obtain (for large i )

Equation 7.200

graphics/07equ200.gif
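
One way to check a recursion of this kind numerically is to estimate K [ i ] = E { q [ i ] q [ i ] T } by averaging over independent runs of the adaptation. The Python sketch below assumes that the weight trajectories from runs of the blind_rls sketch given earlier have been collected (one weight vector per symbol per run) and stacked into a single array; the function name and array layout are illustrative.

    import numpy as np

    def weight_error_correlation(W_runs, w_opt):
        """Monte Carlo estimate of K[i] = E{ q[i] q[i]^T }, with q[i] = w[i] - w.

        W_runs : (num_runs, num_symbols, N) weight trajectories collected from
                 independent runs of the adaptation (e.g., the blind_rls sketch)
        w_opt  : length-N optimal linear MMSE weight vector
        Returns an array of shape (num_symbols, N, N) holding the estimated K[i].
        """
        Q = W_runs - w_opt                           # weight error q[i], every run and time
        # average the outer products q[i] q[i]^T over the independent runs
        return np.einsum('rin,rim->inm', Q, Q) / W_runs.shape[0]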


Finally, we derive (7.194), (7.196), and (7.199).

Derivation of (7.194)

We use the notation [ ·] mn to denote the ( m, n )th entry of a matrix and [ ·] k to denote the k th entry of a vector. Then

Equation 7.201

graphics/07equ201.gif


Next we use the Gaussian moment factoring theorem to approximate the fourth-order moment introduced in (7.201). The Gaussian moment factoring theorem states that if z 1 , z 2 , z 3 , and z 4 are four samples of a zero-mean, real Gaussian process, then [175]

Equation 7.202

graphics/07equ202.gif
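
In its standard form (Isserlis' theorem for zero-mean jointly Gaussian variables), this factoring identity reads:

    \[
      E\{z_1 z_2 z_3 z_4\}
      \;=\; E\{z_1 z_2\}\,E\{z_3 z_4\}
      \;+\; E\{z_1 z_3\}\,E\{z_2 z_4\}
      \;+\; E\{z_1 z_4\}\,E\{z_2 z_3\} .
    \]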


Using this approximation, we proceed with (7.201):

Equation 7.203

graphics/07equ203.gif


Therefore,

graphics/443equ01.gif


where in the last equality we used (7.183) and the following fact:

Equation 7.204

graphics/07equ204.gif


Derivation of (7.196)

Similarly, applying the Gaussian moment factoring approximation, we obtain

graphics/443equ02.gif


since E { q [ i ]} → 0 as i → ∞.

Derivation of (7.199)

Using the Gaussian moment factoring formula, we obtain

graphics/443equ03.gif


7.9.4 Convergence of MSE

Next we consider the convergence of the output MSE. Let graphics/443equ04.gif denote the mean output energy at time i , and let the MSE at time i be defined as follows:

Equation 7.205

graphics/07equ205.gif


Equation 7.206

graphics/07equ206.gif


Since the MSE at time i and graphics/443equ04.gif differ only by a constant P , we can focus on the behavior of the mean output energy graphics/443equ04.gif :

Equation 7.207

graphics/07equ207.gif


Since E { q [ i ]} → 0 as i → ∞, the last term in (7.207) is a transient term. Therefore, for large graphics/444equ01.gif , where graphics/444equ02.gif is the average excess MSE at time i . We are interested in the asymptotic behavior of this excess MSE. Premultiplying both sides of (7.200) by R r and then taking the trace on both sides, we obtain

Equation 7.208

graphics/07equ208.gif


Since λ² + (1 - λ)² < [λ + (1 - λ)]² = 1, the term tr{ R r K [ i ]} converges. The steady-state excess mean-square error is then given by

Equation 7.209

graphics/07equ209.gif


Again we see that the convergence of the MSE and the steady-state misadjustment are independent of the eigenvalue distribution of the data autocorrelation matrix, in contrast to the situation for the LMS version of the blind adaptive algorithm [183].

7.9.5 Steady-State SINR

We now consider the steady-state output SINR of the RLS blind adaptive algorithm. At time i the mean output value is

Equation 7.210

graphics/07equ210.gif


The variance of the output at time i is

Equation 7.211

graphics/07equ211.gif


Let graphics/444equ03.gif . Substituting (7.209) and (7.156) into (7.207), we get

Equation 7.212

graphics/07equ212.gif


Therefore the steady-state SINR is given by

Equation 7.213

graphics/07equ213.gif


where SINR * is the optimum SINR value given in (7.154).
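
As an illustration, the following Python sketch computes the output SINR of a fixed weight vector as the ratio of the desired-signal power to the residual interference-plus-noise power at the filter output, in the spirit of (7.154) and (7.210)-(7.211); the normalization may differ from the exact definitions in the text, and the function name is illustrative.

    import numpy as np

    def output_sinr(w, s, A1, R):
        """Sketch of the output SINR of a linear detector w.

        w  : length-N weight vector
        s  : length-N spreading waveform of user 1
        A1 : received amplitude of user 1
        R  : N x N autocorrelation matrix of the received vector r
        """
        desired = (A1 * (w @ s)) ** 2       # power of the desired component at the output
        total = w @ R @ w                   # mean output energy w^T R_r w
        return desired / (total - desired)  # interference-plus-noise power in the denominator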

7.9.6 Comparison with Training-Based RLS Algorithm

We now compare the preceding results with the analogous results for the conventional RLS algorithm, in which the data symbols b [ i ] are assumed to be known to the receiver. This condition can be met by using either a training sequence or decision feedback. In this case, the exponentially windowed RLS algorithm chooses w [ i ] to minimize the cost function

Equation 7.214

graphics/07equ214.gif


The RLS adaptation rule in this case is given by [175]

Equation 7.215

graphics/07equ215.gif


Equation 7.216

graphics/07equ216.gif


where e p [ i ] is the prediction error at time i and k [ i ] is the Kalman gain vector defined in (7.160). Using the results of [111], we conclude that the mean weight vector w [ i ] converges to w (i.e., E { w [ i ]} → w as i → ∞), where w is the optimal linear MMSE solution:

Equation 7.217

graphics/07equ217.gif
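
For comparison with the blind recursion sketched earlier, the following Python sketch implements the standard training-directed exponentially windowed RLS update, in the spirit of (7.215)-(7.216): the known symbols b [ i ] drive the prediction error that updates the weights. Names and the update ordering are illustrative.

    import numpy as np

    def training_rls(r_seq, b_seq, lam=0.995, delta=100.0):
        """Sketch of the training-directed exponentially windowed RLS update.

        r_seq : (num_symbols, N) array of received vectors r[i]
        b_seq : (num_symbols,) known data symbols of the user of interest
        lam   : forgetting factor lambda
        delta : scale of the initial inverse correlation estimate
        """
        N = r_seq.shape[1]
        Phi_inv = delta * np.eye(N)
        w = np.zeros(N)
        for r, b in zip(r_seq, b_seq):
            Pr = Phi_inv @ r
            k = Pr / (lam + r @ Pr)        # Kalman gain vector k[i]
            e_p = b - w @ r                # a priori prediction error e_p[i]
            w = w + k * e_p                # weight update driven by the known symbol
            Phi_inv = (Phi_inv - np.outer(k, Pr)) / lam
        return w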


The MSE graphics/445equ01.gif also converges: graphics/445equ02.gif as i → ∞, where the limiting value is the mean-square error of the optimum filter w , given by

Equation 7.218

graphics/07equ218.gif


The steady-state excess mean-square error is given by [111]

Equation 7.219

graphics/07equ219.gif


where we have used the approximation graphics/445equ03.gif , since 1 - λ << 1 and N >> 1. Next we consider the steady-state output SINR of this adaptation rule, in which the data symbols b [ i ] are known. At time i , the mean output value is

Equation 7.220

graphics/07equ220.gif


where the last equality follows from (7.156). The output MSE at time i is

Equation 7.221

graphics/07equ221.gif


Therefore,

Equation 7.222

graphics/07equ222.gif


Using (7.220) and (7.222), after some manipulation, we have

Equation 7.223

graphics/07equ223.gif


Therefore, the output SINR in the steady state is given by

Equation 7.224

graphics/07equ224.gif



