2.8 Appendix


2.8.1 Derivations for Section 2.3.3

Derivation of Equation (2.61)

Recall that the RLS algorithm for adapting the blind linear MMSE detector is as follows:

Equation (2.247): [equation graphic 02equ247.gif]

Equation (2.248): [equation graphic 02equ248.gif]

Equation (2.249): [equation graphic 02equ249.gif]

Equation (2.250): [equation graphic 02equ250.gif]

Equation (2.251): [equation graphic 02equ251.gif]

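Because the equation images (2.247)-(2.251) are not reproduced here, the following LaTeX block sketches the conventional exponentially windowed RLS recursions for the blind linear MMSE detector, with forgetting factor λ, received vector r[i], and signature s_1 of user 1; the exact correspondence to the equation numbers above is an assumption, not a transcription.

```latex
% Plausible form of the RLS recursions referenced above (assumed, standard):
\begin{align*}
\mathbf{k}[i] &= \frac{\boldsymbol{\Phi}^{-1}[i-1]\,\mathbf{r}[i]}
  {\lambda + \mathbf{r}^{T}[i]\,\boldsymbol{\Phi}^{-1}[i-1]\,\mathbf{r}[i]},\\
\boldsymbol{\Phi}^{-1}[i] &= \frac{1}{\lambda}\left(\boldsymbol{\Phi}^{-1}[i-1]
  - \mathbf{k}[i]\,\mathbf{r}^{T}[i]\,\boldsymbol{\Phi}^{-1}[i-1]\right),\\
\mathbf{m}_1[i] &= \frac{\boldsymbol{\Phi}^{-1}[i]\,\mathbf{s}_1}
  {\mathbf{s}_1^{T}\,\boldsymbol{\Phi}^{-1}[i]\,\mathbf{s}_1}.
\end{align*}
```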

We first derive an explicit recursive relationship between m_1[i] and m_1[i-1]. Define

Equation (2.252): [equation graphic 02equ252.gif]


Premultiplying both sides of (2.249) by graphics/093fig01.gif, we get

Equation (2.253): [equation graphic 02equ253.gif]


From (2.253), we obtain

Equation (2.254): [equation graphic 02equ254.gif]


where

Equation (2.255): [equation graphic 02equ255.gif]


Substituting (2.249) and (2.254) into (2.250), we get

Equation (2.256): [equation graphic 02equ256.gif]


where

Equation (2.257): [equation graphic 02equ257.gif]


is the a priori least-squares estimate at time i. It is shown below that

Equation (2.258): [equation graphic 02equ258.gif]

Equation (2.259): [equation graphic 02equ259.gif]


Substituting (2.248) and (2.258) into (2.256), we get

Equation (2.260): [equation graphic 02equ260.gif]


Therefore, by (2.260) we have

Equation (2.261): [equation graphic 02equ261.gif]


where v[i] is defined in (2.56). Hence, from (2.261) we get

Equation (2.262): [equation graphic 02equ262.gif]


Finally, we derive (2.258) and (2.259). Postmultiplying both sides of (2.251) by r[i], we get

Equation (2.263): [equation graphic 02equ263.gif]


On the other hand, (2.247) can be rewritten as

Equation (2.264): [equation graphic 02equ264.gif]


Equation (2.258) is obtained by comparing (2.263) and (2.264). Multiplying both sides of (2.254) by graphics/093fig01.gif k[i], we get

Equation (2.265): [equation graphic 02equ265.gif]


Equation (2.255) can be rewritten as

Equation (2.266): [equation graphic 02equ266.gif]


Equation (2.259) is obtained by comparing (2.265) and (2.266).
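As a numerical companion to this derivation, the Python sketch below checks that the rank-one recursion for Φ⁻¹ tracks direct inversion of the exponentially windowed autocorrelation matrix. It assumes the conventional RLS form sketched earlier; the dimensions, forgetting factor, signature s1, and the white-noise stand-in for r[i] are illustrative, not from the text.

```python
import numpy as np

# Sanity check (a sketch under the assumed standard RLS form): the rank-one
# update of Phi^{-1} should agree with directly inverting Phi[i].
rng = np.random.default_rng(0)
N, lam = 8, 0.995
s1 = np.eye(N)[:, 0]                    # illustrative signature of user 1
P = 100.0 * np.eye(N)                   # Phi^{-1}[0], large initialization
Phi = np.linalg.inv(P)
for _ in range(200):
    r = rng.standard_normal(N)          # stand-in for the received vector r[i]
    k = P @ r / (lam + r @ P @ r)       # gain vector k[i]
    P = (P - np.outer(k, r @ P)) / lam  # Phi^{-1}[i] via the matrix inversion lemma
    Phi = lam * Phi + np.outer(r, r)    # direct update of Phi[i]
m1 = P @ s1 / (s1 @ P @ s1)             # blind linear MMSE detector m_1[i]
assert np.allclose(P @ Phi, np.eye(N), atol=1e-6)
```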

Derivation of Equations (2.62)-(2.69)

Suppose that an application of the rotation matrix Q [ i ] yields the following form:

Equation (2.267): [equation graphic 02equ267.gif]


Then, because of the orthogonality property of Q[i] (i.e., graphics/095fig02.gif), taking the outer products of each side of (2.267) with their respective Hermitians, we get the following identities:

Equation (2.268): [equation graphic 02equ268.gif]

Equation (2.269): [equation graphic 02equ269.gif]

Equation (2.270): [equation graphic 02equ270.gif]


Associating A_1 with the first N columns of the partitioned matrix on the left-hand side of (2.62), and B_1 with the first N columns of the partitioned matrix on the right-hand side of (2.62), we find that (2.268), (2.269), and (2.270) yield

Equation (2.271): [equation graphic 02equ271.gif]

Equation (2.272): [equation graphic 02equ272.gif]

Equation (2.273): [equation graphic 02equ273.gif]

Equation (2.274): [equation graphic 02equ274.gif]

Equation (2.275): [equation graphic 02equ275.gif]

Equation (2.276): [equation graphic 02equ276.gif]


A comparison of (2.271)-(2.273) with (2.54)-(2.56) shows that C[i], u[i], and v[i] in (2.62) are the correct updated quantities at time i. Moreover, (2.67) follows from (2.274) and (2.57), (2.68) follows from (2.275) and (2.59), and (2.69) follows from (2.276) and (2.262).
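The algebra behind (2.268)-(2.270) is the fact that a unitary rotation leaves outer products unchanged: if B = A Q^H with Q unitary, then B B^H = A A^H, so matched partitions of the pre- and post-arrays have equal Gram blocks. The Python sketch below illustrates this with a random unitary Q; the specific pre- and post-arrays of (2.62) are in the unrendered equations, so this is a generic illustration.

```python
import numpy as np

# Generic illustration of the identity behind (2.268)-(2.270): a unitary
# rotation Q preserves the outer product of an array with its Hermitian.
rng = np.random.default_rng(2)
n = 5
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(Z)                  # random unitary matrix
A = rng.standard_normal((3, n)) + 1j * rng.standard_normal((3, n))
B = A @ Q.conj().T                      # post-array after the rotation
assert np.allclose(A @ A.conj().T, B @ B.conj().T)
```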

2.8.2 Proofs for Section 2.4.4

Proof of Lemma 2.3

Denote

[equation graphic 095fig01.gif]


Note that the eigendecomposition of H is given by

Equation (2.277): [equation graphic 02equ277.gif]


Then the Moore-Penrose generalized inverse [189] of the matrix H is given by

Equation (2.278): [equation graphic 02equ278.gif]


On the other hand, the Moore-Penrose generalized inverse H† of a matrix H is the unique matrix that satisfies [189]: (a) HH† and H†H are symmetric; (b) HH†H = H; and (c) H†HH† = H†. Next we show that G = H† by verifying these three conditions. We first verify condition (a). Using (2.106), we have

Equation (2.279): [equation graphic 02equ279.gif]


where the second equality follows from the facts that W^T W = I_N and S^T S = V^T V = V V^T = I_K. Since the N x N diagonal matrix S S^T = diag(I_K, 0), it follows from (2.279) that HG is symmetric. Similarly, GH is also symmetric. Next we verify condition (b).

Equation (2.280): [equation graphic 02equ280.gif]


where in the second equality the following facts are used: W^T W = I_N, S^T S = I_K, and V^T V = V V^T = I_K; the third equality follows from the fact that S S^T S = S. Condition (c) can be verified similarly (i.e., GHG = G). Therefore, we have

Equation (2.281): [equation graphic 02equ281.gif]


Now (2.107) follows immediately from (2.281) and the fact that U^T U = U U^T = I_N.
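The three defining conditions invoked in this proof are easy to verify numerically. The sketch below uses numpy.linalg.pinv on a generic rank-deficient symmetric matrix (not the specific H of this section) to illustrate exactly the properties (a)-(c) used above.

```python
import numpy as np

# Numerical check of the Moore-Penrose conditions used in the proof of
# Lemma 2.3, on a generic rank-K symmetric matrix (not the specific H above).
rng = np.random.default_rng(1)
N, K = 6, 3
A = rng.standard_normal((N, K))
H = A @ A.T                           # symmetric, rank K
G = np.linalg.pinv(H)                 # Moore-Penrose generalized inverse
assert np.allclose(H @ G, (H @ G).T)  # (a) HG symmetric
assert np.allclose(G @ H, (G @ H).T)  # (a) GH symmetric
assert np.allclose(H @ G @ H, H)      # (b) HGH = H
assert np.allclose(G @ H @ G, G)      # (c) GHG = G
```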

2.8.3 Proofs for Section 2.5.2

Some Useful Lemmas

We first list some lemmas that will be used in proving the results in Section 2.5.2. A random matrix is said to be Gaussian distributed if the joint distribution of all its elements is Gaussian. First we have the following vector form of the central limit theorem.

Lemma 2.4: (Theorem 1.9.1B in [443]) Let {x_i} be i.i.d. random vectors with mean m and covariance matrix Σ. Then

$\sqrt{M}\,\Bigl(\frac{1}{M}\sum_{i=1}^{M}\mathbf{x}_i - \mathbf{m}\Bigr) \xrightarrow{d} \mathcal{N}(\mathbf{0},\,\boldsymbol{\Sigma}), \qquad M \to \infty.$


Next we establish that the sample autocorrelation matrix graphics/096fig01.gif given by (2.122) is asymptotically Gaussian as the sample size M → ∞.

Lemma 2.5: Denote

Equation (2.282): [equation graphic 02equ282.gif]

Equation (2.283): [equation graphic 02equ283.gif]

Equation (2.284): [equation graphic 02equ284.gif]


Then graphics/097fig01.gif converges in distribution to a Gaussian matrix with mean 0 and an N^2 x N^2 covariance matrix whose elements are specified by

Equation (2.285): [equation graphic 02equ285.gif]


Proof: Since graphics/097fig02.gif given by (2.285) has graphics/097fig03.gif, and it is a sum of i.i.d. terms (r[i] r[i]^T), by Lemma 2.4 it is asymptotically Gaussian, with an N^2 x N^2 covariance matrix whose elements are given by the covariance of the zero-mean random matrix (r[i] r[i]^T). To calculate this covariance, note that (for notational convenience, in what follows we drop the time index i)

Equation (2.286): [equation graphic 02equ286.gif]


We have

Equation (2.287): [equation graphic 02equ287.gif]


where the last equality follows from the fact that

Equation (2.288): [equation graphic 02equ288.gif]


Note that the last term of (2.285) is due to the nonnormality of the received signal r[i]. If the signal were Gaussian, the result would consist of the first two terms of (2.285) only (compare this result with Theorem 3.4.4 in [18]). Using a different modulation scheme (other than BPSK) would result in a different form for the last term in (2.285).
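A quick Monte Carlo experiment makes Lemma 2.5 concrete. The sketch below simulates a single-user BPSK model r = s b + n (all dimensions, the signature, and the noise level are illustrative assumptions) and examines one entry of √M(Ĉ − C); its empirical mean is near zero and its variance stabilizes, consistent with asymptotic normality.

```python
import numpy as np

# Monte Carlo sketch of Lemma 2.5: sqrt(M)*(C_hat - C) is approximately
# zero-mean Gaussian. The BPSK model and dimensions are illustrative.
rng = np.random.default_rng(3)
N, M, trials, sigma = 4, 2000, 500, 0.5
s = np.ones(N) / np.sqrt(N)                # hypothetical unit-energy signature
C = np.outer(s, s) + sigma**2 * np.eye(N)  # true autocorrelation matrix
samples = []
for _ in range(trials):
    b = rng.choice([-1.0, 1.0], size=M)    # i.i.d. BPSK symbols
    r = np.outer(s, b) + sigma * rng.standard_normal((N, M))
    C_hat = (r @ r.T) / M                  # sample autocorrelation, cf. (2.122)
    samples.append(np.sqrt(M) * (C_hat - C)[0, 0])
print(np.mean(samples), np.var(samples))   # mean near 0; variance per (2.285)
```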

In what follows we make frequent use of the differential of a matrix function (cf. [421], Chap. 14). Consider a function graphics/098fig01.gif. Recall that the differential of f at a point x is a linear function graphics/098fig02.gif such that

Equation (2.289): [equation graphic 02equ289.gif]
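For completeness, the defining property that (2.289) presumably expresses is the standard first-order expansion (this reconstruction is an assumption based on the surrounding text):

```latex
% Standard defining property of the differential (assumed content of (2.289)):
f(\mathbf{x} + \Delta\mathbf{x}) - f(\mathbf{x})
  - \mathcal{L}_f(\mathbf{x};\,\Delta\mathbf{x})
  = o\!\left(\lVert \Delta\mathbf{x} \rVert\right),
\qquad \lVert \Delta\mathbf{x} \rVert \to 0.
```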


If the differential exists, it is given by L_f(x; Δx) = T(x) Δx, where graphics/098fig03.gif. Let y = f(x) and consider its differential at x. Denote graphics/098fig04.gif. Hence for fixed x, Δy is a function of Δx; and for fixed x, if Δx is random, so is Δy. We have the following lemma regarding the asymptotic distribution of a function of a sequence of asymptotically Gaussian vectors.

Lemma 2.6: (Theorem 3.3A in [443]) Suppose that graphics/099fig01.gif is asymptotically Gaussian; that is,

$\sqrt{M}\,[\mathbf{x}(M) - \mathbf{x}] \xrightarrow{d} \mathcal{N}(\mathbf{0},\,\mathbf{C}_x), \qquad M \to \infty.$


Let graphics/099fig02.gif be a function. Denote y(M) = f[x(M)]. Suppose that f has a nonzero differential L_f(x; Δx) = T(x) Δx at x. Denote graphics/099fig04.gif and Δy(M) = T(x) Δx(M). Then

Equation (2.290): [equation graphic 02equ290.gif]

where

Equation (2.291): [equation graphic 02equ291.gif]

Equation (2.292): [equation graphic 02equ292.gif]


To calculate C_y we can use either (2.291) or (2.292). When dealing with functions of matrices, however, it is usually easier to use (2.292). In what follows we make use of the following identities of matrix differentials:

Equation (2.293): [equation graphic 02equ293.gif]

Equation (2.294): [equation graphic 02equ294.gif]

Equation (2.295): [equation graphic 02equ295.gif]
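For reference, the matrix-differential identities most often used in such computations, and plausibly the content of (2.293)-(2.295), are the following standard rules:

```latex
% Standard matrix-differential identities (assumed content of (2.293)-(2.295)):
\begin{align*}
\Delta(\mathbf{X}\mathbf{Y}) &= (\Delta\mathbf{X})\,\mathbf{Y}
   + \mathbf{X}\,(\Delta\mathbf{Y}),\\
\Delta(\mathbf{X}^{-1}) &= -\,\mathbf{X}^{-1}(\Delta\mathbf{X})\,\mathbf{X}^{-1},\\
\Delta(\mathbf{X}^{T}) &= (\Delta\mathbf{X})^{T}.
\end{align*}
```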


Finally, we have the following lemma regarding the differentials of the eigencomponents of a symmetric matrix. It is a generalization of Theorem 13.5.1 in [18]. Its proof can be found in [197].

Lemma 2.7: Let the N x N symmetric matrix C have an eigendecomposition graphics/099fig06.gif, where the eigenvalues satisfy graphics/099fig07.gif. Let ΔC be a symmetric variation of C, and denote the perturbed matrix by graphics/099fig08.gif. Let T be a unitary transformation of this perturbed matrix, as in

Equation (2.296): [equation graphic 02equ296.gif]


Denote the eigendecomposition of T as

Equation (2.297): [equation graphic 02equ297.gif]


(Note that if the perturbed matrix equals C itself, then W = I_N and the perturbed eigenvalue matrix reduces to Λ.) The differential of the eigenvalue matrix at Λ, and the differential of W at I_N, as functions of graphics/099fig09.gif, are given, respectively, by

Equation (2.298): [equation graphic 02equ298.gif]

Equation (2.299): [equation graphic 02equ299.gif]

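The first-order behavior that Lemma 2.7 formalizes can be checked numerically. In the sketch below, the unperturbed matrix is diagonal with distinct eigenvalues, so to first order the eigenvalue change equals the diagonal of the symmetric perturbation (the standard result that (2.298) presumably states).

```python
import numpy as np

# First-order eigenvalue perturbation check in the spirit of Lemma 2.7/(2.298).
rng = np.random.default_rng(4)
lam = np.array([5.0, 4.0, 3.0, 2.0, 1.0])          # distinct eigenvalues
L = np.diag(lam)
dT = rng.standard_normal((5, 5))
dT = (dT + dT.T) / 2                               # symmetric variation
eps = 1e-6
lam_pert = np.linalg.eigvalsh(L + eps * dT)[::-1]  # descending order
assert np.allclose((lam_pert - lam) / eps, np.diag(dT), atol=1e-4)
```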

Proof of Theorem 2.1

DMI Blind Detector Consider the function graphics/100fig01.gif. The differential of graphics/100fig02.gif at C_r is given by

Equation (2.300): [equation graphic 02equ300.gif]


where graphics/100fig03.gif. Then, according to Lemma 2.6, graphics/100fig04.gif is asymptotically Gaussian as M → ∞, with zero mean and covariance matrix given by (2.292) [4]

[4] We do not need the limit here, since the covariance matrix of graphics/100fig05.gif is independent of M.

Equation (2.301): [equation graphic 02equ301.gif]


Now, by Lemma 2.5, we have

Equation (2.302): [equation graphic 02equ302.gif]


Writing (2.302) in matrix form, we have

Equation (2.303): [equation graphic 02equ303.gif]


with

[equation graphic 100equ01.gif]


The eigendecomposition of C r is

Equation (2.304): [equation graphic 02equ304.gif]


Substituting (2.303) and (2.304) into (2.301), we get

[equation graphic 101equ01.gif]


where the last equality follows from the fact that graphics/101fig10.gif.

Subspace Blind Detector We will prove the following more general proposition, which will be used in later proofs. The part of Theorem 2.1 for the subspace blind detector follows with v = s_1.

Proposition 2.6: Let graphics/101fig01.gif be the weight vector of a detector, graphics/101fig02.gif, and let graphics/101fig03.gif be the weight vector of the corresponding estimated detector. Then

[equation graphic 101equ02.gif]

with

Equation (2.305): [equation graphic 02equ305.gif]


where

Equation (2.306): [equation graphic 02equ306.gif]

Equation (2.307): [equation graphic 02equ307.gif]


Proof: Consider the function graphics/101fig04.gif. By Lemma 2.6, graphics/101fig05.gif is asymptotically Gaussian as M → ∞, with zero mean and covariance matrix given by graphics/101fig06.gif, where Δw_1 is the differential of graphics/101fig07.gif at (U_s, Λ_s). Denote U = [U_s U_n]. Define

Equation (2.308): [equation graphic 02equ308.gif]


Since T is a unitary transformation of graphics/101fig08.gif, its eigenvalues are the same as those of graphics/101fig09.gif. Hence its eigendecomposition can be written as

Equation (2.309): [equation graphic 02equ309.gif]


where graphics/102fig01.gif are the eigenvectors of T. From (2.308) and (2.309), we have

Equation (2.310): [equation graphic 02equ310.gif]


Thus we have

Equation (2.311): [equation graphic 02equ311.gif]


The differential in (2.311) at (I_N, Λ) is given by

Equation (2.312): [equation graphic 02equ312.gif]


where E_s is composed of the first K columns of I_N. Using Lemma 2.7, after some manipulations, we have

Equation (2.313): [equation graphic 02equ313.gif]


with

Equation (2.314): [equation graphic 02equ314.gif]


where we have used the fact that ΔT is symmetric (i.e., [ΔT]_{i,j} = [ΔT]_{j,i}). Denote

Equation (2.315): [equation graphic 02equ315.gif]


Then C_y = U^T C_r U = Λ. Moreover, we have ΔT = ΔC_y. Since E{ΔT} = 0, by Lemma 2.5, for 1 ≤ i, j ≤ N,

Equation (2.316): [equation graphic 02equ316.gif]


Using (2.313) and (2.316), we have

Equation (2.317): [equation graphic 02equ317.gif]


where (2.317) follows from the fact that

Equation (2.318): [equation graphic 02equ318.gif]


since it is assumed that graphics/104fig01.gif; a similar relationship holds for U_s^T a. Writing (2.317) in matrix form, we obtain

Equation (2.319): [equation graphic 02equ319.gif]


where

Equation (2.320): [equation graphic 02equ320.gif]

Equation (2.321): [equation graphic 02equ321.gif]

Equation (2.322): [equation graphic 02equ322.gif]


Finally, by (2.311), graphics/104fig02.gif. Substituting (2.319) into this expansion, we obtain (2.305).
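For concreteness, the following Python sketch builds the subspace blind detector that Proposition 2.6 analyzes, assuming the standard construction w_1 = U_s Λ_s^{-1} U_s^T s_1 from Section 2.4; the dimensions, amplitudes, and noise level are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the subspace blind detector analyzed in Proposition 2.6,
# assuming the standard construction w1 = U_s Lambda_s^{-1} U_s^T s1.
rng = np.random.default_rng(5)
N, K, M, sigma = 16, 4, 5000, 0.4
S = rng.standard_normal((N, K))
S /= np.linalg.norm(S, axis=0)                 # unit-energy signatures
A = np.diag([1.0, 0.8, 0.9, 1.1])              # user amplitudes
B = rng.choice([-1.0, 1.0], size=(K, M))       # BPSK symbols
r = S @ A @ B + sigma * rng.standard_normal((N, M))
C_hat = r @ r.T / M                            # sample autocorrelation
lam, U = np.linalg.eigh(C_hat)                 # eigenvalues in ascending order
Us, Ls = U[:, -K:], lam[-K:]                   # estimated signal subspace
w1 = Us @ ((Us.T @ S[:, 0]) / Ls)              # subspace blind detector
print(np.mean(np.sign(w1 @ r) == B[0]))        # detection accuracy for user 1
```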

Proof of Corollary 2.1

First we compute the term given by (2.120). Using (2.304), (2.128), and the fact that graphics/104fig03.gif, we have

Equation (2.323): [equation graphic 02equ323.gif]


with

Equation (2.324): [equation graphic 02equ324.gif]

Equation (2.325): [equation graphic 02equ325.gif]

Equation (2.326): [equation graphic 02equ326.gif]

Equation (2.327): [equation graphic 02equ327.gif]


Hence we have

Equation (2.328): [equation graphic 02equ328.gif]


Next note that the linear MMSE detector can also be written in terms of R as [520]

Equation (2.329): [equation graphic 02equ329.gif]


Therefore, we have

Equation (2.330): [equation graphic 02equ330.gif]

Equation (2.331): [equation graphic 02equ331.gif]


By (2.130), for the DMI blind detector we have graphics/105fig01.gif; and for the subspace blind detector,

Equation (2.332): [equation graphic 02equ332.gif]


where we have used the fact that the decorrelating detector can be written as [549]

Equation (2.333): [equation graphic 02equ333.gif]


Finally, substituting (2.328)-(2.332) into (2.119), we obtain (2.132).
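The R-domain forms invoked in (2.329) and (2.333) can be checked numerically. The sketch below assumes the standard expressions m_1 ∝ S(R + σ²A^{-2})^{-1}e_1 for the linear MMSE detector and d_1 ∝ S R^{-1} e_1 for the decorrelating detector (standard in the multiuser-detection literature; the unrendered equations may differ in scaling).

```python
import numpy as np

# Check of the R-domain detector forms (assumed standard expressions):
# MMSE:          m1 ∝ S (R + sigma^2 A^{-2})^{-1} e1  equals  C^{-1} s1,
# decorrelating: d1 = S R^{-1} e1  nulls all multiuser interference.
rng = np.random.default_rng(6)
N, K, sigma = 16, 4, 0.3
S = rng.standard_normal((N, K))
S /= np.linalg.norm(S, axis=0)              # unit-energy signatures
A = np.diag([1.0, 0.7, 0.9, 1.2])           # user amplitudes
R = S.T @ S                                 # signature cross-correlation matrix
e1 = np.eye(K)[:, 0]
C = S @ A @ A @ S.T + sigma**2 * np.eye(N)  # received autocorrelation matrix
m1 = S @ np.linalg.solve(R + sigma**2 * np.linalg.inv(A @ A), e1)
w = np.linalg.solve(C, S[:, 0])             # direct form C^{-1} s1
assert np.allclose(m1 / np.linalg.norm(m1), w / np.linalg.norm(w))
d1 = S @ np.linalg.solve(R, e1)             # decorrelating detector
assert np.allclose(S.T @ d1, e1)            # zero multiuser interference
```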

SINR for Equicorrelated Signals

In this case, R is given by

Equation (2.334): [equation graphic 02equ334.gif]


where 1 is the all-one K-vector. It is straightforward to verify the following eigenstructure of R:

Equation (2.335): [equation graphic 02equ335.gif]


with

Equation (2.336): [equation graphic 02equ336.gif]

Equation (2.337): [equation graphic 02equ337.gif]
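Since the equation images are unavailable, the following block records the standard structure of an equicorrelated matrix, which is presumably what (2.334)-(2.337) express (ρ denotes the common cross-correlation; the identification with the unrendered equations is an assumption):

```latex
% Standard equicorrelated structure (assumed content of (2.334)-(2.337)):
\mathbf{R} = (1-\rho)\,\mathbf{I}_K + \rho\,\mathbf{1}\mathbf{1}^{T},
\qquad
\mathbf{R}\,\frac{\mathbf{1}}{\sqrt{K}}
  = \bigl[1 + (K-1)\rho\bigr]\frac{\mathbf{1}}{\sqrt{K}},
\qquad
\mathbf{R}\,\mathbf{v} = (1-\rho)\,\mathbf{v}
  \quad\text{for all } \mathbf{v}\perp\mathbf{1}.
```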


Since A^2 = A^2 I_K (i.e., all users have equal amplitude A), we have

Equation (2.338): [equation graphic 02equ338.gif]


Similarly, we obtain

Equation (2.339): [equation graphic 02equ339.gif]

Equation (2.340): [equation graphic 02equ340.gif]


Substituting (2.338)-(2.340) into (2.132)-(2.135) and defining

Equation (2.341): [equation graphic 02equ341.gif]

Equation (2.342): [equation graphic 02equ342.gif]

Equation (2.343): [equation graphic 02equ343.gif]

Equation (2.344): [equation graphic 02equ344.gif]


we obtain expression (2.143) for the average output SINRs of the DMI blind detector and the subspace blind detector.


