5.2 Adaptive Array Processing in TDMA Systems


5.2.1 Signal Model

A wireless cellular communication system employing adaptive antenna arrays at the base station is shown in Fig. 5.1, where a base station with P antenna elements receives signals from K users. The K users operate in the same bandwidth at the same time. One of the signals is destined for this base station; the other signals are destined for other base stations, and they interfere with the desired signal; that is, they constitute co-channel interference. Note that although here we consider the uplink scenario (mobile to base), where antenna arrays are most likely to be employed, the adaptive array techniques discussed in this section apply to the downlink (base to mobile) as well, provided that the mobile receiver is equipped with multiple antennas. The general structure can be applied to other systems as well.

Figure 5.1. Wireless communication system employing adaptive arrays at the base station. An array of P antenna elements at the base receives signals from K co-channel users, one of which is the desired user's signal; the rest are interfering signals.

graphics/05fig01.gif

The received signal at the antenna array is the superposition of the K co-channel signals from the desired user and the interferers, plus the ambient channel noise. Assume that the signal bandwidth of the desired user and the interferers is smaller than the channel coherence bandwidth, so that the signals are subject to flat fading. Assume also that the fading is slow, such that the channel remains constant during one time slot containing M data symbol intervals. To focus on the spatial processing, we assume for the time being that all users employ the same modulation waveform, [1] so that after matched filtering with this waveform, the P-vector of complex received signals at the antenna array during the ith symbol interval within a time slot can be expressed as

[1] In Section 5.3, where we consider both spatial and temporal processing, we drop this assumption.

Equation 5.1

    r[i] = Σ_{k=1}^{K} g_k b_k[i] + n[i],    i = 0, 1, ..., M − 1


where b_k[i] is the ith symbol transmitted by the kth user, g_k = [g_{1,k} ··· g_{P,k}]^T is a complex vector (the steering vector) representing the response of the channel and array to the kth user's signal, and n[i] ~ N_c(0, σ² I_P) is a vector of complex Gaussian noise samples. It is assumed that all users employ phase-shift-keying (PSK) modulation with all symbol values being equiprobable. Thus, we have

    E{b_k[i]} = 0    and    E{|b_k[i]|²} = 1


The nth element of the steering vector g_k can be expressed as

Equation 5.2

    g_{n,k} = A_k γ_{n,k} a_{n,k}


where A_k is the transmitted complex amplitude of the kth user's signal, γ_{n,k} is the complex fading gain between the kth user's transmitter and the nth antenna at the receiver, and a_{n,k} is the response of the nth antenna to the kth user's signal. It is also assumed that the data symbols of all users {b_k[i]} are mutually independent and that they are independent of the ambient noise n[i]. The noise vectors {n[i]} are assumed to be i.i.d. with independent real and imaginary components. Note that, mathematically, the model (5.1) is identical to the synchronous CDMA model of (2.1). However, the different physical interpretation of the various quantities in (5.1) leads to somewhat different algorithms from those discussed previously. Nevertheless, this mathematical equivalence will be exploited in the sequel.
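To make the model concrete, here is a minimal numerical sketch of (5.1) in Python/NumPy. The slot length, number of users, noise level, and Rayleigh-fading steering vectors are illustrative assumptions, not values fixed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)
P, K, M = 10, 6, 162   # antennas, co-channel users, symbols per slot (illustrative)
sigma = 0.1            # ambient noise level (assumption)

# Steering vectors g_k as columns of G: i.i.d. complex Gaussian (Rayleigh fading).
G = (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)

# Equiprobable BPSK symbols and complex Gaussian noise n[i] ~ N_c(0, sigma^2 I_P).
b = rng.choice([-1.0, 1.0], size=(K, M))
n = sigma / np.sqrt(2) * (rng.standard_normal((P, M)) + 1j * rng.standard_normal((P, M)))

# (5.1): r[i] = sum_k g_k b_k[i] + n[i]; all M snapshots collected as a P x M array.
r = G @ b + n
```

The columns of `r` are the snapshots r[0], ..., r[M−1] used throughout this section.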

5.2.2 Linear MMSE Combining

Throughout this section we assume that user 1 is the desired user. In adaptive array processing, the received signal r[i] is combined linearly through a complex weight vector w ∈ ℂ^P to yield the array output signal z[i]:

    z[i] = w^H r[i]


In linear MMSE combining [570], the weight vector w is chosen such that the mean-square error between the transmitted symbol b_1[i] and the array output z[i] is minimized:

Equation 5.3

    w = arg min_{w ∈ ℂ^P} E{ |b_1[i] − w^H r[i]|² } = C^{−1} g_1,    with C ≜ E{ r[i] r[i]^H }


where the expectation is taken with respect to the symbols of the interfering users {b_k[i] : k ≠ 1} and the ambient noise n[i].
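A small sketch of the MMSE combiner (5.3) on synthetic data; since the symbol and noise statistics are known here, C can be formed in closed form. All parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
P, K, M, sigma = 10, 6, 200, 0.1   # illustrative values

G = (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)
b = rng.choice([-1.0, 1.0], size=(K, M))
r = G @ b + sigma / np.sqrt(2) * (rng.standard_normal((P, M))
                                  + 1j * rng.standard_normal((P, M)))

# (5.3): w = C^{-1} g_1, with C = E{r r^H} = G G^H + sigma^2 I for unit-power symbols.
C = G @ G.conj().T + sigma**2 * np.eye(P)
w = np.linalg.solve(C, G[:, 0])

# Array output z[i] = w^H r[i]; BPSK decisions for the desired user.
z = w.conj() @ r
ber = np.mean(np.sign(z.real) != b[0])
```

With P = 10 antennas against K − 1 = 5 interferers, the combiner has enough spatial degrees of freedom to suppress all interference, so the error rate is essentially zero.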

In practice, the autocorrelation matrix C and the steering vector of the desired user g 1 are not known a priori to the receiver, and therefore they must be estimated in order to compute the optimal combining weight w in (5.3). In several TDMA-based wireless communication systems (e.g., GSM, IS-54 and IS-136), the information symbols in each slot are preceded by a preamble of known synchronization symbols, which can be used for training the optimal weight vector. The trained weight vector is then used for combining during the demodulation of the information symbols in the same slot.

Assume that in each time slot there are m_t training symbols and M − m_t information symbols. Two popular methods for training the combining weights are the least-mean-squares (LMS) algorithm and the direct matrix inversion (DMI) algorithm [570]. The LMS training algorithm is as follows.

Algorithm 5.1: [LMS adaptive array]

  • Compute the combining weight: for i = 0, 1, ..., m_t − 1,

    Equation 5.4

        z[i] = w[i]^H r[i]

    Equation 5.5

        w[i + 1] = w[i] + μ (b_1[i] − z[i])* r[i]

    where μ is a step-size parameter and w[0] = 0. Set ŵ = w[m_t].

  • Perform data detection: Obtain b̂_1[i] by quantizing z[i] = ŵ^H r[i] for i = m_t, ..., M − 1.

Although the LMS algorithm has a very low computational complexity, it also has a slow convergence rate. Given that the number of training symbols in each time slot is usually small, it is unlikely that the LMS algorithm will converge to the optimum weight vector within the training period.
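Algorithm 5.1 can be sketched as below, assuming the standard complex-LMS update w[i+1] = w[i] + μ e[i]* r[i] with error e[i] = b_1[i] − w[i]^H r[i]; the step size, zero initialization, and scenario parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
P, K, M, mt = 10, 6, 162, 14   # illustrative values; mt training symbols
sigma, mu = 0.1, 0.05          # noise level and LMS step size (assumptions)

G = (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)
b = rng.choice([-1.0, 1.0], size=(K, M))
r = G @ b + sigma / np.sqrt(2) * (rng.standard_normal((P, M))
                                  + 1j * rng.standard_normal((P, M)))

# LMS training over the mt known symbols: stochastic-gradient descent on the MSE,
# w <- w + mu * e^* r with e = b_1[i] - w^H r[i].
w = np.zeros(P, dtype=complex)
for i in range(mt):
    e = b[0, i] - w.conj() @ r[:, i]
    w = w + mu * np.conj(e) * r[:, i]

# Detect the information symbols with the trained (generally still noisy) weight.
b_hat = np.sign((w.conj() @ r[:, mt:]).real)
ber = np.mean(b_hat != b[0, mt:])
```

As the text notes, 14 LMS iterations are far too few for convergence to the MMSE solution, so the resulting error rate is noticeably worse than that of the exact combiner.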

The DMI algorithm for training the optimum weight vector essentially forms the sample estimates of the autocorrelation matrix C and the steering vector g 1 using the signal received during the training period and the known training symbols, and then computes the combining weight vector according to (5.3) using these estimates. Specifically, it proceeds as follows.

Algorithm 5.2: [DMI adaptive array]

  • Compute the combining weight:

    Equation 5.6

        Ĉ = (1/m_t) Σ_{i=0}^{m_t−1} r[i] r[i]^H

    Equation 5.7

        ḡ_1 = (1/m_t) Σ_{i=0}^{m_t−1} b_1[i]* r[i]

    Equation 5.8

        ŵ = Ĉ^{−1} ḡ_1


  • Perform data detection: Obtain b̂_1[i] by quantizing z[i] = ŵ^H r[i] for i = m_t, ..., M − 1.

It is easily seen that the sample estimates Ĉ and ḡ_1 are unbiased [i.e., E{Ĉ} = C and E{ḡ_1} = g_1]. They are also strongly consistent; that is, they converge, respectively, to the true autocorrelation matrix C and the true steering vector g_1 almost surely as m_t → ∞. Notice that both the LMS algorithm and the DMI algorithm compute the combining weights based only on the signal received during the training period. Since in practice the training period is short compared with the slot length (i.e., m_t ≪ M), the weight vector ŵ obtained by such an algorithm can be very noisy. In what follows, we consider a more powerful technique for computing the steering vector and the combining weights that exploits the received signal corresponding to the (M − m_t) unknown information symbols as well.
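A sketch of the DMI steps (5.6)–(5.8) on synthetic data (parameter values are illustrative; note that m_t ≥ P is needed for the sample autocorrelation matrix to be invertible):

```python
import numpy as np

rng = np.random.default_rng(3)
P, K, M, mt, sigma = 10, 6, 162, 14, 0.1   # illustrative values

G = (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)
b = rng.choice([-1.0, 1.0], size=(K, M))
r = G @ b + sigma / np.sqrt(2) * (rng.standard_normal((P, M))
                                  + 1j * rng.standard_normal((P, M)))

R, y1 = r[:, :mt], b[0, :mt]            # training signals and training symbols

C_hat = R @ R.conj().T / mt             # (5.6): sample autocorrelation matrix
g1_bar = R @ y1.conj() / mt             # (5.7): sample steering-vector estimate
w_hat = np.linalg.solve(C_hat, g1_bar)  # (5.8): DMI combining weight

ber = np.mean(np.sign((w_hat.conj() @ r[:, mt:]).real) != b[0, mt:])
```

With m_t = 14 only slightly exceeding P = 10, the sample estimates are noisy, which is exactly the weakness the subspace method of the next subsection addresses.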

5.2.3 Subspace-Based Training Algorithm

Notice that the sample autocorrelation matrix Ĉ in (5.6) does not depend on the training symbols of the desired user {b_1[i] : i = 0, ..., m_t − 1}, and therefore we can use the received signals during the entire time slot to get a better sample estimate of C:

Equation 5.9

    C̃ = (1/M) Σ_{i=0}^{M−1} r[i] r[i]^H


However, the sample estimate ḡ_1 of the steering vector given by (5.7) does depend on the training symbols, and therefore this estimator cannot make use of the received signals corresponding to the unknown information symbols. In this section we present a more powerful subspace-based technique for computing the steering vector and the array combining weight vector. This method first appeared in [550].

Steering Vector Estimation

In what follows it is assumed that the number of antennas is greater than the number of interferers (i.e., P > K). A typical way to treat the case of P < K is to oversample the received signal to increase the dimensionality of the signal for processing [342]. For convenience and without loss of generality, we assume that the steering vectors {g_k, k = 1, ..., K} are linearly independent. The autocorrelation matrix C of the received signal in (5.1) is given by

Equation 5.10

    C = E{ r[i] r[i]^H } = Σ_{k=1}^{K} g_k g_k^H + σ² I_P


The eigendecomposition of C is given by

Equation 5.11

    C = U Λ U^H = U_s Λ_s U_s^H + σ² U_n U_n^H


where, as in previous chapters, U = [U_s U_n] and Λ = diag{Λ_s, σ² I_{P−K}}; Λ_s = diag{λ_1, ..., λ_K} contains the K largest eigenvalues of C in descending order, and U_s = [u_1 ··· u_K] contains the corresponding orthonormal eigenvectors; U_n = [u_{K+1} ··· u_P] contains the (P − K) orthonormal eigenvectors that correspond to the smallest eigenvalue σ². Denote G ≜ [g_1 ··· g_K]. It is easy to see that range(G) = range(U_s). Thus the range space of U_s is the signal subspace, and its orthogonal complement, the noise subspace, is spanned by U_n. Note that in contrast to the signal and noise subspaces discussed in preceding chapters, which are based on temporal structure, here the subspaces describe the spatial structure of the received signals. The following result is instrumental in developing the alternative steering vector estimator for the desired user.
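The spatial signal/noise subspace structure in (5.10)–(5.11) is easy to verify numerically. The sketch below builds the exact C for random steering vectors (illustrative parameters) and checks that the P − K smallest eigenvalues equal σ² and that the noise subspace is orthogonal to every steering vector:

```python
import numpy as np

rng = np.random.default_rng(4)
P, K, sigma = 10, 6, 0.1   # illustrative values

G = (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)

# Exact autocorrelation (5.10) and eigendecomposition (5.11).
C = G @ G.conj().T + sigma**2 * np.eye(P)
lam, U = np.linalg.eigh(C)          # eigh returns ascending eigenvalues
lam, U = lam[::-1], U[:, ::-1]      # reorder to descending, as in the text
Us, Un = U[:, :K], U[:, K:]         # signal and noise subspaces

# The P - K smallest eigenvalues equal sigma^2, and the noise subspace is
# orthogonal to every steering vector: range(G) = range(Us).
```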

Proposition 5.1: Given the eigendecomposition (5.11) of the autocorrelation matrix C , suppose that a received noise-free signal is given by

Equation 5.12

    q[i] = Σ_{k=1}^{K} g_k b_k[i]


Then the kth user's transmitted symbol can be expressed as

Equation 5.13

    b_k[i] = g_k^H U_s (Λ_s − σ² I_K)^{−1} U_s^H q[i],    k = 1, ..., K


Proof: Denote b[i] ≜ [b_1[i] ··· b_K[i]]^T and G ≜ [g_1 ··· g_K]. Then (5.12) can be written in matrix form as

Equation 5.14

    q[i] = G b[i]


Denote further

Equation 5.15

    Λ ≜ Λ_s − σ² I_K


Then from (5.10) and (5.11) we have

Equation 5.16

    G G^H = U_s Λ U_s^H


Taking the Moore–Penrose generalized matrix inverse [189] on both sides of (5.16), we obtain

Equation 5.17

    (G G^H)^† = U_s Λ^{−1} U_s^H


From (5.14) and (5.17) we then have

Equation 5.18

    G^H U_s Λ^{−1} U_s^H q[i] = G^H (G G^H)^† G b[i] = b[i]


where the last equality follows from the fact that G^H (G G^H)^† G = I_K, since G has full column rank. Note that (5.18) is the matrix form of (5.13).
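Proposition 5.1 can be checked numerically: with the exact subspace parameters, a noise-free snapshot q = G b yields all K symbols at once through (5.18). Parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
P, K, sigma = 10, 6, 0.1   # illustrative values

G = (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)
C = G @ G.conj().T + sigma**2 * np.eye(P)
lam, U = np.linalg.eigh(C)
lam, U = lam[::-1], U[:, ::-1]
Us = U[:, :K]
Lam = np.diag(lam[:K] - sigma**2)   # Lambda = Lambda_s - sigma^2 I_K, cf. (5.15)

# Noise-free snapshot q = G b; (5.18) recovers all K symbols at once.
b = rng.choice([-1.0, 1.0], size=K)
q = G @ b
b_rec = G.conj().T @ Us @ np.linalg.solve(Lam, Us.conj().T @ q)
```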

Suppose now that the signal subspace parameters U_s, Λ_s, and σ² are known. We next consider the problem of estimating the steering vector g_1 of the desired user, given m_t training symbols {b_1[i], i = 0, ..., m_t − 1}, where m_t ≥ K. The next result shows that in the absence of ambient noise, K linearly independent received signals suffice to determine the steering vector exactly.

Proposition 5.2: Let y_1 ≜ [b_1[0] ··· b_1[m_t − 1]]^T be the vector of training symbols of the desired user, and let Y ≜ [q[0] ··· q[m_t − 1]] be the matrix of m_t noise-free received signals during the training stage. Assume that rank(Y) = K. Then the steering vector of the desired user can be expressed as

Equation 5.19

    g_1 = U_s Λ (Y^H U_s)^† y_1^*


where U_s and Λ are defined in (5.11) and (5.15), respectively.

Proof: The ith noise-free received array output vector can be expressed as

Equation 5.20

    q[i] = Σ_{k=1}^{K} g_k b_k[i] = G b[i],    i = 0, ..., m_t − 1


Denote G ≜ [g_1 ··· g_K] as before. It then follows from (5.20) that

Equation 5.21

    q[i] = U_s U_s^H q[i]


since U_s U_s^H G = G. On substituting (5.21) into (5.13), we obtain

Equation 5.22

    b_1[i] = g_1^H U_s Λ^{−1} U_s^H q[i]


Equation 5.23

    b_1[i]^* = q[i]^H U_s Λ^{−1} U_s^H g_1,    i = 0, ..., m_t − 1


Equation (5.23) can be written in matrix form as

Equation 5.24

    y_1^* = Y^H U_s Λ^{−1} U_s^H g_1


Since rank(Y) = K and rank(U_s) = K, we have rank(Y^H U_s) = K. Therefore, g_1 can be obtained uniquely from (5.24) by

    (Y^H U_s)^† y_1^* = Λ^{−1} U_s^H g_1  ⟹  U_s Λ (Y^H U_s)^† y_1^* = U_s U_s^H g_1 = g_1


where the last equality follows from the fact that U_s U_s^H g_1 = g_1, since g_1 ∈ range(U_s).
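A numerical check of Proposition 5.2: with K linearly independent noise-free training snapshots (a K-PSK DFT training matrix guarantees rank(Y) = K by construction), formula (5.19) recovers g_1 exactly. Parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
P, K, sigma = 10, 6, 0.1   # illustrative values

G = (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)
C = G @ G.conj().T + sigma**2 * np.eye(P)
lam, U = np.linalg.eigh(C)
lam, U = lam[::-1], U[:, ::-1]
Us = U[:, :K]
Lam = np.diag(lam[:K] - sigma**2)   # Lambda from (5.15)

# m_t = K noise-free training snapshots; a K-PSK DFT training matrix makes
# the symbol matrix (and hence Y) rank K by construction.
B_train = np.exp(2j * np.pi * np.outer(np.arange(K), np.arange(K)) / K)
Y = G @ B_train                     # noise-free received training signals
y1 = B_train[0]                     # desired user's training symbols

# (5.19): exact recovery of the steering vector in the absence of noise.
g1_rec = Us @ Lam @ np.linalg.pinv(Y.conj().T @ Us) @ y1.conj()
```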

We can interpret the result above as follows. If the length of the data frame tends to infinity (i.e., M → ∞), the sample estimate C̃ in (5.9) converges to the true autocorrelation matrix C almost surely, and an eigendecomposition of C̃ will give the true signal subspace parameters U_s and Λ. The result above then indicates that in the absence of background noise, a perfect estimate of the steering vector g_1 of the desired user can be obtained by using K linearly independent received signals and the corresponding training symbols of the desired user. The steering vector estimator ḡ_1 in the DMI method, given by (5.7), however, cannot achieve perfect steering vector estimation even in the absence of noise (i.e., σ = 0), unless the number of training symbols tends to infinity (i.e., m_t → ∞). In fact, it is easily seen that the covariance matrix of that estimator is given by

    Cov{ḡ_1} = (1/m_t) (C − g_1 g_1^H)


In practice, the received signals are corrupted by ambient noise:

Equation 5.25

    r[i] = q[i] + n[i],    i = 0, ..., m_t − 1


Since n[i] ~ N_c(0, σ² I_P), the log-likelihood function of the received signal r[i] conditioned on q[i] is given by

    log p(r[i] | q[i]) = −P log(πσ²) − (1/σ²) ‖r[i] − q[i]‖²


Hence the maximum-likelihood estimate of q [ i ] from r [ i ] is given by

Equation 5.26

    q̂[i] = arg max_{q[i] ∈ range(U_s)} log p(r[i] | q[i]) = U_s U_s^H r[i]


where the last equality follows from the fact that U_s U_s^H is the projection matrix onto range(U_s). Similar to (5.23), we can set up the following equations for estimating the steering vector g_1 from the noisy signal:

Equation 5.27

    b_1[i]^* ≈ r[i]^H U_s Λ^{−1} U_s^H g_1,    i = 0, ..., m_t − 1


Denote R ≜ [r[0] ··· r[m_t − 1]]; then (5.27) can be written in matrix form as

Equation 5.28

    y_1^* ≈ R^H U_s Λ^{−1} U_s^H g_1


Solving (5.28) for g_1 in the least-squares sense, we obtain

Equation 5.29

    ĝ_1 = U_s Λ (R^H U_s)^† y_1^*


To implement an estimator of g_1 based on (5.29), we first compute the sample autocorrelation matrix C̃ of the received signal according to (5.9). An eigendecomposition of C̃ is then performed to get

Equation 5.30

    C̃ = Ũ Λ̃ Ũ^H = Ũ_s Λ̃_s Ũ_s^H + Ũ_n Λ̃_n Ũ_n^H


The steering vector estimator for the desired user is then given by

Equation 5.31

    g̃_1 = Ũ_s Λ̃_s (R^H Ũ_s)^† y_1^*


Note that Λ̃_s is used in (5.31) instead of Λ̃ ≜ Λ̃_s − σ̃² I_K as in (5.29). The reason for this is to make the estimator strongly consistent: if we let m_t = M → ∞, then Ũ_s → U_s, Λ̃_s → Λ_s, and (R^H Ũ_s)^† y_1^* → Λ_s^{−1} U_s^H g_1, all almost surely. Hence from (5.31) we have

    g̃_1 → U_s Λ_s Λ_s^{−1} U_s^H g_1 = U_s U_s^H g_1 = g_1    (almost surely)


Interestingly, if, on the other hand, we replace Ũ_s and Λ̃_s in (5.31) by the corresponding sample estimates obtained from an eigendecomposition of Ĉ in (5.6), then in the absence of noise we obtain the same steering vector estimate ḡ_1 as in (5.7), while with noise we obtain a less noisy estimate of g_1 than (5.7). Formally, we have the following result.

Proposition 5.3: Let the eigendecomposition of the sample autocorrelation matrix Ĉ in (5.6) of the received training signals be

Equation 5.32

    Ĉ = Û Λ̂ Û^H = Û_s Λ̂_s Û_s^H + Û_n Λ̂_n Û_n^H


If we form the following estimator for the steering vector g_1,

Equation 5.33

    ǧ_1 = Û_s Λ̂_s (R^H Û_s)^† y_1^*


then ǧ_1 is related to ḡ_1 in (5.7) by

Equation 5.34

    ǧ_1 = Û_s Û_s^H ḡ_1


Proof: Using (5.33), we have

    ǧ_1 = Û_s Λ̂_s (R^H Û_s)^† y_1^*
        = Û_s Λ̂_s (Û_s^H R R^H Û_s)^{−1} Û_s^H R y_1^*
        = Û_s Λ̂_s (m_t Û_s^H Ĉ Û_s)^{−1} Û_s^H (m_t ḡ_1)
        = Û_s Λ̂_s Λ̂_s^{−1} Û_s^H ḡ_1
        = Û_s Û_s^H ḡ_1


where in the third equality we have used (5.6) and (5.7), and where the fourth equality follows from (5.32). Therefore, in the absence of noise, ǧ_1 = ḡ_1; whereas with noise, ǧ_1 is the projection of ḡ_1 onto the estimated signal subspace and therefore is a less noisy estimate of g_1.
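The identity (5.34) can be confirmed numerically; the sketch below builds the training-only estimates (5.6)–(5.7), forms the estimator (5.33), and checks that it equals the projection of ḡ_1 onto the estimated signal subspace. Parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
P, K, mt, sigma = 10, 6, 14, 0.1   # illustrative values

G = (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)
b = rng.choice([-1.0, 1.0], size=(K, mt))
R = G @ b + sigma / np.sqrt(2) * (rng.standard_normal((P, mt))
                                  + 1j * rng.standard_normal((P, mt)))
y1 = b[0]

# Training-only sample estimates (5.6)-(5.7) and their eigendecomposition (5.32).
C_hat = R @ R.conj().T / mt
g1_bar = R @ y1.conj() / mt
lam, U = np.linalg.eigh(C_hat)
lam, U = lam[::-1], U[:, ::-1]
Us_hat, Lams_hat = U[:, :K], np.diag(lam[:K])

# Estimator (5.33) built from the training-only subspace ...
g1_check = Us_hat @ Lams_hat @ np.linalg.pinv(R.conj().T @ Us_hat) @ y1.conj()
# ... equals the projection (5.34) of g1_bar onto that subspace.
g1_proj = Us_hat @ Us_hat.conj().T @ g1_bar
```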

Weight Vector Calculation

The linear MMSE array combining weight vector in (5.3) can be expressed in terms of the signal subspace components, as stated by the following result.

Proposition 5.4: Let U_s and Λ_s be the signal subspace parameters defined in (5.11); then the linear MMSE combining weight vector for the desired user (user 1) is given by

Equation 5.35

    w = U_s Λ_s^{−1} U_s^H g_1


Proof: The linear MMSE weight vector is given in (5.3). Substituting (5.11) into (5.3), we have

    w = C^{−1} g_1 = (U_s Λ_s U_s^H + σ² U_n U_n^H)^{−1} g_1 = U_s Λ_s^{−1} U_s^H g_1 + σ^{−2} U_n U_n^H g_1 = U_s Λ_s^{−1} U_s^H g_1


where the last equality follows from the fact that the steering vector is orthogonal to the noise subspace (i.e., U_n^H g_1 = 0).
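Proposition 5.4 is also easy to confirm numerically: for an exact C built from random steering vectors (illustrative parameters), the full-space weight C^{−1} g_1 coincides with the subspace form (5.35):

```python
import numpy as np

rng = np.random.default_rng(8)
P, K, sigma = 10, 6, 0.1   # illustrative values

G = (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)
C = G @ G.conj().T + sigma**2 * np.eye(P)
lam, U = np.linalg.eigh(C)
lam, U = lam[::-1], U[:, ::-1]
Us, lams = U[:, :K], lam[:K]

# (5.35): the full-space MMSE weight C^{-1} g_1 equals its signal-subspace form,
# because g_1 is orthogonal to the noise subspace.
w_full = np.linalg.solve(C, G[:, 0])
w_sub = Us @ ((Us.conj().T @ G[:, 0]) / lams)
```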

By replacing U_s, Λ_s, and g_1 in (5.35) by the corresponding estimates [i.e., Ũ_s and Λ̃_s in (5.30) and g̃_1 in (5.31)], we can compute the linear MMSE combining weight vector as follows:

Equation 5.36

    w̃ = Ũ_s Λ̃_s^{−1} Ũ_s^H g̃_1


Finally, we summarize the subspace-based adaptive array algorithm discussed in this section as follows.

Algorithm 5.3: [Subspace-based adaptive array for TDMA] Denote y_1 ≜ [b_1[0] ··· b_1[m_t − 1]]^T as the vector of training symbols and R ≜ [r[0] ··· r[m_t − 1]] as the corresponding received signals during the training period.

  • Compute the signal subspace:

    Equation 5.37

        C̃ = (1/M) Σ_{i=0}^{M−1} r[i] r[i]^H


    Equation 5.38

        C̃ = Ũ_s Λ̃_s Ũ_s^H + Ũ_n Λ̃_n Ũ_n^H


  • Compute the combining weight vector:

    Equation 5.39

        w̃ = Ũ_s Λ̃_s^{−1} Ũ_s^H g̃_1 = Ũ_s (R^H Ũ_s)^† y_1^*


  • Perform data detection: Obtain b̂_1[i] by quantizing z[i] = w̃^H r[i] for i = m_t, ..., M − 1.
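Putting the pieces together, here is a compact sketch of Algorithm 5.3 over one simulated slot, with illustrative parameters chosen to mimic the simulation setup used later in this section (interferers 6 dB below the desired user):

```python
import numpy as np

rng = np.random.default_rng(9)
P, K, M, mt, sigma = 10, 6, 162, 14, 0.1   # illustrative values

G = (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)
G[:, 1:] *= 0.5        # interfering powers 6 dB below the desired user's
b = rng.choice([-1.0, 1.0], size=(K, M))
r = G @ b + sigma / np.sqrt(2) * (rng.standard_normal((P, M))
                                  + 1j * rng.standard_normal((P, M)))

# (5.37)-(5.38): signal subspace from the sample autocorrelation of the whole slot.
C_t = r @ r.conj().T / M
lam, U = np.linalg.eigh(C_t)
Us_t = U[:, ::-1][:, :K]

# (5.39): combining weight from the training symbols and the estimated subspace.
R, y1 = r[:, :mt], b[0, :mt]
w = Us_t @ np.linalg.pinv(R.conj().T @ Us_t) @ y1.conj()

# Data detection over the information part of the slot.
b_hat = np.sign((w.conj() @ r[:, mt:]).real)
ber = np.mean(b_hat != b[0, mt:])
```

Note how the whole slot sharpens the subspace estimate while only the m_t training symbols enter the weight computation, which is the key advantage over LMS and DMI.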

5.2.4 Extension to Dispersive Channels

So far we have assumed that the channels are nondispersive [i.e., there is no intersymbol interference (ISI)]. We next extend the techniques considered in previous subsections to dispersive channels and develop space-time processing techniques for suppressing both co-channel interference and intersymbol interference.

Let D be the delay spread of the channel (in units of symbol intervals). Then the received signal at the antenna array during the i th symbol interval can be expressed as

Equation 5.40

graphics/05equ040.gif


where g_{l,k} is the array steering vector for the lth delayed symbol of the kth user and graphics/237fig02.gif. Denote graphics/242fig02.gif. By stacking m successive data samples, we define the following quantities:

graphics/237equ01.gif


Then from (5.40) we can write

Equation 5.41

graphics/05equ041.gif


where G is a block matrix of the form

graphics/237equ02.gif


with

graphics/237equ03.gif


Here, as before, m is the smoothing factor and is chosen such that the matrix G is a "tall" matrix [i.e., Pm ≥ K(m + D − 1)]. We assume that G has full column rank. From the signal model (5.41), it is evident that the techniques discussed in previous subsections can be applied straightforwardly to dispersive channels, with signal processing carried out on signal vectors of higher dimension. For example, the linear MMSE combining method for estimating the transmitted symbol b_1[i] is based on quantizing the correlator output w^H r[i], where w = C^{−1} g_1, with

graphics/238equ01.gif


with

graphics/238equ02.gif


To apply the subspace-based adaptive array algorithm, we first estimate the signal subspace (U_s, Λ_s) of C by forming the sample autocorrelation matrix of r[i] and then performing an eigendecomposition. Notice that the rank of the signal subspace is K(m + D − 1). Once the signal subspace is estimated, it is straightforward to apply the algorithms listed in Section 5.2.3 to estimate the data symbols.
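The "tall matrix" condition can be illustrated by building the stacked matrix explicitly. The sketch below is written with L symbol-spaced channel taps per user (a generic tap count, to keep the convention explicit), so that a window of m snapshots involves m + L − 1 symbols per user; with the values chosen, Pm = 20 ≥ K(m + L − 1) = 18 and the stacked matrix has full column rank:

```python
import numpy as np

rng = np.random.default_rng(10)
P, K, m, L = 10, 6, 2, 2   # antennas, users, smoothing factor, taps (illustrative)

# Per-tap steering matrices G_l (each P x K).
Gl = [(rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K))) / np.sqrt(2)
      for _ in range(L)]

# A window of m snapshots r[i], ..., r[i+m-1] involves m + L - 1 symbols per user;
# block row j places G_l at the column block of b[i+j-l].
n_sym = m + L - 1
G_big = np.zeros((P * m, K * n_sym), dtype=complex)
for j in range(m):
    for l in range(L):
        col = j - l + (L - 1)
        G_big[j * P:(j + 1) * P, col * K:(col + 1) * K] = Gl[l]

# "Tall" condition: Pm >= K(m + L - 1); here 20 >= 18, with full column rank.
rank = np.linalg.matrix_rank(G_big)
```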

Simulation Examples

In what follows we provide some simulation examples to demonstrate the performance of the subspace-based adaptive array algorithm discussed above. In the following simulations, it is assumed that an array of P = 10 antenna elements is employed at the base station. The number of symbols in each time slot is M = 162, with m_t = 14 training symbols, as in IS-54/136 systems. The modulation scheme is binary PSK (BPSK). The channel is subject to Rayleigh fading, so that the steering vectors {g_k, k = 1, ..., K} are i.i.d. complex Gaussian vectors, g_k ~ N_c(0, A_k² I_P), where A_k² is the received power of the kth user. The desired user is user 1. The interfering signal powers are assumed to be 6 dB below the desired signal power (i.e., A_k = A_1/2 for k = 2, ..., K). The ambient noise process {n[i]} is a sequence of i.i.d. complex Gaussian vectors, n[i] ~ N_c(0, σ² I_P).

In the first example we compare the performance of the two steering vector estimators g̃_1 in (5.31) and ḡ_1 in (5.7). The number of users is six (i.e., K = 6) and the channels have no dispersion. For each SNR value, the normalized root-mean-square error (RMSE) is computed for each estimator. For the subspace estimator, we consider its performance under both the exact signal subspace parameters (U_s, Λ_s) and the estimated signal subspace parameters (Ũ_s, Λ̃_s). The results are plotted in Fig. 5.2. It is seen that the subspace-based steering vector estimator offers significant performance improvement over the conventional correlation estimator, especially in the high-SNR region. Notice that although both estimators tend to exhibit error floors at high SNR values, the causes are different: the floor of the sample correlation estimator is due to the finite length m_t of the training preamble, whereas the floor of the subspace estimator is due to the finite length M of the time slot. It is also seen that the performance loss due to inexact signal subspace parameters is not significant in this case.

Figure 5.2. Comparison of normalized root MSEs of the subspace steering vector estimator and sample correlation steering vector estimator.

graphics/05fig02.gif

In the next example we compare the BER performance of the subspace training method with that of the DMI training method. The simulated system is the same as in the previous example. The BER curves of the three array combining methods, namely exact MMSE combining (5.3), the subspace algorithm, and the DMI method (5.6)–(5.8), are plotted in Fig. 5.3. It is evident from this figure that the subspace training method offers substantial performance gain over the DMI method.

Figure 5.3. BER performance of the subspace training algorithm, DMI algorithm, and exact MMSE algorithm in a nondispersive channel.

graphics/05fig03.gif

Finally, we illustrate the performance of the subspace-based spatial-temporal technique for jointly suppressing co-channel interference (CCI) and intersymbol interference (ISI). The simulated system is the same as above, except that now the channel is dispersive with D = 1. It is assumed that g_{0,k} ~ N_c(0, A_k² I_P) and g_{1,k} ~ N_c(0, (A_k²/4) I_P) for k = 1, ..., K, where A_k² is the received power of the kth user. As before, it is assumed that A_k = A_1/2 for k = 2, ..., K. The smoothing factor is taken to be m = 2. In Fig. 5.4 the BER performance is plotted for the DMI algorithm, the subspace algorithm, and the exact linear MMSE algorithm. (Note that for the DMI method, the number of training symbols must satisfy m_t ≥ Km in order to obtain an invertible autocorrelation matrix C.) It is seen again that the subspace method achieves considerable performance gain over the DMI method.

Figure 5.4. BER performance of the subspace training algorithm, DMI algorithm, and exact MMSE algorithm in a dispersive channel.

graphics/05fig04.gif



Wireless Communication Systems: Advanced Techniques for Signal Reception
ISBN: 0137020805
Year: 2003