6.2 MAP Decoding Algorithm for Convolutional Codes


The input–output relationship of the MAP channel decoder is illustrated in Fig. 6.3. The MAP decoder takes as input the a priori LLRs (or, equivalently, the probability distributions) of the code (i.e., channel) bits. Based on the code constraints, it delivers as output updated LLRs of the code bits as well as LLRs of the information bits. In this section we outline a recursive algorithm for computing the LLRs of the code bits and the information bits, which is essentially a slight modification of the celebrated BCJR algorithm [25].

Figure 6.3. Input–output relationship of a MAP channel decoder.

graphics/06fig03.gif

Consider a binary rate-k/n convolutional encoder of overall constraint length kν. The input to the encoder at time t is a block of k information bits,

$$\boldsymbol{d}_t = \left(d_t^1, d_t^2, \ldots, d_t^k\right),$$

and the corresponding output is a block of n code bits,

$$\boldsymbol{b}_t = \left(b_t^1, b_t^2, \ldots, b_t^n\right).$$

The state of the trellis at time t can be represented by a [k(ν − 1)]-tuple, as

$$S_t = \left(s_t^1, s_t^2, \ldots, s_t^{k(\nu-1)}\right).$$

The dynamics of a convolutional code are completely specified by its trellis representation, which describes the transitions between the states at time instants t and t + 1. A path segment in the trellis from t = a to t = b > a is determined by the states it traverses at each time a ≤ t ≤ b and is denoted by

$$S_a^b \triangleq \left(S_a, S_{a+1}, \ldots, S_b\right).$$

Denote the input information bits that cause the state transition from S_{t−1} = s' to S_t = s by d(s', s) and the corresponding output code bits by b(s', s). Assuming that a systematic code is used, the pair (s', b) uniquely determines the state transition (s', s). We next consider computing the probability P(S_{t−1} = s', S_t = s) based on the a priori probabilities of the code bits {P(b_t)} and the constraints imposed by the trellis structure of the code. Suppose that the encoder starts in state S_0 = 0. An information bit stream {d_t}, t = 1, …, T, is the input to the encoder, followed by ν blocks of all-zero inputs, causing the encoder to end in state S_τ = 0, where τ = T + ν. Let b_t denote the output of the channel encoder at time t. We use the notation

Equation 6.5

$$P\left[\boldsymbol{b}_t(s', s)\right] \triangleq P\left[\boldsymbol{b}_t = \boldsymbol{b}(s', s)\right].$$


Then we have

Equation 6.6

$$P\left(S_{t-1} = s',\, S_t = s\right) = \alpha_{t-1}(s')\; P\left[\boldsymbol{b}_t(s', s)\right]\; \beta_t(s),$$


where α_t(s) denotes the total probability of all path segments that start from the origin of the trellis and terminate in state s at time t, and β_t(s) denotes the total probability of all path segments that originate from state s at time t and terminate at the end of the trellis. In (6.6) we have assumed that the interleaving is ideal, so that the joint distribution of b_t factors into the product of its marginals:

Equation 6.7

$$P(\boldsymbol{b}_t) = \prod_{j=1}^{n} P\left(b_t^j\right).$$


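To make these quantities concrete, the following sketch (an illustration, not from the text) builds the trellis of a hypothetical rate-1/2, memory-2 feedforward encoder with octal generators (7, 5) as a map from state pairs (s', s) to the pair (d(s', s), b(s', s)), and evaluates P[b_t(s', s)] as the product of marginals in (6.7):

```python
# A minimal sketch, assuming a hypothetical rate-1/2 (k = 1, n = 2), memory-2
# feedforward encoder with octal generators (7, 5); any trellis in this
# (s', s) -> (d(s', s), b(s', s)) form would work.
NUM_STATES = 4  # state s = (d_{t-1}, d_{t-2}) packed into a 2-bit integer

def _transition(state, d):
    """One trellis step: returns (next state s, code bits b(s', s)) for input d."""
    d1, d2 = (state >> 1) & 1, state & 1          # d_{t-1}, d_{t-2}
    b1 = d ^ d1 ^ d2                              # generator 1 1 1 (7 octal)
    b2 = d ^ d2                                   # generator 1 0 1 (5 octal)
    pm = lambda x: +1 if x == 0 else -1           # map {0, 1} -> {+1, -1}
    return (d << 1) | d1, (pm(b1), pm(b2))

# Trellis as a dict: (s', s) -> (d(s', s), b(s', s)), with d kept in {0, 1}.
TRELLIS = {}
for sp in range(NUM_STATES):
    for d in (0, 1):
        s, b = _transition(sp, d)
        TRELLIS[(sp, s)] = (d, b)

def transition_prob(p_plus, b):
    """P[b_t(s', s)] under ideal interleaving, eq. (6.7): the product of the
    per-bit marginals, where p_plus[j] = P(b_t^j = +1)."""
    p = 1.0
    for j, bit in enumerate(b):
        p *= p_plus[j] if bit == +1 else 1.0 - p_plus[j]
    return p
```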
The quantities α_t(s) and β_t(s) in (6.6) can be computed through the following forward and backward recursions [25]:

Equation 6.8

$$\alpha_t(s) = \sum_{s'} \alpha_{t-1}(s')\, P\left[\boldsymbol{b}_t(s', s)\right],$$


Equation 6.9

$$\beta_t(s) = \sum_{s'} \beta_{t+1}(s')\, P\left[\boldsymbol{b}_{t+1}(s, s')\right],$$


with boundary conditions α_0(0) = 1, α_0(s) = 0 for s ≠ 0; and β_τ(0) = 1, β_τ(s) = 0 for s ≠ 0. In (6.8) the summation is taken over all states s' for which the transition (s', s) is possible, and similarly for the summation in (6.9).
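A direct rendering of the recursions (6.8) and (6.9) might look as follows; the function names and the data layout (gammas[t-1][(s', s)] = P[b_t(s', s)]) are assumptions of this sketch, and, as discussed below, this unscaled form is usable only for short blocks:

```python
def forward(num_states, gammas):
    """Forward recursion (6.8). gammas[t-1][(s', s)] = P[b_t(s', s)] for
    t = 1, ..., tau; boundary condition alpha_0(0) = 1, alpha_0(s) = 0 else."""
    tau = len(gammas)
    alpha = [[0.0] * num_states for _ in range(tau + 1)]
    alpha[0][0] = 1.0
    for t in range(1, tau + 1):
        for (sp, s), g in gammas[t - 1].items():
            alpha[t][s] += alpha[t - 1][sp] * g   # sum over valid (s', s)
    return alpha

def backward(num_states, gammas):
    """Backward recursion (6.9); boundary condition beta_tau(0) = 1."""
    tau = len(gammas)
    beta = [[0.0] * num_states for _ in range(tau + 1)]
    beta[tau][0] = 1.0
    for t in range(tau - 1, 0, -1):
        for (s, s2), g in gammas[t].items():      # transitions at time t + 1
            beta[t][s] += beta[t + 1][s2] * g
    return beta
```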

Let $\mathcal{B}_t^{j+}$ be the set of state pairs (s', s) such that the jth bit of the code symbol b(s', s) is +1, and define $\mathcal{B}_t^{j-}$ similarly. Using (6.6), the a posteriori LLR of the code bit b_t^j at the output of the channel decoder is given by

Equation 6.10

$$\Lambda_2\left(b_t^j\right) \triangleq \log \frac{\sum_{(s',s)\in\mathcal{B}_t^{j+}} \alpha_{t-1}(s')\, P\left[\boldsymbol{b}_t(s',s)\right] \beta_t(s)}{\sum_{(s',s)\in\mathcal{B}_t^{j-}} \alpha_{t-1}(s')\, P\left[\boldsymbol{b}_t(s',s)\right] \beta_t(s)} = \underbrace{\log \frac{P\left(b_t^j = +1\right)}{P\left(b_t^j = -1\right)}}_{\lambda_1\left(b_t^j\right)} + \underbrace{\log \frac{\sum_{(s',s)\in\mathcal{B}_t^{j+}} \alpha_{t-1}(s') \prod_{i \neq j} P\left(b_t^i\right) \beta_t(s)}{\sum_{(s',s)\in\mathcal{B}_t^{j-}} \alpha_{t-1}(s') \prod_{i \neq j} P\left(b_t^i\right) \beta_t(s)}}_{\lambda_2\left(b_t^j\right)}.$$


It is seen from (6.10) that the output of the MAP channel decoder is the sum of the prior information λ_1(b_t^j) of the code bit b_t^j and the extrinsic information λ_2(b_t^j). The extrinsic information is the information about the code bit b_t^j gleaned from the prior information about the other code bits, based on the trellis structure of the code.
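In an iterative (turbo) receiver, only the extrinsic part λ_2 is passed to the next stage, so that information is not fed back to its own source; in code this is just a subtraction (a hypothetical helper, not from the text):

```python
def extrinsic(total_llr, prior_llr):
    """Extrinsic information lambda_2 = Lambda_2 - lambda_1, per (6.10)."""
    return total_llr - prior_llr
```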

A direct implementation of the recursions (6.8) and (6.9) is numerically unstable, since both α_t(s) and β_t(s) decay toward zero exponentially. For sufficiently large t, the dynamic range of these quantities will exceed that of any machine. To obtain a numerically stable algorithm, these quantities must be scaled as the computation proceeds. Let $\tilde{\alpha}_t(s)$ denote the scaled version of α_t(s). Initially, α_1(s) is computed according to (6.8), and we set

Equation 6.11

$$\hat{\alpha}_1(s) = \alpha_1(s),$$


Equation 6.12

$$\tilde{\alpha}_1(s) = \frac{\hat{\alpha}_1(s)}{c_1},$$


with

Equation 6.13

$$c_1 = \sum_{s} \hat{\alpha}_1(s).$$


For each t ≥ 2, we compute $\tilde{\alpha}_t(s)$ according to

Equation 6.14

$$\hat{\alpha}_t(s) = \sum_{s'} \tilde{\alpha}_{t-1}(s')\, P\left[\boldsymbol{b}_t(s', s)\right],$$


Equation 6.15

$$\tilde{\alpha}_t(s) = \frac{\hat{\alpha}_t(s)}{c_t},$$


with

Equation 6.16

$$c_t = \sum_{s} \hat{\alpha}_t(s).$$


Now by a simple induction, we obtain

Equation 6.17

$$\tilde{\alpha}_t(s) = \frac{\alpha_t(s)}{\prod_{i=1}^{t} c_i}.$$


Thus we can write $\tilde{\alpha}_t(s)$ as

Equation 6.18

$$\tilde{\alpha}_t(s) = \frac{\alpha_t(s)}{\sum_{s'} \alpha_t(s')}.$$


That is, each α_t(s) is effectively scaled by the sum of α_t(s) over all states.
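A sketch of the scaled forward pass (6.11)-(6.16), in the notation of the earlier snippets; the scale factors c_t are saved so that the backward pass, described next, can reuse them:

```python
def forward_scaled(num_states, gammas):
    """Scaled forward recursion (6.11)-(6.16): each unscaled update is divided
    by c_t = sum_s alpha_hat_t(s), so the scaled values sum to 1 (cf. (6.18))."""
    tau = len(gammas)
    alpha = [[0.0] * num_states for _ in range(tau + 1)]
    alpha[0][0] = 1.0
    scale = [1.0] * (tau + 1)                         # c_t, reused backward
    for t in range(1, tau + 1):
        for (sp, s), g in gammas[t - 1].items():      # (6.8) / (6.14)
            alpha[t][s] += alpha[t - 1][sp] * g
        scale[t] = sum(alpha[t])                      # (6.13) / (6.16)
        alpha[t] = [a / scale[t] for a in alpha[t]]   # (6.12) / (6.15)
    return alpha, scale
```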

Let $\tilde{\beta}_t(s)$ denote the scaled version of β_t(s). Initially, β_{τ−1}(s) is computed according to (6.9), and we set $\tilde{\beta}_{\tau-1}(s) = \beta_{\tau-1}(s) / c_{\tau-1}$. For each t < τ − 1, we compute $\tilde{\beta}_t(s)$ according to

Equation 6.19

$$\hat{\beta}_t(s) = \sum_{s'} \tilde{\beta}_{t+1}(s')\, P\left[\boldsymbol{b}_{t+1}(s, s')\right],$$


Equation 6.20

$$\tilde{\beta}_t(s) = \frac{\hat{\beta}_t(s)}{c_t}.$$

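The matching scaled backward pass (6.19)-(6.20), again a sketch, divides by the same factors c_t produced by the forward pass:

```python
def backward_scaled(num_states, gammas, scale):
    """Scaled backward recursion (6.19)-(6.20); scale[t] is the forward-pass
    factor c_t, keeping alpha_tilde and beta_tilde numerically comparable."""
    tau = len(gammas)
    beta = [[0.0] * num_states for _ in range(tau + 1)]
    beta[tau][0] = 1.0
    for t in range(tau - 1, 0, -1):
        for (s, s2), g in gammas[t].items():       # transitions at t + 1, (6.19)
            beta[t][s] += beta[t + 1][s2] * g
        beta[t] = [b / scale[t] for b in beta[t]]  # (6.20)
    return beta
```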

Because the scale factor c_t effectively restores the sum of $\tilde{\alpha}_t(s)$ over all states to 1, and because the magnitudes of α_t(s) and β_t(s) are comparable, using the same scale factor in both recursions is an effective way to keep the computation within a reasonable numerical range. Furthermore, by induction, we can write

Equation 6.21

$$\tilde{\beta}_t(s) = \frac{\beta_t(s)}{\prod_{i=t}^{\tau-1} c_i}.$$


Using the fact that

Equation 6.22

$$\left(\prod_{i=1}^{t-1} c_i\right) \left(\prod_{i=t}^{\tau-1} c_i\right) = \prod_{i=1}^{\tau-1} c_i$$


is a constant that is independent of t, we can then rewrite (6.10) in terms of the scaled variables as

Equation 6.23

$$\Lambda_2\left(b_t^j\right) = \log \frac{\sum_{(s',s)\in\mathcal{B}_t^{j+}} \tilde{\alpha}_{t-1}(s')\, P\left[\boldsymbol{b}_t(s', s)\right]\, \tilde{\beta}_t(s)}{\sum_{(s',s)\in\mathcal{B}_t^{j-}} \tilde{\alpha}_{t-1}(s')\, P\left[\boldsymbol{b}_t(s', s)\right]\, \tilde{\beta}_t(s)}.$$


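In code, (6.23) amounts to accumulating the products of scaled alpha, gamma, and scaled beta over the two transition sets, with membership in $\mathcal{B}_t^{j+}$ read off the stored code bits b(s', s); a sketch using the helpers above:

```python
import math

def code_bit_llr(trellis, gammas, alpha, beta, t, j):
    """A posteriori LLR of code bit b_t^j via (6.23), from the scaled
    alpha_tilde and beta_tilde; the constant (6.22) cancels in the ratio."""
    num = den = 0.0
    for (sp, s), (d, b) in trellis.items():
        term = alpha[t - 1][sp] * gammas[t - 1][(sp, s)] * beta[t][s]
        if b[j] == +1:        # (s', s) in B_t^{j+}
            num += term
        else:                 # (s', s) in B_t^{j-}
            den += term
    return math.log(num / den)
```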
We can also compute the a posteriori LLR of the information bits. Let $\mathcal{D}_t^{j+}$ be the set of state pairs (s', s) such that the jth bit of the information symbol d(s', s) is +1, and define $\mathcal{D}_t^{j-}$ similarly. Then we have

Equation 6.24

$$\Lambda_2\left(d_t^j\right) = \log \frac{\sum_{(s',s)\in\mathcal{D}_t^{j+}} \tilde{\alpha}_{t-1}(s')\, P\left[\boldsymbol{b}_t(s', s)\right]\, \tilde{\beta}_t(s)}{\sum_{(s',s)\in\mathcal{D}_t^{j-}} \tilde{\alpha}_{t-1}(s')\, P\left[\boldsymbol{b}_t(s', s)\right]\, \tilde{\beta}_t(s)}.$$


Note that the LLRs of the information bits need to be computed only at the last iteration. The information bit d_t^j is then decoded according to

Equation 6.25

$$\hat{d}_t^j = \operatorname{sign}\left[\Lambda_2\left(d_t^j\right)\right].$$
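The information-bit LLR (6.24) and the hard decision (6.25) follow the same pattern, partitioning transitions by the input bit d(s', s); the sketch below assumes the k = 1 toy trellis from earlier, where d is a single bit stored in {0, 1} with 0 mapped to +1:

```python
import math

def info_bit_llr(trellis, gammas, alpha, beta, t):
    """A posteriori LLR of the information bit d_t via (6.24), k = 1 case."""
    num = den = 0.0
    for (sp, s), (d, b) in trellis.items():
        term = alpha[t - 1][sp] * gammas[t - 1][(sp, s)] * beta[t][s]
        if d == 0:            # input bit +1: (s', s) in D_t^{+}
            num += term
        else:                 # input bit -1: (s', s) in D_t^{-}
            den += term
    return math.log(num / den)

def decide(llr):
    """Hard decision (6.25): the sign of the LLR."""
    return +1 if llr >= 0 else -1
```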


Finally, since the input to the MAP channel decoder consists of the LLRs of the code bits, {λ_1(b_t^j)}, as will be shown in the next section, the code bit distribution P[b_t^j(s', s)] can be expressed in terms of its LLR as [cf. (6.39)]

Equation 6.26

$$P\left[b_t^j(s', s)\right] = \frac{\exp\left[\lambda_1\left(b_t^j\right) b_t^j(s', s)\right]}{1 + \exp\left[\lambda_1\left(b_t^j\right) b_t^j(s', s)\right]} = \frac{1}{2}\left[1 + b_t^j(s', s) \tanh\!\left(\frac{\lambda_1\left(b_t^j\right)}{2}\right)\right].$$

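As a sketch, (6.26) is the usual logistic conversion from an LLR to bit probabilities; the tanh form below is algebraically equivalent and numerically convenient:

```python
import math

def bit_prob(llr, b):
    """P(b_t^j = b) for b in {+1, -1} given the a priori LLR lambda_1(b_t^j),
    per (6.26); equal to exp(llr * b) / (1 + exp(llr * b))."""
    return 0.5 * (1.0 + b * math.tanh(llr / 2.0))
```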

The following is a summary of the MAP decoding algorithm for convolutional codes; a minimal end-to-end code sketch follows the list.

Algorithm 6.1: [MAP decoding algorithm for convolutional codes]

  • Compute the code bit probabilities from the corresponding LLRs using (6.26).

  • Initialize the forward and backward recursions:

    $$\alpha_0(0) = 1, \quad \alpha_0(s) = 0 \ \text{for } s \neq 0; \qquad \beta_\tau(0) = 1, \quad \beta_\tau(s) = 0 \ \text{for } s \neq 0.$$

  • Compute the forward recursion using (6.8) and (6.11)–(6.16).

  • Compute the backward recursion using (6.9) and (6.19)–(6.20).

  • Compute the LLRs of the code bits and the information bits using (6.23) and (6.24).
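A minimal end-to-end sketch of Algorithm 6.1 in terms of the helpers defined above (all names, and the tail-handling convention, are assumptions of this illustration, not the text's):

```python
def map_decode(trellis, num_states, prior_llrs, tail=2):
    """Algorithm 6.1: MAP decoding of a terminated convolutional code from the
    a priori code-bit LLRs; prior_llrs[t-1][j] holds lambda_1(b_t^j), and the
    last `tail` trellis steps carry the known terminating inputs."""
    n = len(prior_llrs[0])
    # Step 1: code-bit probabilities from the LLRs via (6.26), then the
    # transition probabilities P[b_t(s', s)] as products of marginals (6.7).
    gammas = []
    for llrs in prior_llrs:
        g = {}
        for (sp, s), (d, b) in trellis.items():
            p = 1.0
            for j in range(n):
                p *= bit_prob(llrs[j], b[j])
            g[(sp, s)] = p
        gammas.append(g)
    # Steps 2-4: boundary conditions plus scaled forward/backward recursions.
    alpha, scale = forward_scaled(num_states, gammas)
    beta = backward_scaled(num_states, gammas, scale)
    # Step 5: LLRs of all code bits (6.23) and of the information bits (6.24),
    # the latter only for the T = tau - tail data-carrying steps.
    tau = len(gammas)
    code_llrs = [[code_bit_llr(trellis, gammas, alpha, beta, t, j)
                  for j in range(n)] for t in range(1, tau + 1)]
    info_llrs = [info_bit_llr(trellis, gammas, alpha, beta, t)
                 for t in range(1, tau - tail + 1)]
    return code_llrs, info_llrs

# Example with the toy (7, 5) trellis: tau = 10 steps, mild a priori LLRs.
# code_llrs, info_llrs = map_decode(TRELLIS, NUM_STATES, [[0.5, -0.3]] * 10)
```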


