2.1 Introduction

   


This chapter provides a summary of useful concepts and formulae for the transmission-systems basics that have been applied to DSL systems. A more comprehensive treatment for DSL appears in a predecessor to this book [1], to which the reader can refer for further details and derivations. The intent is the successful use of these formulae and principles in DSL-system simulations and performance-feasibility studies.

Section 2.2 reviews the transmission methods used in the earliest DSL systems that were primarily baseband systems (i.e., systems with no analog POTS simultaneously preserved on the same line as the DSL), whereas Section 2.3 provides the same for the later DSL methods of QAM/CAP and multicarrier methods, particularly DMT. More details on all are found in [1], and the DMT method in particular will appear also in Chapters 3, 7, and 11 of this book. Section 2.4 concludes by summarizing some simple impairment models: background noise and NEXT and FEXT coupling models.

All transmission channels are fundamentally analog and thus may exhibit a variety of transmission effects. In particular, telephone lines are analog, and so all DSLs use some form of modulation. The basic purpose of modulation is to convert a stream of bits into equivalent analog signals that are suitable for the transmission line.

Figure 2.1 depicts a digital transmission system. The transmitter converts each successive group of b bits from a digital bit stream into one of 2^b data symbols, x_m, via a one-to-one mapping known as an encoder. Each group of b bits constitutes a message m, with M = 2^b possible values m = 0, ..., M - 1. The data symbols are N-dimensional (possibly complex) vectors, and the set of M vectors forms a signal constellation. Modulation is the process of converting each successive data symbol vector into a continuous-time analog signal {x_m(t)}, m = 0, ..., M - 1, that represents the message corresponding to each successive group of b bits. The message may vary from use to use of the digital transmission system, and thus the message index m and the corresponding symbol x_m are considered random, taking one of M possible values each time a message is transmitted. This chapter assumes that each message is equally likely to occur, with probability 1/M. The encoder may be sequential, in which case the mapping from messages to data symbols can vary with time as indexed by an encoder state, corresponding to ν bits of past state information (a function of previous input bit groups). There are 2^ν possible states when the encoder is sequential. When ν = 0, there is only one state and the encoder is memoryless.
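A minimal sketch of such a memoryless encoder may help: each group of b bits selects one of M = 2^b constellation points. The one-dimensional 4-PAM constellation and its bit-to-symbol mapping below are illustrative assumptions, not a mapping prescribed by the text.

```python
b = 2
M = 2 ** b                               # number of possible messages
constellation = [-3.0, -1.0, 1.0, 3.0]   # hypothetical 4-PAM levels, x_m for m = 0..M-1

def encode(bits):
    """Map each successive group of b bits to its data symbol x_m (one-to-one)."""
    symbols = []
    for i in range(0, len(bits), b):
        # interpret the group of b bits as the message index m
        m = int("".join(str(v) for v in bits[i:i + b]), 2)
        symbols.append(constellation[m])
    return symbols
```

Because the mapping is one-to-one, the receiver's decoder can invert it exactly once the symbol has been detected.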

Figure 2.1. Transmitter of a digital transmission system.

graphics/02fig01.gif

Linear modulation also uses a set of N orthogonal unit-energy basis functions, {φ_n(t)}, n = 1, ..., N, which are independent of the transmitted message m. The basis functions thus satisfy the orthonormality condition:

$$\int_{-\infty}^{\infty} \varphi_n(t)\,\varphi_k^*(t)\,dt \;=\; \delta_{nk} \;=\; \begin{cases} 1, & n = k \\ 0, & n \neq k. \end{cases}$$
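As a quick numerical sanity check of the orthonormality condition, the sinusoids φ_n(t) = √(2/T)·sin(2πnt/T) on [0, T] are one illustrative (assumed, not prescribed) choice of orthonormal basis:

```python
import math

T = 1.0  # symbol period (illustrative)

def phi(n, t):
    """An assumed orthonormal basis on [0, T]: sqrt(2/T) * sin(2*pi*n*t/T)."""
    return math.sqrt(2.0 / T) * math.sin(2.0 * math.pi * n * t / T)

def inner(n, k, steps=100000):
    """Riemann-sum approximation of integral_0^T phi_n(t) phi_k(t) dt."""
    dt = T / steps
    return sum(phi(n, i * dt) * phi(k, i * dt) for i in range(steps)) * dt
```

Numerically, inner(n, n) comes out near 1 and inner(n, k) near 0 for n ≠ k, matching the Kronecker delta above.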


The n-th basis function corresponds to the signal waveform component produced by the n-th element of the symbol x_m. [1] Different line codes are determined by the choice of basis functions and by the choice of signal-constellation symbol vectors, x_m, m = 0, ..., M - 1. Figure 2.2 depicts the function of linear modulation: for each symbol period of T seconds, the modulator accepts the corresponding data symbol vector elements, x_m1, ..., x_mN, and multiplies each by its corresponding basis function, φ_1(t), ..., φ_N(t), respectively, before summing all to form the modulated waveform x_m(t). This waveform is then input to the channel.

[1] Complex basis functions occur only in the mathematically abstract case of baseband-equivalent channels, as in Section 2.3.5, and a superscript of * means complex conjugate (and also transpose when a vector).

Figure 2.2. Linear modulator.

graphics/02fig02.gif
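The multiply-and-sum operation of Figure 2.2 can be sketched directly; the sinusoidal basis functions here are the same illustrative assumption used above, not the only possible choice:

```python
import math

T = 1.0   # symbol period (illustrative)
N = 2     # number of dimensions (illustrative)

def phi(n, t):
    """Assumed orthonormal basis functions, n = 1..N."""
    return math.sqrt(2.0 / T) * math.sin(2.0 * math.pi * n * t / T)

def modulate(x_m, t):
    """Linear modulation: x_m(t) = sum_n x_mn * phi_n(t), for 0 <= t < T."""
    return sum(x_m[n - 1] * phi(n, t) for n in range(1, N + 1))
```

For example, the symbol vector [1, 0] produces the waveform φ_1(t) alone, since only the first element scales a basis function.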

The average energy, E_x, of the transmitted signal can be computed as the average integrated squared value of x_m(t) over all the possible signals,

$$E_x = \frac{1}{M}\sum_{m=0}^{M-1}\int_{0}^{T} \left| x_m(t) \right|^2 dt,$$


or more easily by finding the average squared length of the data symbol vectors,

$$E_x = \frac{1}{M}\sum_{m=0}^{M-1} \left\| x_m \right\|^2 .$$


The digital power of the transmitted signal is then S_x = E_x/T. The analog power, P_x, is the digital power at the source driver output divided by the input impedance of the channel when the line and source impedances are real and matched. Generally, analog power is more difficult to calculate than digital power. Transmission analysts usually absorb the gain constants for a specific analog driver circuit into the definitions of the signal-constellation points or symbol vector values x_m and the normalization of the basis functions. The digital power is then exactly equal to the analog power, effectively allowing the line and analog effects to be viewed as a 1-ohm resistor.
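The energy and digital-power computations above reduce to a short calculation over the constellation; the 4-PAM points below are the same illustrative assumption used earlier:

```python
T = 1.0  # symbol period (illustrative)
constellation = [[-3.0], [-1.0], [1.0], [3.0]]   # hypothetical 4-PAM, N = 1

def average_energy(points):
    """E_x = (1/M) * sum_m ||x_m||^2, the average squared symbol-vector length."""
    M = len(points)
    return sum(sum(c * c for c in x) for x in points) / M

E_x = average_energy(constellation)   # (9 + 1 + 1 + 9) / 4 = 5.0
S_x = E_x / T                         # digital power
```

With the 1-ohm normalization described above, S_x also equals the analog power of the transmitted signal.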

The channel in Figure 2.3 consists of two potential distortion sources: bandlimited filtering of the transmitted signals through the filter with Fourier transform H(f), and additive Gaussian noise (unless specifically discussed otherwise) with zero mean and power spectral density S_n(f). The designer should absorb the effects of spectrally shaped noise into an appropriately altered channel filter (for instance, the noise-whitened equivalent H(f)·σ/√(S_n(f))); it then suffices to investigate only the case of equivalent white noise, whose power spectral density is a constant, σ².

Figure 2.3. Bandlimited channel with Gaussian noise.

graphics/02fig03.gif

2.1.1 The Additive White Gaussian Noise (AWGN) Channel

The additive white Gaussian noise (AWGN) channel is the most heavily studied in digital transmission. This channel simply models the transmitted signal as being disturbed by some additive noise. It has H(f) = 1, which means there is no bandlimited filtering in the channel (clearly an idealization). If the channel is distortionless, then H(f) = 1 and σ² = 0. On a distortionless channel, the receiver can recover the original data symbol by filtering the channel output y(t) = x(t) with a bank (set) of N parallel matched filters with impulse responses φ_n(T - t), n = 1, ..., N, and by sampling these filters' outputs at time t = T, as shown in Figure 2.4. This recovery of the data symbol vector is called demodulation. A bidirectional digital transmission apparatus that implements the functions of modulation and demodulation is often more succinctly called a modem. The reversal of the one-to-one encoder mapping on the demodulator output vector is called decoding. With nonzero channel noise, the demodulator output vector y is not necessarily equal to the modulator input x. The process of deciding which data symbol is closest to y is known as detection. When the noise is white Gaussian, the demodulator shown in Figure 2.4 is optimum. The optimum detector selects x̂ as the symbol vector value x_m closest to y in terms of the vector distance/length,

$$\hat{x} = \arg\min_{m = 0, \ldots, M-1} \left\| y - x_m \right\| .$$


Figure 2.4. Demodulation, detection, and decoding.

graphics/02fig04.gif
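The minimum-distance detection step can be sketched as a nearest-neighbor search over the constellation; the 4-PAM points remain an illustrative assumption:

```python
import math

constellation = [[-3.0], [-1.0], [1.0], [3.0]]   # hypothetical 4-PAM, N = 1

def detect(y):
    """ML detection on the AWGN channel: return the index m and symbol x_m
    minimizing the Euclidean distance ||y - x_m||."""
    def dist(x):
        return math.sqrt(sum((yi - xi) ** 2 for yi, xi in zip(y, x)))
    m = min(range(len(constellation)), key=lambda i: dist(constellation[i]))
    return m, constellation[m]
```

For example, a noisy demodulator output y = [0.8] is closest to the point 1.0, so the detector declares that symbol (and its message index).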

Such a detector is known as a maximum-likelihood detector, and its probability of an erroneous decision about x (and thus about the corresponding group of b bits) is minimum. This type of detector, known as a symbol-by-symbol detector, is optimum only when the noise is white Gaussian and the channel has very little bandlimiting (essentially infinite bandwidth). Each matched-filter output has noise samples independent of the other matched-filter output samples, and all have mean-square noise sample value σ². Thus the SNR is

$$\mathrm{SNR} = \frac{E_x / N}{\sigma^2},$$

the ratio of average signal energy per dimension to noise energy per dimension.


Detector implementation usually defines regions of values for y that map through the ML detector into specific input values. These regions are often called decision regions.

An error occurs when x̂ ≠ x; that is, y is closer to a different symbol vector than to the correct symbol vector. An error is thus caused by noise so large that y lies in the decision region for a point x_j, j ≠ m, that is not equal to the transmitted symbol. The probability of such an error on the AWGN channel is less than or equal to the probability that the noise is greater than half the distance between the closest two signal-constellation points. This minimum distance between two constellation points, d_min, is easily computed as

$$d_{\min} = \min_{i \neq j} \left\| x_i - x_j \right\| .$$


Each symbol vector x_m in a constellation has a certain number of nearest neighbors, N_m, at this minimum distance. The average number of nearest neighbors is

$$N_e = \frac{1}{M}\sum_{m=0}^{M-1} N_m,$$


which essentially counts the number of most likely ways that an error can occur. So, the probability of error is often accurately approximated by

$$P_e \approx N_e \cdot Q\!\left(\frac{d_{\min}}{2\sigma}\right),$$


where the Q-function is commonly used by DSL engineers. The quantity Q(x) is the probability that a unit-variance, zero-mean Gaussian random variable exceeds the value of its argument, x:

$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty} e^{-u^2/2}\, du .$$


The Q-function must be evaluated by numerical integration methods, but Figure 2.5 plots the value of the Q-function versus its argument in decibels (20 log10(x)). For the physicist at heart, Q(x) = ½ erfc(x/√2). For more error measures, see [1]; basically, P_e is of the same order of magnitude as all the others.
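In practice the erfc identity above makes the Q-function a one-liner, and the error-probability approximation follows directly; the d_min, N_e, and σ values below are for the illustrative 4-PAM constellation (end points have one nearest neighbor, inner points two, so N_e = (1 + 2 + 2 + 1)/4 = 1.5):

```python
import math

def Q(x):
    """Tail probability of a unit-variance, zero-mean Gaussian: Q(x) = erfc(x/sqrt(2))/2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# P_e ~= N_e * Q(d_min / (2*sigma)) for the hypothetical 4-PAM constellation
d_min = 2.0      # minimum distance between the assumed 4-PAM points
N_e = 1.5        # average nearest-neighbor count for those points
sigma = 0.5      # assumed per-dimension noise standard deviation
P_e = N_e * Q(d_min / (2.0 * sigma))
```

The same two lines for Q(x) serve for any constellation once d_min and N_e are computed.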

Figure 2.5. Q function versus SNR.

graphics/02fig05.gif

2.1.2 Margin, Gap, and Capacity

It is desirable to characterize a transmission method and an associated transmission channel simply. Margin, gap, and capacity are related concepts that allow such a simple characterization. Many commonly used line codes are characterized by a signal-to-noise-ratio gap, or just gap. The gap, G = G(P_e, C), is a function of a chosen probability of symbol error, P_e, and the line code, C. This gap measures the efficiency of the transmission method with respect to the best possible performance on an additive white Gaussian noise channel, and it is often constant over a wide range of b (bits/symbol) that may be transmitted by the particular type of line code. Indeed, most line codes are quantified in terms of the achievable bit rate (at a given P_e) according to the following formula:

$$b = \log_2\!\left(1 + \frac{\mathrm{SNR}}{G}\right) \quad \text{bits per symbol, so that } R = \frac{b}{T}.$$


Thus, to compute the data rate of a line code characterized by gap G, the designer need only know the gap and the SNR on the AWGN channel. An experienced DSL engineer usually knows the gaps for various line codes and can rapidly compute achievable data rates mentally. The SNR may be higher for better transmission modulation, but the gap is a function only of the encoder in Figure 2.1.
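The gap calculation is simple enough to sketch; the 9.8 dB figure used in the usage note is the commonly quoted gap for uncoded QAM at P_e ≈ 10^-7, mentioned here as a typical assumption rather than a value taken from this text:

```python
import math

def bits_per_symbol(snr_db, gap_db):
    """Achievable bits/symbol from the gap approximation b = log2(1 + SNR/G),
    with SNR and gap supplied in dB and converted to linear ratios."""
    snr = 10.0 ** (snr_db / 10.0)
    gap = 10.0 ** (gap_db / 10.0)
    return math.log2(1.0 + snr / gap)
```

For example, bits_per_symbol(27.7, 9.8) evaluates to roughly 6 bits/symbol, and setting the gap to 0 dB recovers the capacity formula.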

An optimum line code with a gap of G = 1 (0 dB) achieves a maximum data rate known as the channel capacity. Such an optimum code necessarily requires infinite complexity and infinite encoding/decoding delay. However, it has become practical at DSL speeds to design coding methods for which the gap is as low as 1–2 dB. Combining such codes with DMT makes it possible to approach capacity in DSL systems.

Often, transmission systems are designed conservatively to ensure that no more than a prescribed probability of error occurs. The margin of a design at a given performance level is the amount of additional signal-to-noise ratio in the design in excess of the minimum required for a given code with gap G. The margin can be computed according to

$$\gamma_{\mathrm{margin}} = \frac{\mathrm{SNR}}{G \cdot \left(2^{b} - 1\right)},$$

usually expressed in decibels.
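A short sketch of the margin computation, under the same gap approximation as above (all quantities in dB except the bit loading b):

```python
import math

def margin_db(snr_db, gap_db, b):
    """Margin in dB: excess SNR beyond what a code with gap G needs to carry
    b bits/symbol, margin = SNR / (G * (2**b - 1))."""
    snr = 10.0 ** (snr_db / 10.0)
    gap = 10.0 ** (gap_db / 10.0)
    return 10.0 * math.log10(snr / (gap * (2.0 ** b - 1.0)))
```

When b is set exactly to the achievable loading log2(1 + SNR/G), the margin is 0 dB; loading fewer bits than the channel supports yields a positive margin.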



   


DSL Advances
ISBN: 0130938106
Year: 2002
Pages: 154
