The Laplace transform is a mathematical method of solving linear differential equations that has proved very useful in the fields of engineering and physics. This transform technique, as it's used today, originated from the work of the brilliant English physicist Oliver Heaviside.[*] The fundamental process of using the Laplace transform goes something like the following:
[*] Heaviside (1850–1925), who was interested in electrical phenomena, developed an efficient algebraic process for solving differential equations. He initially took a lot of heat from his contemporaries because they thought his work was not sufficiently justified from a mathematical standpoint. However, the later-discovered correspondence between Heaviside's methods and the rigorous operational calculus of the French mathematician Marquis Pierre Simon de Laplace (1749–1827) verified the validity of Heaviside's techniques.
Step 1. A time-domain differential equation is written that describes the input/output relationship of a physical system (and we want to find the output function that satisfies that equation with a given input).
Step 2. The differential equation is Laplace transformed, converting it to an algebraic equation.
Step 3. Standard algebraic techniques are used to determine the desired output function's equation in the Laplace domain.
Step 4. The desired Laplace output equation is then inverse Laplace transformed to yield the desired time-domain output function's equation.
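As a concrete illustration of the four steps, here's a short Python sketch for a hypothetical first-order system dy/dt + 2y(t) = x(t) driven by a unit-step input with y(0) = 0. The system and its coefficient values are my own, not from the text.

```python
import math

# Step 1: time-domain equation  dy/dt + 2*y(t) = x(t),  x(t) = 1 for t > 0.
# Step 2: Laplace transform it. With y(0) = 0, L{dy/dt} = s*Y(s) and
#         L{unit step} = 1/s, so (s + 2)*Y(s) = 1/s.
# Step 3: ordinary algebra:  Y(s) = 1/(s*(s + 2)) = (1/2)*(1/s - 1/(s + 2)).
# Step 4: inverse transform by table lookup:
def y(t):
    # y(t) = (1 - e^(-2t))/2 for t > 0
    return 0.5 * (1.0 - math.exp(-2.0 * t))

# Sanity check: y(t) really satisfies dy/dt + 2*y = 1 (central difference).
h = 1e-6
for t in (0.1, 0.5, 2.0):
    dy_dt = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dy_dt + 2 * y(t) - 1.0) < 1e-6
print("y(t) = (1 - e^(-2t))/2 satisfies the differential equation")
```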
This procedure, at first, seems cumbersome because it forces us to go the long way around instead of just solving a differential equation directly. The justification for using the Laplace transform is that, although solving differential equations by classical methods is a very powerful analysis technique for all but the most simple systems, it can be tedious and (for some of us) error prone. The reduced complexity of using algebra outweighs the extra effort needed to perform the required forward and inverse Laplace transformations. This is especially true now that tables of forward and inverse Laplace transforms exist for most of the commonly encountered time functions. Well-known properties of the Laplace transform also allow practitioners to decompose complicated time functions into combinations of simpler functions and then use the tables. (Tables of Laplace transforms allow us to translate quickly back and forth between a time function and its Laplace transform—analogous to, say, a German-English dictionary if we were studying the German language.[†]) Let's briefly look at a few of the more important characteristics of the Laplace transform that will prove useful as we make our way toward the discrete z-transform used to design and analyze IIR digital filters.
[†] Although tables of commonly encountered Laplace transforms are included in almost every system analysis textbook, very comprehensive tables are also available [1–3].
The Laplace transform of a continuous time-domain function f(t), where f(t) is defined only for positive time (t > 0), is expressed mathematically as

Equation 6-3

F(s) = ∫₀^∞ f(t)e^(–st) dt.
F(s) is called "the Laplace transform of f(t)," and the variable s is the complex number
Equation 6-4

s = σ + jω.
A more general expression for the Laplace transform, called the bilateral or two-sided transform, uses negative infinity (–∞) as the lower limit of integration. However, for the systems that we'll be interested in, where system conditions for negative time (t < 0) are not needed in our analysis, the one-sided Eq. (6-3) applies. Those systems, often referred to as causal systems, may have initial conditions at t = 0 that must be taken into account (velocity of a mass, charge on a capacitor, temperature of a body, etc.), but we don't need to know what the system was doing prior to t = 0.
In Eq. (6-4), σ is a real number and ω is frequency in radians/second. Because e^(–st) is dimensionless, the exponent term st must be dimensionless, so the Laplace variable s must have the dimension of 1/time, or frequency. That's why s is often called a complex frequency.
To put Eq. (6-3) into words, we can say that it requires us to multiply, point for point, the function f(t) by the complex function e^(–st) for a given value of s. (We'll soon see that using the function e^(–st) here is not accidental; e^(st) is the general form for the solution of linear differential equations.) After the point-for-point multiplications, we find the area under the curve of the function f(t)e^(–st) by summing all the products. That area, a complex number, represents the value of the Laplace transform for the particular value of s = σ + jω chosen for the original multiplications. If we were to go through this process for all values of s, we'd have a full description of F(s) for every value of s.
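The Laplace integral's multiply-and-sum recipe can be mimicked numerically. The sketch below is my own illustration, assuming the test function f(t) = e^(–t), whose transform is known to be 1/(s + 1); it approximates the integral with a midpoint Riemann sum at one particular value of s.

```python
import cmath
import math

def laplace_numeric(f, s, t_max=40.0, n=200_000):
    """Approximate F(s) = integral 0..inf of f(t)*e^(-s*t) dt with a
    midpoint Riemann sum: multiply point for point, then sum the areas."""
    dt = t_max / n
    total = 0j
    for k in range(n):
        t = (k + 0.5) * dt
        total += f(t) * cmath.exp(-s * t) * dt   # point-for-point product
    return total

f = lambda t: math.exp(-t)        # f(t) = e^(-t), defined for t > 0
s = 0.5 + 2.0j                    # one particular s = sigma + j*omega
approx = laplace_numeric(f, s)
exact = 1.0 / (s + 1.0)           # the known transform of e^(-t)
assert abs(approx - exact) < 1e-5
print("numeric F(s) matches 1/(s + 1) at s =", s)
```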
I like to think of the Laplace transform as a continuous function, where the complex value of that function for a particular value of s is a correlation of f(t) and a damped complex e^(–st) sinusoid whose frequency is ω and whose damping factor is σ. What do these complex sinusoids look like? Well, they are rotating phasors described by
Equation 6-5

e^(–st) = e^(–(σ + jω)t) = e^(–jωt)/e^(σt).
From our knowledge of complex numbers, we know that e^(–jωt) is a unity-magnitude phasor rotating clockwise around the origin of a complex plane at a frequency of ω radians per second. The denominator of Eq. (6-5) is a real number whose value is one at time t = 0. As t increases, the denominator e^(σt) gets larger (when σ is positive), and the complex e^(–st) phasor's magnitude gets smaller as the phasor rotates on the complex plane. The tip of that phasor traces out a curve spiraling in toward the origin of the complex plane. One way to visualize a complex sinusoid is to consider its real and imaginary parts individually. We do this by expressing the complex e^(–st) sinusoid from Eq. (6-5) in rectangular form as
Equation 6-5′

e^(–st) = e^(–σt)cos(ωt) – je^(–σt)sin(ωt).
Figure 6-4 shows the real parts (cosine) of several complex sinusoids with different frequencies and different damping factors. In Figure 6-4(a), the complex sinusoid's frequency is the arbitrary ω′, and the damping factor is the arbitrary σ′. So the real part of F(s), at s = σ′ + jω′, is equal to the correlation of f(t) and the wave in Figure 6-4(a). For different values of s, we'll correlate f(t) with different complex sinusoids as shown in Figure 6-4. (As we'll see, this correlation is very much like the correlation of f(t) with various sine and cosine waves when we were calculating the discrete Fourier transform.) Again, the real part of F(s), for a particular value of s, is the correlation of f(t) with a cosine wave of frequency ω and a damping factor of σ, and the imaginary part of F(s) is the correlation of f(t) with a sine wave of frequency ω and a damping factor of σ.
Figure 6-4. Real part (cosine) of various e^(–st) functions, where s = σ + jω, to be correlated with f(t).
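This correlation view can be checked numerically. With f(t) = e^(–t) (my own example, whose known transform is 1/(s + 1)), correlating f(t) against a damped cosine gives the real part of F(s), and correlating against a damped sine gives the negative of the imaginary part:

```python
import math

def correlate(f, g, t_max=30.0, n=100_000):
    # Midpoint-rule approximation of integral 0..t_max of f(t)*g(t) dt.
    dt = t_max / n
    return sum(f((k + 0.5) * dt) * g((k + 0.5) * dt) for k in range(n)) * dt

sigma, omega = 0.5, 2.0                    # one arbitrary s = sigma + j*omega
f = lambda t: math.exp(-t)                 # example time function
damped_cos = lambda t: math.exp(-sigma * t) * math.cos(omega * t)
damped_sin = lambda t: math.exp(-sigma * t) * math.sin(omega * t)

re_F = correlate(f, damped_cos)            # Re{F(s)}
im_F = -correlate(f, damped_sin)           # Im{F(s)}: note the minus sign,
                                           # since e^(-jwt) = cos(wt) - j*sin(wt)
exact = 1.0 / complex(sigma + 1.0, omega)  # known F(s) = 1/(s + 1)
print(re_F - exact.real, im_F - exact.imag)   # both differences are tiny
```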
Now, if we associate each of the different values of the complex s variable with a point on a complex plane, rightfully called the s-plane, we could plot the real part of the F(s) correlation as a surface above (or below) that s-plane and generate a second plot of the imaginary part of the F(s) correlation as a surface above (or below) the s-plane. We can't plot the full complex F(s) surface on paper because that would require four dimensions: s is complex, requiring two dimensions, and F(s) is itself complex and also requires two dimensions. What we can do, however, is graph the magnitude |F(s)| as a function of s, because this graph requires only three dimensions. Let's do that as we demonstrate this notion of an |F(s)| surface by illustrating the Laplace transform in a tangible way.
Say, for example, that we have the linear system shown in Figure 6-5. Also, let's assume that we can relate the x(t) input and the y(t) output of the linear time-invariant physical system in Figure 6-5 with the following messy linear constant-coefficient differential equation
Equation 6-6

a2·d²y(t)/dt² + a1·dy(t)/dt + a0·y(t) = b1·dx(t)/dt + b0·x(t).
Figure 6-5. System described by Eq. (6-6). The system's input and output are the continuous time functions x(t) and y(t), respectively.
We'll use the Laplace transform toward our goal of figuring out how the system will behave when various types of input functions are applied, i.e., what the y(t) output will be for any given x(t) input.
Let's slow down here and see exactly what Figure 6-5 and Eq. (6-6) are telling us. First, if the system is time invariant, then the an and bn coefficients in Eq. (6-6) are constant. They may be positive or negative, zero, real or complex, but they do not change with time. If the system is electrical, the coefficients might be related to capacitance, inductance, and resistance. If the system is mechanical with masses and springs, the coefficients could be related to mass, coefficient of damping, and coefficient of resilience. Then again, if the system is thermal with masses and insulators, the coefficients would be related to thermal capacity and thermal conductance. To keep this discussion general, though, we don't really care what the coefficients actually represent.
OK, Eq. (6-6) also indicates that, ignoring the coefficients for the moment, the sum of the y(t) output plus derivatives of that output is equal to the sum of the x(t) input plus the derivative of that input. Our problem is to determine exactly what input and output functions satisfy the elaborate relationship in Eq. (6-6). (For the stout-hearted, classical methods of solving differential equations could be used here, but the Laplace transform makes the problem much simpler for our purposes.) Thanks to Laplace, the complex exponential time function e^(st) is the one we'll use. It has the beautiful property that it can be differentiated any number of times without destroying its original form. That is,
Equation 6-7

d(e^(st))/dt = s·e^(st),  d²(e^(st))/dt² = s²·e^(st),  . . . ,  dⁿ(e^(st))/dtⁿ = sⁿ·e^(st).
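A quick numerical spot check of this differentiation property, with arbitrary values of s and t chosen by me for illustration:

```python
import cmath

s = -0.3 + 1.7j     # arbitrary complex frequency
t = 0.8             # arbitrary time
h = 1e-6            # small step for a central difference

f = lambda u: cmath.exp(s * u)
deriv = (f(t + h) - f(t - h)) / (2 * h)   # numerical d/dt of e^(st)
assert abs(deriv - s * f(t)) < 1e-6       # matches s * e^(st)
print("d/dt e^(st) = s*e^(st) confirmed numerically")
```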
If we let x(t) and y(t) be functions of e^(st), x(e^(st)) and y(e^(st)), and use the properties shown in Eq. (6-7), Eq. (6-6) becomes

a2·s²·y(e^(st)) + a1·s·y(e^(st)) + a0·y(e^(st)) = b1·s·x(e^(st)) + b0·x(e^(st)),

or

Equation 6-8

y(e^(st))·(a2s² + a1s + a0) = x(e^(st))·(b1s + b0).
Although it's simpler than Eq. (6-6), we can further simplify the relationship in the last line of Eq. (6-8) by considering the ratio of y(e^(st)) over x(e^(st)) to be the Laplace transfer function of our system in Figure 6-5. If we call that ratio of polynomials the transfer function H(s), then
Equation 6-9

H(s) = y(e^(st))/x(e^(st)) = (b1s + b0)/(a2s² + a1s + a0).
To indicate that the original x(t) and y(t) have the identical functional form of e^(st), we can follow the standard Laplace notation of capital letters and show the transfer function as
Equation 6-10

H(s) = Y(s)/X(s) = (b1s + b0)/(a2s² + a1s + a0),
where the output Y(s) is given by
Equation 6-11

Y(s) = H(s)·X(s).
Equation (6-11) leads us to redraw the original system diagram in a form that highlights the definition of the transfer function H(s), as shown in Figure 6-6.
Figure 6-6. Linear system described by Eqs. (6-10) and (6-11). The system's input is the Laplace function X(s), its output is the Laplace function Y(s), and the system transfer function is H(s).
The cautious reader may be wondering, "Is it really valid to use this Laplace analysis technique when it's strictly based on the system's x(t) input being some function of e^(st), or x(e^(st))?" The answer is that the Laplace analysis technique, based on the complex exponential x(e^(st)), is valid because all practical x(t) input functions can be represented with complex exponentials. For example, a single sinusoid can be expressed as the sum of two complex exponentials: cos(ωt) = (e^(jωt) + e^(–jωt))/2.
With that said, if we know a system's transfer function H(s), we can take the Laplace transform of any x(t) input to determine X(s), multiply that X(s) by H(s) to get Y(s), and then inverse Laplace transform Y(s) to yield the time-domain expression for the output y(t). In practical situations, however, we usually don't go through all those analytical steps because it's the system's transfer function H(s) in which we're most interested. Being able to express H(s) mathematically or graph the surface |H(s)| as a function of s will tell us the two most important properties we need to know about the system under analysis: Is the system stable, and if so, what is its frequency response?
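Here's that analytical pipeline worked for a hypothetical one-pole system of my own choosing, H(s) = 1/(s + 1), driven by a unit step: X(s) = 1/s, so Y(s) = 1/(s(s + 1)) = 1/s – 1/(s + 1), and a table lookup gives y(t) = 1 – e^(–t). The sketch checks that result against direct time-domain convolution with the impulse response h(t) = e^(–t).

```python
import math

def y_from_table(t):
    # Inverse transform of Y(s) = 1/(s*(s + 1)), found via partial fractions.
    return 1.0 - math.exp(-t)

def y_from_convolution(t, n=50_000):
    # y(t) = integral 0..t of h(tau)*x(t - tau) dtau with h(tau) = e^(-tau)
    # and x = unit step, evaluated with a midpoint Riemann sum.
    d = t / n
    return sum(math.exp(-(k + 0.5) * d) for k in range(n)) * d

# The Laplace-domain shortcut and brute-force convolution must agree.
for t in (0.5, 1.0, 3.0):
    assert abs(y_from_table(t) - y_from_convolution(t)) < 1e-6
print("Laplace-domain algebra agrees with time-domain convolution")
```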
"But wait a minute," you say. "Equations (6-10) and (6-11) indicate that we have to know the Y(s) output before we can determine H(s)!" Not really. All we really need to know is the time-domain differential equation, like that in Eq. (6-6). Next we take the Laplace transform of that differential equation and rearrange the terms to get the H(s) ratio in the form of Eq. (6-10). With practice, systems designers can look at a diagram (block, circuit, mechanical, whatever) of their system and promptly write the Laplace expression for H(s). Let's use the concept of the Laplace transfer function H(s) to determine the stability and frequency response of simple continuous systems.
6.2.1 Poles and Zeros on the s-Plane and Stability
One of the most important characteristics of any system involves the concept of stability. We can think of a system as stable if, given any bounded input, the output will always be bounded. This sounds like an easy condition to achieve because most systems we encounter in our daily lives are indeed stable. Nevertheless, we have all experienced instability in a system containing feedback. Recall the annoying howl when a public address system's microphone is placed too close to the loudspeaker. A sensational example of an unstable system occurred in western Washington when the first Tacoma Narrows Bridge began oscillating on the afternoon of November 7, 1940. Those oscillations, caused by 42 mph winds, grew in amplitude until the bridge destroyed itself. For IIR digital filters, with their built-in feedback, instability would result in a filter output that's not at all representative of the filter input; that is, our filter output samples would not be a filtered version of the input; they'd be some strange oscillating or pseudorandom values. That's a situation we'd like to avoid if we can, right? Let's see how.
We can determine a continuous system's stability by examining several different examples of H(s) transfer functions associated with linear time-invariant systems. Assume that we have a system whose Laplace transfer function is of the form of Eq. (6-10), where the coefficients are all real and the coefficients b1 and a2 are equal to zero. We'll call that Laplace transfer function H1(s), where
Equation 6-12

H1(s) = b0/(a1s + a0).
Notice that if s = –a0/a1, the denominator in Eq. (6-12) equals zero and H1(s) has an infinite magnitude. This s = –a0/a1 point on the s-plane is called a pole, and that pole's location is shown by the "x" in Figure 6-7(a). Notice that the pole is located exactly on the negative portion of the real σ axis. If the system described by H1(s) were at rest and we disturbed it with an impulselike x(t) input at time t = 0, its continuous time-domain y(t) output would be the damped exponential curve shown in Figure 6-7(b). We can see that H1(s) is stable because its y(t) output approaches zero as time passes. By the way, the distance of the pole from the σ = 0 axis, a0/a1 for our H1(s), gives the decay rate of the y(t) impulse response. To illustrate why the term pole is appropriate, Figure 6-8(b) depicts the three-dimensional |H1(s)| surface above the s-plane. Look at Figure 6-8(b) carefully and see how we've reoriented the s-plane axis. This new axis orientation allows us to see how the H1(s) system's frequency magnitude response can be determined from its three-dimensional s-plane surface. If we examine the |H1(s)| surface at σ = 0, we get the bold curve in Figure 6-8(b). That bold curve, the intersection of the vertical σ = 0 plane (the jω-axis plane) and the |H1(s)| surface, gives us the frequency magnitude response |H1(ω)| of the system—and that's one of the things we're after here. The bold |H1(ω)| curve in Figure 6-8(b) is shown in a more conventional way in Figure 6-8(c). Figures 6-8(b) and 6-8(c) highlight the very important property that the Laplace transform is a more general case of the Fourier transform, because if σ = 0, then s = jω. In this case, the |H1(s)| curve for σ = 0 above the s-plane becomes the |H1(ω)| curve above the jω axis in Figure 6-8(c).
Figure 6-7. Descriptions of H1(s): (a) pole located at s = σ + jω = –a0/a1 + j0 on the s-plane; (b) time-domain y(t) impulse response of the system.
Figure 6-8. Further depictions of H1(s): (a) pole located at s = –a0/a1 on the s-plane; (b) |H1(s)| surface; (c) curve showing the intersection of the |H1(s)| surface and the vertical σ = 0 plane. This is the conventional depiction of the |H1(ω)| frequency magnitude response.
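As a numerical sketch of H1(s), with coefficient values I made up, the code below locates the pole at s = –a0/a1 and reads the frequency magnitude response off the σ = 0 (jω) axis:

```python
b0, a1, a0 = 2.0, 1.0, 3.0           # arbitrary real coefficients

pole = -a0 / a1                       # pole on the negative real axis, s = -3
assert pole < 0                       # left half plane: a stable system

def H1_mag(omega):
    # |H1(j*omega)|: the transfer function evaluated along the jw axis,
    # i.e. the curve where the |H1(s)| surface meets the sigma = 0 plane.
    return abs(b0 / complex(a0, a1 * omega))

# The magnitude falls off as omega grows: a lowpass-like response.
assert H1_mag(0.0) > H1_mag(1.0) > H1_mag(10.0)
print("DC gain:", H1_mag(0.0))        # equals b0/a0
```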
Another common system transfer function leads to an impulse response that oscillates. Let's think about an alternate system whose Laplace transfer function is of the form of Eq. (6-10), where the coefficient b0 equals zero and the remaining coefficients lead to complex terms when the denominator polynomial is factored. We'll call this particular second-order transfer function H2(s), where
Equation 6-13

H2(s) = b1s/(a2s² + a1s + a0).
(By the way, when a transfer function has the Laplace variable s in both the numerator and denominator, the order of the overall function is defined by the largest exponent of s in the denominator polynomial. So our H2(s) is a second-order transfer function.) To keep the following equations from becoming too messy, let's factor its denominator and rewrite Eq. (6-13) as
Equation 6-14

H2(s) = As/[(s + p)(s + p*)],
where A = b1/a2, p = p_real + jp_imag, and p* = p_real – jp_imag (the complex conjugate of p). Notice that if s is equal to –p or –p*, one of the polynomial roots in the denominator of Eq. (6-14) will equal zero, and H2(s) will have an infinite magnitude. Those two complex poles, shown in Figure 6-9(a), are located off the negative portion of the real σ axis. If the H2(s) system were at rest and we disturbed it with an impulselike x(t) input at time t = 0, its continuous time-domain y(t) output would be the damped sinusoidal curve shown in Figure 6-9(b). We see that H2(s) is stable because its oscillating y(t) output, like a plucked guitar string, approaches zero as time increases. Again, the distance of the poles from the σ = 0 axis (–p_real) gives the decay rate of the sinusoidal y(t) impulse response. Likewise, the distance of the poles from the jω = 0 axis (±p_imag) gives the frequency of the sinusoidal y(t) impulse response. Notice something new in Figure 6-9(a). When s = 0, the numerator of Eq. (6-14) is zero, making the transfer function H2(s) equal to zero. Any value of s where H2(s) = 0 is sometimes of interest and is usually plotted on the s-plane as the little circle, called a "zero," shown in Figure 6-9(a). At this point we're not very interested in knowing exactly what p and p* are in terms of the coefficients in the denominator of Eq. (6-13). However, an energetic reader could determine the values of p and p* in terms of a0, a1, and a2 by using the following well-known quadratic factorization formula: given the second-order polynomial f(s) = as² + bs + c, f(s) can be factored as
Equation 6-15

f(s) = a[s + (b + √(b² – 4ac))/(2a)][s + (b – √(b² – 4ac))/(2a)].
Figure 6-9. Descriptions of H2(s): (a) poles located at s = –p_real ± jp_imag on the s-plane; (b) time-domain y(t) impulse response of the system.
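The quadratic factorization can be exercised numerically. With made-up coefficients chosen so the roots come out complex, the sketch below finds the conjugate pole pair and reads off the impulse response's decay rate and oscillation frequency:

```python
import cmath

a2, a1, a0 = 1.0, 2.0, 26.0          # arbitrary; chosen so the roots are complex

disc = cmath.sqrt(a1 * a1 - 4 * a2 * a0)
root1 = (-a1 + disc) / (2 * a2)      # s = -1 + 5j
root2 = (-a1 - disc) / (2 * a2)      # s = -1 - 5j

# Conjugate poles off the negative real axis: a stable, oscillatory system.
assert root1.real < 0 and root2.real < 0
decay_rate = -root1.real             # distance from the sigma = 0 axis
osc_freq = abs(root1.imag)           # distance from the jw = 0 axis, rad/s
print("decay rate:", decay_rate, "oscillation frequency:", osc_freq)
```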
Figure 6-10(b) illustrates the |H2(s)| surface above the s-plane. Again, the bold |H2(ω)| curve in Figure 6-10(b) is shown in the conventional way in Figure 6-10(c) to indicate the frequency magnitude response of the system described by Eq. (6-13). Although the three-dimensional surfaces in Figures 6-8(b) and 6-10(b) are informative, they're also unwieldy and unnecessary. We can determine a system's stability merely by looking at the locations of the poles on the two-dimensional s-plane.
Figure 6-10. Further depictions of H2(s): (a) pole and zero locations on the s-plane; (b) |H2(s)| surface; (c) |H2(ω)| frequency magnitude response curve.
To further illustrate the concept of system stability, Figure 6-11 shows the s-plane pole locations of several example Laplace transfer functions and their corresponding time-domain impulse responses. We recognize Figures 6-11(a) and 6-11(b), from our previous discussion, as indicative of stable systems. When disturbed from their at-rest condition they respond and, at some later time, return to that initial condition. The single pole location at s = 0 in Figure 6-11(c) is indicative of the 1/s transfer function of a single element of a linear system. In an electrical system, this 1/s transfer function could be a capacitor that was charged with an impulse of current when there's no discharge path in the circuit. For a mechanical system, Figure 6-11(c) would describe a kind of spring that's compressed with an impulse of force and, for some reason, remains under compression. Notice, in Figure 6-11(d), that if an H(s) transfer function has conjugate poles located exactly on the jω axis (σ = 0), the system will go into oscillation when disturbed from its initial condition. This situation, called conditional stability, happens to describe the intentional transfer function of electronic oscillators. Instability is indicated in Figures 6-11(e) and 6-11(f). Here, the poles lie to the right of the jω axis. When disturbed from their initial at-rest condition by an impulse input, their outputs grow without bound.[‡] See how the value of σ, the real part of s, for the pole locations is the key here? When σ < 0, the system is well behaved and stable; when σ = 0, the system is conditionally stable; and when σ > 0 the system is unstable. So we can say that when a pole is located on the right half of the s-plane, the system is unstable. We show this characteristic of linear continuous systems in Figure 6-12. Keep in mind that real-world systems often have more than two poles, and a system is only as stable as its least stable pole.
For a system to be stable, all of its transfer-function poles must lie on the left half of the s-plane.
[‡] Impulse response testing in a laboratory can be an important part of the system design process. The difficult part is generating a true impulselike input. If the system is electrical, for example, the x(t) impulse would be a very short duration voltage or current pulse, which is somewhat difficult to implement. If, however, the system were mechanical, a whack with a hammer would suffice as an x(t) impulse input. For digital systems, on the other hand, an impulse input is easy to generate: it's a single unity-valued sample preceded and followed by all zero-valued samples.
Figure 6-11. Various H(s) pole locations and their time-domain impulse responses: (a) single pole at σ < 0; (b) conjugate poles at σ < 0; (c) single pole located at s = 0; (d) conjugate poles located at σ = 0; (e) single pole at σ > 0; (f) conjugate poles at σ > 0.
Figure 6-12. The Laplace s-plane showing the regions of stability and instability for pole locations for linear continuous systems.
To consolidate what we've learned so far: H(s) is determined by writing a linear system's time-domain differential equation and taking the Laplace transform of that equation to obtain a Laplace expression in terms of X(s), Y(s), s, and the system's coefficients. Next, we rearrange the Laplace expression terms to get the H(s) ratio in the form of Eq. (6-10). (The really slick part is that we do not have to know what the time-domain x(t) input is to analyze a linear system!) We can get the expression for the continuous frequency response of a system just by substituting jω for s in the H(s) equation. To determine system stability, the denominator polynomial of H(s) is factored to find each of its roots. Each factor is set equal to zero and solved for s to find the location of the system poles on the s-plane. Any pole located to the right of the jω axis on the s-plane indicates an unstable system.
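That consolidation can be sketched as a small helper for the second-order H(s) form: factor the denominator with the quadratic formula, test the pole real parts for stability, and substitute s = jω for the frequency response. The coefficient values here are my own illustration.

```python
import cmath

def analyze(b1, b0, a2, a1, a0):
    # Poles: roots of the denominator a2*s^2 + a1*s + a0 (quadratic formula).
    d = cmath.sqrt(a1 * a1 - 4 * a2 * a0)
    poles = [(-a1 + d) / (2 * a2), (-a1 - d) / (2 * a2)]
    # Stable only if every pole lies strictly in the left half of the s-plane.
    stable = all(p.real < 0 for p in poles)
    # Frequency response: H(jw), i.e. H(s) with s replaced by j*omega.
    H = lambda w: (b1 * 1j * w + b0) / (a2 * (1j * w) ** 2 + a1 * 1j * w + a0)
    return poles, stable, H

poles, stable, H = analyze(b1=0.0, b0=1.0, a2=1.0, a1=2.0, a0=26.0)
assert stable                        # poles at -1 +/- 5j: left half plane
print(abs(H(5.0)))                   # response peaks near the pole frequency
```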
OK, returning to our original goal of understanding the z-transform, the process of analyzing IIR filter systems requires us to replace the Laplace transform with the z-transform and to replace the s-plane with a z-plane. Let's introduce the z-transform, determine what this new z-plane is, discuss the stability of IIR filters, and design and analyze a few simple IIR filters.