Article first published in Electronic Design Magazine, January 1997.

Have you noticed what's happening in the world of clock specifications? It used to be that the only things that mattered in a clock specification were the frequency and the duty cycle. Between these two specifications, vendors were really just setting limits on the minimum time high and the minimum time low each period. A few complications arose with the advent of DRAM technology, like the need for maximum bounds on the clock period, but that was about it. Recently the whole clock scenario has undergone massive change, due mostly to the widespread use of PLL-based clock recovery schemes in serial data communications equipment, PLL-based clock multipliers, and PLL-based clock regenerators. The basic premise of a PLL is that it carefully adjusts its own clock, called the local oscillator, to bring it into precise alignment with some external signal, usually called the reference clock. The PLL concept was originally developed for use in radio and later adapted for use in serial data communications. In the serial data communications application, the reference clock is often embedded, sometimes in very subtle ways, in a stream of data bits. It is the job of the PLL in the clock recovery subsystem to align its local oscillator with the reference clock information embedded in the data stream. Once properly aligned, the local oscillator can be used to clock bits out of the data stream, sampling each data baud right in the center, at the point of maximum noise immunity. In a clock recovery application, any imperfections in the transmit clock used to construct the data stream may compromise the ability of the PLL to properly align its local oscillator. Improper alignment results in bit errors. The various possible imperfections in the transmit clock are sometimes classified as frequency offsets, wander, and jitter.
The term frequency offset refers to any long-term deviation between the actual transmitted clock frequency and the ideal. For example, crystal-controlled transmission systems can be expected to attain frequency offsets as low as a few hundred parts per million. This sort of specification is measured with a frequency counter, averaging all clock pulses over a period of perhaps many seconds. A PLL-based clock recovery subsystem is designed to accurately lock in to any reference signal within the permitted frequency offsets. The frequency offset specification often has more to do with whether a PLL will lock in than with the quality of clock recovery once lock-in has occurred. Clock wander refers to the tendency of a clock reference to exhibit short-term frequency variations. A PLL is designed to track the short-term wander, provided that it does not slew too fast or wander too far afield. The permitted amount of wander, the rate at which a signal may wander up and down across the permitted frequency range, and the slew rate of the wander are often key components of a good wander specification. Jitter refers to the fastest variations in clock frequency: variations too fast to expect a PLL to track. Because a PLL can't track jitter, jitter always directly affects the accuracy of the timing relation between the reference clock and the local oscillator. In a data communications application, excessive jitter causes bit errors. Okay, so clock purity is important in data communications applications; we all knew that. But what does clock purity have to do with plain old digital design? Plenty, as we will see, because the same PLL-based clock recovery technology is being widely used to generate multihundred-megahertz, very low-skew processor clocks in the latest generation of clock-generator chips from AMCC, Chrontel, PLX, Quality Semiconductor, Triquint, and many others.
These new clock generators are flexible, fast, and packed with features. Most incorporate three basic ideas: a reference clock, a PLL clock-multiplication circuit, and a means of maintaining very low skew among multiple clock outputs. In a typical clock multiplier application, the reference clock is often sourced at about 10 MHz from a traditional crystal oscillator. Ten MHz is a very comfortable range for crystals, and it's a good bet you already have one in your system. To multiply the clock, it is run into a PLL-based clock-multiplication circuit. In a multiply-by-ten circuit, for example, the PLL aligns every tenth edge of the local oscillator to the reference clock, thus generating a 100-MHz output. PLL technology can also be used to create zero-delay clock buffers, automatically adaptive skew-correction circuits, and other neat features. The combination of PLL, output drivers, and skew-correction circuitry is fabricated as a single chip. What can go wrong? Plenty. Suppose we are feeding rotten power to the crystal source (maybe it has 100-kHz switching noise on it from the power system). If the crystal output violates the offset, wander, or jitter tolerance of the PLL circuit, the 100-MHz output goes nuts. It may fail to lock, drifting to one end or the other of its range; it may flagellate up and down; or, depending on the PLL architecture, it may detect an absence-of-lock condition and just shut off. What if the clock multiplier is built inside your processor (as with an Intel Pentium processor)? Then the quality of the incoming clock has everything to do with the quality of the resulting system. If you are using a clock multiplier or a PLL-based clock regenerator, make sure to comply with the specifications for offset, wander, and jitter on the reference clock input.
If you have the specifications, test them; if you don't have the specifications, get them; and if your vendor won't fork them over, think carefully about the consequences before you move ahead with your system design. 
POINT TO REMEMBER
12.11.1 When Clock Jitter Matters
Clock jitter comes into play whenever you transfer data between synchronous domains that are controlled by independent clocks. At the boundary between the two domains there will inevitably occur at least one synchronizing register that accepts data from one domain yet is clocked by the other. If the relative clock jitter between the two domains is too great, it will violate the timing margins on the synchronizing register.
12.11.1.1 Clock Jitter Rarely Matters within the Boundaries of a Synchronous State Machine
In a simple, synchronous state machine with only one clock, what matters most is the duration of each individual clock period. An adequate measure of jitter in such a system would be a histogram of the clock intervals. A timing interval analyzer is an appropriate instrument for producing such a histogram. Some oscilloscopes can be configured to produce a clock-interval histogram.
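The measurement above is easy to sketch in software. The following Python fragment, a hypothetical illustration rather than the output of any particular instrument, simulates a clock with Gaussian period jitter and accumulates the same kind of clock-interval histogram a timing interval analyzer would display (the 10-ns nominal period and 20-ps RMS jitter are assumed values):

```python
import random

def clock_edges(n_periods, nominal=10e-9, jitter_rms=20e-12, seed=1):
    """Generate edge timestamps for a clock with Gaussian period jitter."""
    random.seed(seed)
    t, edges = 0.0, [0.0]
    for _ in range(n_periods):
        t += random.gauss(nominal, jitter_rms)
        edges.append(t)
    return edges

edges = clock_edges(100_000)
periods = [b - a for a, b in zip(edges, edges[1:])]

# Crude histogram of clock intervals: 1-ps bins spanning +/-200 ps
# around the 10-ns nominal period
lo, hi, bin_w = 9.8e-9, 10.2e-9, 1e-12
bins = [0] * int(round((hi - lo) / bin_w))
for p in periods:
    i = int((p - lo) / bin_w)
    if 0 <= i < len(bins):
        bins[i] += 1

mean = sum(periods) / len(periods)
print(f"mean {mean*1e12:.1f} ps, min {min(periods)*1e12:.1f} ps, "
      f"max {max(periods)*1e12:.1f} ps")
```

A real instrument plots `bins` directly; the spread of the histogram is the period jitter.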
Other than the clock interval being too short (or, in machines that use poor digital design practices, too long), no particular pattern of successive long and short intervals is any more damaging to ordinary synchronous logic than any other pattern.
Such is not the case when considering PLL-based architectures.
12.11.1.2 Clock Jitter Propagation
To understand the effect of jitter on a PLL (phase-locked loop), you must first understand three general properties shared by all PLL circuits: the tracking range, the filtering range, and the implications of resonance within the PLL feedback control system. To explain these three concepts I'm going to introduce an analogy to an integrating control system with which you are probably already very familiar: your car (Figure 12.53).
Figure 12.53. The state of each car along the roadway is described by its lateral position y(t) and angle of travel.
The steering wheel, through a complicated system of linkages and mechanical actions, controls the angle of travel of your vehicle. If you steer straight down the roadway, your lateral position doesn't change. If you steer somewhat to the left and keep moving at the same speed, the car moves linearly to the left (up in the picture) towards increasing values of y. Mathematically speaking, your lateral position y(t) along the roadway at any moment is the integral of your direction of travel. If this isn't clear to you, don't worry too much about the mathematics: all you need to know is that there is a complicated and time-delayed relation between how you handle the wheel and where your car goes. [121]
[121] Those steeped in the art of control system design will recognize that the steering-wheel input determines the rate of change of the angle of travel, so that the entire relation between steering-wheel input and lateral position is that of a double integral. It is the existence of this double integration, plus a little bit of delay in your brain, that opens up the possibility of resonance.
Now let's play a high-performance racing game. Imagine you are drafting at 100 mph just inches behind the next driver on a long, straight section of interstate highway. It's your job to follow (track) the movements of the other vehicle as precisely as possible. The other driver is turning his wheel this way and that, trying to throw you off his tail.
If your opponent moves his wheel gradually, you have no difficulty tracking his movements. You see and respond to the graceful movements of his vehicle and have no difficulty following where he's going. This is your tracking behavior.
If your opponent grabs his wheel and violently shakes it, without changing the overall average direction of his vehicle, it makes almost no difference to your strategy. His car may vibrate terribly, but as long as you follow his average direction, you'll still probably be close enough to draft effectively. This is your filtering behavior. You don't even try to duplicate the shaking motion; you just filter it out.
Figure 12.54 decomposes your opponent's trajectory into its high- and low-frequency components. You track the low-frequency part of his motions. These are the long, slow, sweeping turns. You ignore his high-frequency behavior (the rapid shaking).
Figure 12.54. A complete trajectory is decomposed into a combination of low-frequency and high-frequency movements.
Let's chart the frequency response of your steering system. To do this, have your opponent first begin moving his vehicle back and forth across the road in a slow, undulating motion y1(t) = a1·sin(ωt). Record the frequency ω of his undulations, the amplitude a1 of his undulations, and the amplitude a2 of your response. As your opponent slowly increases his rate of undulation from slow to very, very rapid, make a chart showing the system gain a2/a1 versus frequency.
At frequencies within your tracking range, you expect the amplitudes to match perfectly, so the gain is flat (unity gain) in this area. At frequencies within your filtering range, the gain should descend rapidly to zero, because in that area you don't respond. The interesting part happens at the boundary between these two ranges. Most drivers, as the lead car's undulations approach some critical rate, develop acute difficulties. Their response may lag significantly behind the motions of the lead car, and in their anxious attempts to make up for this delay they will overshoot the mark at the apogee of each excursion. As a result, the frequency-response chart exhibits a gain greater than unity at some particular frequencies. Severe overshoot appears as a large resonant peak in the frequency-response diagram. A system lacking any resonant peak is said to be well-damped.
A mild resonance at the tracking boundary can in some cases help minimize the average tracking error. The practice of causing a mild resonance at the crossover frequency is called PLL peaking . A peaking feature would be a good thing if yours is the only car in the experiment, but any sort of resonance, even a tiny one, spells disaster for a highly cascaded system.
For example, imagine a long chain of N cars drafting each other on the highway. Suppose the first car commences gyrations having a peak-to-peak amplitude of 1 cm precisely at the resonant frequency. If the overshoot of each car at resonance amounts to 10% (a gain of 1.1 at resonance), the gyrating amplitude of car number 2 will be 1.1 cm, car number 3 will be 1.21 cm, and so on, the amplitude growing by a factor of 1.1 with each successive car. Fifty cars down the line the peak-to-peak amplitude works out to 1.1^50, or about 117 cm (if they don't careen off the road).
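The compounding in this example is easy to check. A minimal sketch of the arithmetic, assuming a uniform 10% resonant overshoot per car:

```python
gain = 1.1   # 10% overshoot at resonance, per car in the chain
a0 = 1.0     # lead car's gyration, cm peak-to-peak

# Each following car multiplies the amplitude by the resonant gain,
# so fifty cars down the line the swing has grown by a factor of 1.1**50.
a50 = a0 * gain**50
print(f"{a50:.0f} cm")  # about 117 cm
```

Three more cars would push the amplitude past 1.5 m, which is why even a tiny resonance is intolerable in a long chain.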
Chaining PLL circuits exponentially exacerbates the effect of resonance. A PLL designed for a chained application must be well damped (no resonance) at all frequencies.
In this analogy please note that you can measure the system gain either by looking at the ratio of amplitudes of the lateral positions of the cars or, alternately, by looking at the ratio of amplitudes of the steering-wheel inputs. Both measurements return precisely the same frequency-response graph. This works (for identical vehicles) assuming that for each car i, at each frequency, the relation between steering input s_i and lateral response a_i is the same.
Equation 12.9

a2/a1 = s2/s1

where

a1 and a2 are the amplitudes of the lateral-position undulations of cars 1 and 2 respectively, and

s1 and s2 are the amplitudes of the steering-wheel inputs required in cars 1 and 2 respectively to attain the lateral-position amplitudes a1 and a2.
This principle of similarity extends to measurements made of any matching quantities within the steering control system: steering-wheel inputs, hydraulic-fluid pressures, tie-rod displacements, wheel angles, vehicle angles of travel, or vehicle lateral positions on the roadway. In a PLL the chart of tracking gain versus frequency is called the jitter transfer function.
Before I start to sound too much like Click and Clack, the Tappet brothers, [122] I'd better tie this analogy back to PLL design. Figures 12.55 and 12.56 illustrate the analogous relation between steering systems and clock recovery systems. In Figure 12.55 the angle of travel controls the lateral position of the car with an integrating action. Your eyes compare the position of the lead car with your own, and your brain determines how to best steer the vehicle.
[122] A car-repair radio show popular in the United States.
Figure 12.55. The racing game is described as a linear system.
Figure 12.56. A PLL may be described as a linear system.
Figure 12.56 illustrates the analogous control system used in a simple frequency-tracking PLL. In this case it is the relative phase of the two input signals that the PLL is designed to control. In the top half of the diagram, the relation between the frequency of oscillation and the phase is shown as an integral. This is derived from the equation for the oscillator, y(t) = a·sin(ωt), where if the frequency input ω is held constant, the phase ωt grows linearly without bound. This type of integrating relation holds between the frequency control input and the output phase of any VCO.
A PLL exhibits many characteristics similar to the highway racing game. It has a tracking range and a filtering range. At the boundary between the two ranges, the PLL control loop may resonate.
What's confusing about PLL terminology is that the main variable of interest is itself a frequency (the reference oscillator frequency), so when analyzing the circuit you have to contemplate the frequency of the variations in the reference oscillator frequency. In physical terms, if you imagine the reference input being FM-modulated, any FM-modulation waveform that occurs at a frequency below the tracking bandwidth is tracked, provided you don't exceed the maximum slew-rate specification for the PLL.
The maximum slew rate is the maximum permitted rate of change of the VCO frequency. It is limited by the physical implementation of the VCO circuit, loop filter, and phase detector.
FM-modulation of the reference input at any frequency above the tracking bandwidth is filtered out. High-frequency modulation, because it occurs in the reference signal but not in the reconstituted VCO output, constitutes a source of phase error. In a digital receiver, if the phase error exceeds ±1/2 of a data interval, the receiver cannot properly decode the data. [123]
[123] In a practical system the limit is usually much less than ±1/2 of a bit interval: more like ±10 or ±20 percent.
FM-modulation applied at a frequency in the transition band between the tracking and filtering ranges may result in control-loop resonance, exacerbating the degree of phase error at that frequency, particularly in chained systems.
Many variations of the basic PLL architecture are possible, including types that compare the internal VCO against multiples or submultiples of the reference clock or against various features extracted from data waveforms (see [94], [95]).
POINT TO REMEMBER
12.11.1.3 Variance of the Tracking Error
The tracking behavior of a PLL is equivalent to a linear filtering operation. The PLL acts like a low-pass filter. For example, in the racing game you track the low-frequency part of your opponent's motions. These are the long, slow, sweeping turns. You ignore the high-frequency behavior (the rapid shaking).
In the frequency domain, let the low-pass filter F(ω) represent your tracking abilities, and let the function Y(ω) represent the Fourier transform of your opponent's trajectory. The Fourier transform of your trajectory Z(ω) is therefore a low-pass-filtered version of your opponent's trajectory:
Equation 12.10

Z(ω) = F(ω)·Y(ω)

where

filter F(ω) represents your tracking abilities,

function Y(ω) represents the Fourier transform of your opponent's trajectory, and

function Z(ω) represents the Fourier transform of your trajectory.
The tracking error E(ω) is the difference between your opponent's motion and your own.

Equation 12.11

E(ω) = Y(ω) − Z(ω)

where

function E(ω) represents the Fourier transform of the tracking error.
The tracking error may be expressed differently, as a filter [1 − F(ω)] applied to your opponent's trajectory.
Equation 12.12

E(ω) = [1 − F(ω)]·Y(ω)

where

filter [1 − F(ω)] represents the tracking-error filter function,

function Y(ω) represents the Fourier transform of your opponent's trajectory, and

function E(ω) represents the Fourier transform of your tracking error.
If the filter F(ω) is a low-pass filter, then the filter [1 − F(ω)] must be a high-pass filter, in which case you may recognize that the tracking error is nothing more than the high-frequency part of your opponent's trajectory. It is a theorem of control systems analysis, therefore, that

The variance of the tracking error equals the variance of that part of your opponent's signal that falls above the tracking range of your filter.
Applied to a PLL circuit, this theorem relates the power spectrum |Y(ω)|² of the reference phase jitter, the gain of the tracking filter F(ω), and the variance of the tracking error:
Equation 12.13

σ² = (1/2π) ∫ from −∞ to +∞ of |1 − F(ω)|²·|Y(ω)|² dω = (1/π) ∫ from 0 to +∞ of |1 − F(ω)|²·|Y(ω)|² dω

σ² = ∫ from −∞ to +∞ of |1 − F(2πf)|²·|Y(2πf)|² df = 2 ∫ from 0 to +∞ of |1 − F(2πf)|²·|Y(2πf)|² df

where

filter F(ω) represents the gain of the tracking filter,

function Y(ω) represents the Fourier transform of the reference phase jitter, and

σ² represents the variance of the tracking error.
Equation [12.13] appears in four formats. The top two formats integrate the power spectrum of the signal with respect to the frequency variable ω, in rad/s. The bottom two formats integrate with respect to the frequency variable f, in Hertz, where 2πf = ω. The form of the integration is similar in both cases, but the constant term differs. This difference points out the importance of knowing whether the horizontal axis of a frequency-domain plot is expressed in units of rad/s or Hertz.
In each row of [12.13], the left-hand expression shows integration over all positive and negative frequencies. This technique is called two-sided integration. The right-hand expression shows integration over only positive frequencies, with the results then doubled. The doubling trick works for the evaluation of power associated with real-valued signals, because the power spectrum of a real-valued signal is strictly real and an even function of ω (or f).
The following equations appear in only the top-right format, as one-sided integrations with respect to frequency ω in rad/s. You may convert them to any of the four formats shown in [12.13].
Equation [12.13] is often simplified by assuming filter F(ω) is a perfect low-pass filter with a brick-wall cutoff at some frequency B; in this case the integration need only be carried out from the cutoff frequency B to infinity. [124]
[124] A two-sided integration would run from −∞ to −B, and then again from +B to +∞.
Equation 12.14

σ² = (1/π) ∫ from B to +∞ of |Y(ω)|² dω

where

filter F(ω) is assumed to have unity gain below B and zero gain above B,

the cutoff frequency B is in rad/s,

function Y(ω) represents the Fourier transform of the reference phase jitter, and

σ² represents the variance of the tracking error.
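The brick-wall simplification can be sanity-checked numerically. The sketch below builds an assumed toy trajectory from one in-band and one out-of-band sinusoid; with a perfect brick-wall tracker, the tracking error is exactly the out-of-band component, and its time-domain variance equals the out-of-band power a_fast²/2:

```python
import math

# Brick-wall tracking filter: passes everything below B, blocks everything above.
B = 50.0                       # assumed cutoff, Hz
f_slow, a_slow = 5.0, 1.0      # component the loop tracks (below B)
f_fast, a_fast = 200.0, 0.3    # component it filters out (above B)

n, T = 100_000, 1.0            # samples and observation window (whole cycles)
dt = T / n
t = [i * dt for i in range(n)]
y = [a_slow * math.sin(2*math.pi*f_slow*ti)
     + a_fast * math.sin(2*math.pi*f_fast*ti) for ti in t]   # reference
z = [a_slow * math.sin(2*math.pi*f_slow*ti) for ti in t]     # tracked part
err = [yi - zi for yi, zi in zip(y, z)]                      # tracking error

var_time = sum(e * e for e in err) / n   # time-domain variance of the error
var_band = a_fast**2 / 2                 # power of the signal above the cutoff
print(var_time, var_band)
```

Both numbers come out to 0.045: the variance of the tracking error is exactly the power of the reference signal above the tracking range.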
In cases where the reference signal is a stochastic signal (as opposed to a deterministic signal), the calculation [12.14] is modified as follows:
Equation 12.15

σ² = (1/π) ∫ from B to +∞ of S(ω) dω

where

function S(ω) represents the spectral power density of the reference phase jitter, and

σ² represents the variance of the tracking error.
The power spectrum S(ω), already being a measure of power, does not need to be squared.
POINT TO REMEMBER
12.11.1.4 Clock Jitter in FIFOBased Architectures
Suppose digital state machines A and B each independently use PLL circuits to synchronize their clocks to a common reference (Figure 12.57). Let the common reference frequency be 8 kHz. [125] The clock frequency in each section is 622 MHz, roughly 77,750 times the reference frequency. Data proceeds from section A, through the FIFO, into section B. Theoretically, once the FIFO gets started, it should stay filled at a constant level, because the input and output rates are the same.
[125] A common telecommunications reference clock frequency.
In practice, however, the two clocks are hardly ever exactly the same. The common timing reference signal comes along only once every 77,750 clocks, leaving plenty of time for the two clocks to diverge between reference edges. In the highway racing analogy, this architecture is the equivalent of putting a blindfold over your eyes and permitting you only one quick glimpse of the car in front once every 77,750 car lengths. Obviously, substantial errors may accumulate.
Shortterm frequency variations between the two clocks cause the number of words held in the FIFO to gyrate wildly. In general, the greater the ratio of frequencies between the FIFO clock and the reference clock, the greater the gyrations. If the gyrations become too wild, the FIFO either overflows or runs empty.
The maximum deviation in the FIFO corresponds to the maximum phase difference between the two clocks, not the maximum frequency difference. Those familiar with the calculus of PLL circuits may recall that phase is the integral of frequency. In other words, if the frequency difference between the two clocks diverges by x rad/s and holds at that level for t seconds, the accumulated phase difference during interval t would be xt. For example, a frequency offset of just one part in 10^4, sustained over a period of 77,750 cycles, would result in 7.775 clocks of phase offset by the time the next reference edge arrived.
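The arithmetic in this example is worth writing out. A minimal sketch using the figures quoted above:

```python
# Phase accumulated between reference edges: frequency offset integrated
# over the interval between reference edges.
f_clock = 622e6    # FIFO clock, Hz
f_ref = 8e3        # common reference clock, Hz
cycles_between_refs = f_clock / f_ref   # 77,750 clocks between glimpses
offset = 1e-4                           # one part in 10**4 frequency offset

clocks_of_drift = offset * cycles_between_refs
print(clocks_of_drift)   # 7.775 clocks of phase offset per reference interval
```

At 7.775 clocks of drift per reference interval, a shallow FIFO overflows almost immediately, which is the point of the example.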
A good measure of performance in this system would be the frequency stability over a period of time T. The frequency stability Δf may be defined as the worst-case difference between the minimum and maximum number of clock cycles within period T, divided by the length of the period T. Appropriate units for Δf are cycles/sec (Hz).
In burst-oriented systems the period of time T usually corresponds to one packet, or one complete data transaction. As long as the clocks don't drift with respect to each other more than N complete cycles within a packet, a FIFO of length 2N is sufficient to couple the systems (where N = Δf·T). [126]
[126] Preload at least N words into the FIFO before starting your transfer. If the receiver is fast, the FIFO will run completely dry at the end of the transfer. If the transmitter is fast, the FIFO will build to 2N words by the end of the transfer.
PLL designers shudder when they see block diagrams like Figure 12.57. Reducing the ratio between the FIFO clock and the reference clock (i.e., distributing an 8-MHz reference instead of an 8-kHz reference) would significantly relax the requirements for PLL stability in this system.
Figure 12.57. Jitter between imperfectly synchronized highspeed clocks causes the number of words held in the FIFO to fluctuate.
POINT TO REMEMBER
12.11.1.5 What Causes Jitter
Most oscillators include at least one resonant circuit (or delay element) and one amplifier (or comparator). Jitter in such an oscillator results from at least four superimposed noise sources. First, if you are using a crystal oscillator, noise emanates from the random movement of electrons within the crystal (thermal noise). [127] Second, any mechanical vibrations or perturbations of the crystal cause noise (microphonic noise). The third noise source stems from the amplifier or comparator used to construct the oscillator (self-noise). The amplifier's contribution is often larger than the thermal and mechanical noise from the crystal. The last, and potentially most troublesome, noise comes from the power supply. Any coupling of an oscillator's power terminals to its sensitive amplifier input sends power-supply noise roaring through the amplifier, causing massive amounts of jitter. An oscillator that couples power-supply noise into its output is said to have poor power-supply immunity. Many oscillators do.
[127] Oscillator circuits using LC tanks, delay lines, or semiconductor delay elements all display similar electrical and mechanical noise effects.
These four sources of noise appear together at the output of every oscillator or PLL circuit. Because an oscillator always involves feedback circuits, the same noise is also coupled back into the resonant circuit (or delay element) used to produce the oscillations in such a way that it influences future behavior. In this manner the noise causes both shortterm and longterm frequency perturbations. The statistics of such fluctuations are beyond the scope of this book.
In addition to the intrinsic jitter from its internal oscillator, a PLL circuit will propagate any jitter from the reference source that falls within the tracking bandwidth of the PLL.
POINT TO REMEMBER
12.11.1.6 Random and Deterministic Jitter
Many circuits produce a repetitive, predictable jitter. This effect happens in cheesy clock-multiplier circuits and poorly equalized data recovery units. The predictable component of jitter in these circuits is called deterministic jitter. The remaining components of jitter are called random jitter. The presumption is usually made that the deterministic and random jitter components are not correlated ([94], [96], and [97]).
To measure the deterministic jitter on a clock (or data) waveform, you must trigger your oscilloscope at a rate commensurate with the source of the deterministic jitter. For example, in an 8B/10B-coded data waveform transmitting a repetitive 10-bit test pattern, a trigger frequency of 1/10 the data baud rate would be appropriate. For another example, in a clock-multiplier circuit, triggering at the input reference clock frequency would be appropriate.
The scope must be set to average its measured results, which nulls out all the random jitter, leaving you with a clean picture of a repetitive (though slightly distorted) time-domain waveform. The deterministic jitter is the difference, at each transition in the repetitive sequence, between the actual time at which the transition occurred and the ideal time, in a perfect system, at which the transition should have occurred.
The vector of differences is processed to find the average value, and then the variance, of the measured points. This process puts you in possession of one piece of information: the variance of the deterministic jitter.
By measuring the variance of the overall jitter waveform (including both deterministic and random jitter) you can then infer the variance for the random jitter alone:
Equation 12.16

σ_R² = σ_T² − σ_D²

where

σ_R², σ_D², and σ_T² are the variances of the random, deterministic, and total components of jitter, respectively.
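Equation [12.16] makes the inference a one-liner. In the sketch below, the 5.0-ps and 3.0-ps figures are hypothetical measurements, not values from the text:

```python
import math

def random_jitter_sigma(sigma_total, sigma_deterministic):
    """Infer the random-jitter sigma assuming uncorrelated components,
    so that the variances add: sigma_T**2 = sigma_R**2 + sigma_D**2."""
    return math.sqrt(sigma_total**2 - sigma_deterministic**2)

# Hypothetical measurements, in picoseconds
print(random_jitter_sigma(5.0, 3.0))  # 4.0 ps of random jitter
```

Note that variances subtract, not standard deviations: 5 ps total minus 3 ps deterministic leaves 4 ps random, not 2 ps.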
Deterministic jitter comes from many sources, including duty-cycle distortion (DCD), intersymbol interference (ISI), and word-synchronized distortion due to imperfections within a data serializer (e.g., bit 3 of each data word always appears early).
The point of separating jitter into random and deterministic components is that the deterministic components have a lower ratio of peak value to standard deviation than do the random components. Measured only according to the standard deviation, a certain amount of deterministic jitter doesn't hurt as much as a similar quantity of random jitter.
In a system that combines deterministic and random jitter, therefore, a single specification of the acceptable standard deviation of jitter will always be overly stringent.
Example Showing Calculation of Standard Deviation for Deterministic Jitter
I'll show the evaluation of standard deviation for a very simple case first, and then a more complicated one.
Let r represent a discrete random variable that takes on only two values, x1 and x2, remaining at each value on average half the time.

The mean value of r is

m_r = (x1 + x2)/2

The peak excursion of r on either side of the mean is

|x1 − x2|/2

The variance of r is

σ_r² = (1/2)(x1 − m_r)² + (1/2)(x2 − m_r)² = ((x1 − x2)/2)²

The standard deviation of r (square root of variance) is

σ_r = |x1 − x2|/2

The ratio of peak magnitude to standard deviation is

(|x1 − x2|/2)/σ_r = 1

Given a more complicated discrete random variable r that takes on values x_i with probabilities p_i, and has mean value m_r, the variance of r is calculated

σ_r² = Σ_i p_i·(x_i − m_r)²
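The general formula can be checked against the simple two-valued case. A short sketch (the ±1 values are arbitrary):

```python
def moments(values, probs):
    """Mean, variance, and standard deviation of a discrete random variable."""
    mean = sum(p * x for x, p in zip(values, probs))
    var = sum(p * (x - mean)**2 for x, p in zip(values, probs))
    return mean, var, var**0.5

# Two equally likely values, as in the simple case above
x1, x2 = -1.0, 1.0
mean, var, sigma = moments([x1, x2], [0.5, 0.5])
peak = max(abs(x1 - mean), abs(x2 - mean))
print(mean, sigma, peak / sigma)   # peak/sigma ratio is exactly 1
```

For the two-valued variable the peak-to-sigma ratio is exactly 1, the lowest possible; a Gaussian variable, by contrast, has no finite peak at all.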
If r were distributed in a Gaussian manner, the peak magnitude could (theoretically) range upward without bound. In the analysis of practical systems, the ratio of the effective peak magnitude to standard deviation is calculated depending on the BER (bit-error rate) at which the system must operate. Table 12.4 indicates the ratio of the effective peak magnitude to standard deviation for Gaussian waveforms, assuming various standard BER values. The table values are computed such that the magnitude of Gaussian noise will not, on average, exceed the stated peak value more often than once every 1/BER bits.
Table 12.4. Gaussian Waveform Probabilities

BER      Ratio of peak deviation to standard deviation
1E-04    3.891
1E-05    4.417
1E-06    4.892
1E-07    5.327
1E-08    5.731
1E-09    6.109
1E-10    6.467
1E-11    6.807
1E-12    7.131
1E-13    7.441
1E-14    7.739
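The table entries follow from the Gaussian tail probability: the ratio k satisfies 2·Q(k) = BER, where Q is the Gaussian tail integral. A sketch reproducing the table using the Python standard library's NormalDist (the inverse CDF is evaluated very close to 1 at low BERs, so a high-quality implementation is required):

```python
from statistics import NormalDist

def peak_to_sigma(ber):
    """Peak/sigma ratio k such that Gaussian noise exceeds +/- k*sigma
    with probability BER, i.e. 2*Q(k) = BER."""
    return NormalDist().inv_cdf(1 - ber / 2)

for exp in range(4, 15):
    ber = 10.0 ** -exp
    print(f"1E-{exp:02d}  {peak_to_sigma(ber):.3f}")
```

Running this reproduces the table to three decimal places, e.g. 3.891 at 1E-04 and 7.131 at 1E-12.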
Example Showing Acceptable Standard Deviation for Jitter
Assume you are willing to accept data errors at a BER of 1E-12. Make the worst-case assumption that all your jitter is Gaussian.
Under these assumptions, if your circuit can tolerate a peak phase error of 0.3 radian before making an error, then to achieve your target BER you must limit the standard deviation of total jitter to a level (according to Table 12.4) not exceeding 0.3/7.131 = 0.042 radians. For a system that combines random (Gaussian) and deterministic jitter, this specification is overly stringent.
To derive a better specification, first subtract from your worstcase total noise budget the known, worstcase amount of deterministic jitter. There is no need to multiply this component of jitter by 7.131 in the budget. It already represents a "worstcase" event.
Divide the remaining noise budget by 7.131 to establish a limit on the standard deviation of random jitter. The overall solution will meet your BER requirement.
For example, a specification calling for no more than 0.1 radian of deterministic noise plus random noise with a standard deviation not to exceed 0.028 radians meets the BER requirement of 1E-12 while providing a reasonable budget for deterministic noise.
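The budget arithmetic from this example, written out (the figures are those quoted above):

```python
RATIO_1E_12 = 7.131   # peak/sigma for Gaussian noise at BER = 1E-12 (Table 12.4)

peak_budget = 0.3     # radians of phase error the circuit tolerates
deterministic = 0.1   # worst-case deterministic jitter, radians (already a peak)

# Deterministic jitter consumes its budget at face value; only the random
# (Gaussian) remainder must be scaled by the peak/sigma ratio.
sigma_random_max = (peak_budget - deterministic) / RATIO_1E_12
print(f"{sigma_random_max:.3f} rad")  # about 0.028 rad
```

Compare with the naive all-Gaussian budget of 0.3/7.131 = 0.042 rad: splitting the budget admits 0.1 rad of deterministic jitter at the cost of only a modest reduction in the allowed random sigma.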
POINT TO REMEMBER
12.11.2 Measuring Clock Jitter
There are many approaches to measuring clock jitter, including spectral analysis, direct phase measurement, differential phase measurement, BERT scan, and Timing Interval Analysis.
Spectral analysis is performed with a high-quality spectrum analyzer. The spectrum of a perfect clock consists of infinitely thin spectral peaks at harmonics of the fundamental frequency. Close examination of a jittery clock spectrum reveals a small amount of spreading around the fundamental frequency and around each harmonic. This spreading relates to clock jitter. Simply put, when a clock spends part of its time at frequency F, a peak appears there whose power corresponds to the fraction of time the clock lingered at that frequency. Spectral analysis is very popular with communications engineers.
The problem with spectral analysis is that it does not directly address the issue of phase error. The spectrum tells you what fraction of its time the clock spent at each frequency, but not how long each individual visit lasted. For example, a clock that lingers too long away from its center frequency accumulates a big phase error. A clock that deviates back and forth quickly about its center frequency may visit the same frequencies for the same proportion of time, but stay so briefly on each visit that it accumulates almost no phase error. From the spectrum alone, you cannot determine the maximum phase deviation from ideal unless you are willing to make the narrowband phase modulation assumption:
Assume the clock never deviates more than one radian from the ideal.
Under this assumption you can model a phase-modulated clock as if it were a perfect sinusoidal clock at frequency f to which you have added a small amount of noise, also at frequency f, like this:
Equation 12.17

x(t) = sin(ωt) + a(t)·cos(ωt)
As long as a(t) remains less than one radian, the zero crossings of the signal [12.17] will occur at almost the same locations as in the following phase-modulated signal, where θ(t) equals a(t).
Equation 12.18

x(t) = sin(ωt + θ(t))
Using [12.17] you can model any sinusoidal signal with small amounts of phase modulation as a combination of one main sinusoidal carrier plus another amplitude-modulated carrier at the same frequency, but in quadrature with the first signal. The instantaneous magnitude of the modulating signal a(t) is the same as the instantaneous phase jitter θ(t), where θ(t) is taken in units of radians.
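A quick numerical check of this approximation: the zero crossing of sin(x) + a·cos(x) falls at x = −atan(a), while the true phase-modulated signal sin(x + a) crosses at x = −a. The two differ by roughly a³/3, which is negligible while a stays well under one radian. A minimal sketch:

```python
import math

def zc_quadrature(theta: float) -> float:
    # Zero crossing (in radians of carrier phase) of sin(x) + theta*cos(x):
    # sin(x) + theta*cos(x) = 0  =>  tan(x) = -theta  =>  x = -atan(theta)
    return -math.atan(theta)

def zc_pm(theta: float) -> float:
    # Zero crossing of the true phase-modulated signal sin(x + theta)
    return -theta

for theta in (0.01, 0.1, 0.3, 1.0):
    err = abs(zc_quadrature(theta) - zc_pm(theta))
    print(f"theta={theta}: crossing error = {err:.2e} rad")
```

At theta = 1.0 radian the crossing error exceeds 0.2 radian, which is why the one-radian limit in the assumption above matters.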
What this all means is that when you look at the spectrum of a phase-jittery clock, what you see is one big whopping peak near the fundamental and a lower-level spreading of power around the peak. The big peak represents the power in the main sinusoidal carrier. The spreading represents the power present in the noise process a(t)cos(ωt). The ratio of the power in the noise process a(t)cos(ωt) to the total power in the main carrier precisely equals the variance of the phase jitter.
To obtain from the spectrum a maximum phase error (which is what one needs to solve certain FIFO problems), you must combine the power spectrum measurement with some assumption about the nature of the underlying probability distribution of the phase jitter. From the power spectrum measurement, you compute the variance of the distribution; from knowledge of the properties of the assumed distribution, you may then compute the probability that the phase error θ(t) will exceed some arbitrary limit (see "Jitter and Phase Noise" article below).
Direct phase measurement requires access to an ideal clock that is compared to your jittery clock with a phase detector. The phase detector output shows just what you want to know: how much the clock jitters. The obvious difficulty with this approach is getting an ideal clock. You might try filtering the jittery clock through a PLL to create a smooth clock having the same average frequency. The phase error output from the PLL will be the jitter signal you seek. This is known as the "golden PLL" method.
If you are measuring jitter from a high-quality frequency source, it may not be easy to build a golden PLL with significantly less intrinsic jitter than your source. This method develops difficulties when measuring phase errors that exceed the bit interval. To solve that problem, try working with a divided-down clock. Measured in units of clock intervals, an error of x in the main clock produces an error in a divided-by-n clock of only x/n.
Differential phase measurement compares a jittery clock not to an ideal clock but to a delayed version of itself. At a large enough delay, the delayed waveform may become uncorrelated with the original, giving you the effect of two similar, but different, jittery clocks. The resulting differential jitter is twice the actual jitter. The advantage of using a delayed version of the original clock is that it naturally has the correct average frequency.
A differential jitter measurement requires an oscilloscope with a delayed-timebase sweep feature. First set your oscilloscope to trigger on the clock waveform. Then, using the delayed timebase sweep, take a close look at the clock some hundreds, thousands, or tens of thousands of clock cycles later. Jitter shows up as a blur in the displayed waveform.
Before assuming the blur comes from jitter on the clock, take a look at a stable clock source using the same setup. If it looks clean, you can then assume your scope time base is accurate enough to perform this measurement.
While adjusting the delay interval, you may notice that the jitter gets worse or better. This is normal. Clock jitter normally is worse in some frequency bands, which leads to maxima in the expected differential jitter at certain time delays. Beyond some maximum time delay, the jitter becomes completely uncorrelated and there is no longer any change in jitter with increasing delay.
If through some test procedure you have intentionally created a large amount of jitter (i.e., FM modulation of the clock) with a particular period T, the greatest jitter in the output will be observed at time T/2 (and successive odd multiples of T/2).
If the peaktopeak amplitude of the phase jitter amounts to more than half a clock period, successive edges will blur together, becoming very difficult to see. In that case, divide the clock by 2, 4, or more using a counter circuit before displaying it. The division doesn't change the worstcase jitter on individual clock edges, but it does lengthen the space between nominal clock transitions so that you can see the jitter.
Jitter measurements on precise crystal clocks require an extremely stable time base and can take a long time to perform. Jitter measurements performed on noncrystal oscillators used in serial data transmission are much easier to do, owing to the much greater intrinsic jitter of those sources.
BERT scan measurements are used to quantify the jitter present on serial data transmission systems. In these methods a serial data stream with a known pseudorandom data pattern is fed into the BERT test instrument. The BERT contains a golden PLL capable of perfectly extracting an (ideally) zero-jitter clock from even the noisiest waveform. The golden PLL clock edge is adjustable within the data window.
The BERT attempts to recover the data, adjusting its ideal clock back and forth across the data window, producing a graph showing the bit-error rate as a function of the clock position. The bit-error rate graph thus produced is called a BER bathtub curve. It is so called because at either extreme, as the clock approaches the transition period leading to the next bit, the BER jumps to nearly unity, while in the middle of the curve there is (one would hope) a flat region of zero errors. The shape of the curve resembles a bathtub.
From the slope of the sides of the bathtub curve you may extract information about the statistics of jitter. The ANSI study [97] goes into great detail about the extrapolation of actual BER performance data based on limited measurements of BER bathtub curves.
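For a purely Gaussian jitter model you can sketch the bathtub shape yourself: if the edges jitter with standard deviation σ (in UI), the BER at sampling position t within the unit interval is approximately Q(t/σ) + Q((UI−t)/σ), where Q is the Gaussian tail probability. The σ value below is hypothetical:

```python
from statistics import NormalDist

_nd = NormalDist()

def q(x: float) -> float:
    """Gaussian tail probability Q(x) = P(X > x) for standard normal X."""
    return 1.0 - _nd.cdf(x)

def bathtub_ber(t: float, ui: float = 1.0, sigma: float = 0.03) -> float:
    """Approximate BER vs. sampling position t (in UI) for purely
    Gaussian edge jitter of standard deviation sigma (in UI). Errors
    come from edges of either neighboring transition crossing t."""
    return q(t / sigma) + q((ui - t) / sigma)

# Walls of the bathtub rise steeply near the transitions at t=0 and t=1 UI
for t in (0.05, 0.10, 0.25, 0.50):
    print(f"t = {t:.2f} UI   BER ~ {bathtub_ber(t):.1e}")
```

The steepness of the walls is set by sigma, which is exactly why the slope of the measured curve carries statistical information about the jitter, as noted above.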
Timing interval analysis accumulates a histogram of the intervals between successive clock (or data) edges. For example, an accumulation of the histogram of the fine variations in spacing between clock edges separated by a large interval T is equivalent to the information gathered by a differential phase measurement at delay T , but with the advantage that the data is recorded in a form from which the statistics may be easily derived.
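The histogram accumulation can be sketched with synthetic data: from a list of edge timestamps, collect the n-cycle intervals and compute their statistics. The clock parameters below are hypothetical:

```python
import random
import statistics

def n_cycle_intervals(edges, n):
    """Intervals between edge k and edge k+n -- the raw data a timing
    interval analyzer accumulates for a delay of n nominal periods."""
    return [edges[k + n] - edges[k] for k in range(len(edges) - n)]

# Synthetic jittery clock: 100 MHz nominal with 5 ps RMS Gaussian edge jitter
random.seed(1)
period = 10e-9
edges = [k * period + random.gauss(0.0, 5e-12) for k in range(10_000)]

intervals = n_cycle_intervals(edges, 100)  # like differential phase at T = 100 periods
print(f"mean  = {statistics.mean(intervals):.4e} s")  # near 100 * 10 ns = 1.0e-6 s
print(f"stdev = {statistics.stdev(intervals):.2e} s") # near sqrt(2) * 5 ps, both edges jitter
```

The stdev of the n-cycle interval is √2 times the single-edge jitter here because the two edges bounding each interval jitter independently, matching the doubling effect mentioned for differential phase measurements.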
Of all the measurement types mentioned, timing interval analysis enjoys the best tool support: the manufacturers of time-interval analysis equipment seem most interested in providing tools and software useful for the analysis of jitter.
PLL loop testing is possible if the oscillator under test is controllable with an input voltage. This test uses the oscillator under test as the VCO in an artificially constructed, laboratory-grade PLL. An ideal clock is fed into the artificial PLL as a reference. The loop bandwidth of the artificial PLL must be much less than the bandwidth of the VCO phase jitter that you propose to measure.
The PLL structure eliminates low-frequency wander in the oscillator under test, making it easier to see the phase jitter of interest. The output of the artificial PLL phase detector (with a suitable low-pass filter) is your direct phase-error measurement. This output can be observed using a low-bandwidth spectrum analyzer or an oscilloscope with FFT processing. This approach is very closely related to the golden PLL method described previously.
12.11.2.1 Jitter Measurement
High Speed Digital Design Online Newsletter, Vol. 3, Issue 22

Ravi writes: I would like to know the best way to measure signal jitter using a digital oscilloscope. Here's my situation:
What types of jitter can be measured using what types of oscilloscopes, and how should one go about it?

Reply
Thanks for your interest in High-Speed Digital Design. There are (at least) three jitter topics that might interest you:

A. Jitter in the PLL output caused by jitter on the reference (Hsync) input,
B. Jitter in the PLL output caused by noise on the PLL power supply, and
C. Intrinsic jitter generated within the PLL itself.
A full model of the PLL noise output combines all three effects. Before I describe the measurement techniques in detail, let me make a general point about the relative difficulty of these three measurements. Measurements A and B are made by injecting a known disturbance into your system and observing the result. In the test setup for A and B you have the freedom to inject a rather large disturbance, which simplifies the measurement task (because you will be looking at a big result). Measurement C will be more difficult, because you will be observing very tiny amounts of phase modulation, and it may be difficult to determine the source of the noise. Now let's go on to the details.

To measure A, you need to generate a fake Hsync signal. The fake Hsync signal is phase-modulated with an adjustable sinusoidal source. Call the modulation rate MR and the modulation amplitude (in peak-to-peak radians) MA. Many high-quality RF signal generators can be FM-modulated (or PM-modulated) in this way and used as a fake Hsync source.

While you apply the fake Hsync signal, observe the PLL output with a scope. If the PLL produces a high multiple of the Hsync clock rate, you might want to use a divide-by-N counter to reduce the PLL output to a more manageable frequency. Set the scope to trigger on the PLL output, delay by 1/2 the period of the FM modulation (that's 0.5/MR), and then display the PLL output. If the modulation frequency is low enough for the PLL to track it, any modulation in the Hsync input will appear directly in the PLL output. Using a horizontal timebase delay of (0.5/MR), you will see the displayed edge switch over a range of times corresponding to the maximum peak-to-peak phase modulation amplitude (MA) of the fake Hsync source. If the modulation frequency MR is high enough that the PLL filters it out, the display will appear rock steady. The frequency at which a PLL begins to filter out jitter in the input signal is called its cutoff frequency.
At intermediate frequencies, you may find a large peak in the PLL transfer function (a place where the ratio of output phase deviation to input phase deviation exceeds unity). PLL circuits for data communications applications shouldn't have such a peak. The location of the cutoff between the tracking region and the filtering region, and the magnitude of the intermediate peak (if any), together constitute a good way to characterize the PLL jitter transfer function. This function is usually plotted on log-log paper showing the jitter transfer function (ratio of output phase deviation to input phase deviation) as a function of frequency. If you are planning to chain your PLL circuits, you must ensure that the jitter transfer function does not have any peak or resonance at intermediate frequencies.

For example, the on-chip PLL circuits used in the original version of the IBM token-ring LAN were "peaked," creating a small resonance near the cutoff frequency. This is a common technique in control-circuit design. It tends to improve the lock-on characteristics, reducing the amount of time needed for the circuit to lock onto a fresh input signal. The disadvantage of peaking, in the token-ring example, is that by the time the standards committee completed its work on the standard, the number of elements allowed in the ring had been enlarged from the original 16 to a new value of 256. Obviously, such a change is quite good for marketing purposes, but very bad for the jitter transfer function. If, for example, each PLL in the original design had on average only about 1/2 dB of peaking near the cutoff frequency, then when you chain together 256 such parts, with each PLL synchronizing on the data signal passed around the ring from the previous station, the total gain at cutoff would be 128 dB. In this type of circuit even the tiniest intrinsic jitter at a frequency near cutoff would be amplified by 128 dB as it passed around the ring, causing total system failure.
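The token-ring arithmetic generalizes directly, since decibels of jitter gain add when identical PLLs are chained. A trivial sketch of the numbers quoted above (treating jitter as an amplitude quantity, hence 20·log10 for the dB-to-ratio conversion):

```python
def chained_peaking_db(stations: int, peak_db_per_pll: float) -> float:
    """Total jitter gain (dB) at the peaking frequency for a chain of
    identical PLLs, each re-synchronizing on the previous station's output."""
    return stations * peak_db_per_pll

def db_to_amplitude_ratio(db: float) -> float:
    """Convert decibels to a linear amplitude ratio (jitter is an amplitude)."""
    return 10.0 ** (db / 20.0)

total = chained_peaking_db(256, 0.5)
print(f"{total:.0f} dB -> x{db_to_amplitude_ratio(total):.2e} jitter amplification")
# 128 dB of gain corresponds to roughly 2.5-million-fold amplification
# of any intrinsic jitter near the cutoff frequency.
```

The same two lines of arithmetic show why even a fraction of a decibel of peaking per stage becomes intolerable in long chains.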
As a consequence of this and other mistakes made in the implementation of the early token-ring circuits, token ring lost the LAN wars, and we have an Ethernet-dominated LAN landscape today.

To measure B, you will make a test much like A, but instead of modulating the Hsync input, this time you will modulate the PLL power supply voltage (Figure 12.58). Do this by injecting sinusoidal noise directly onto the VCC terminal of the PLL. If there is more than one VCC input, then test each input independently. If your circuit incorporates a good power supply filter that prevents you from injecting noise into the VCC terminal of the PLL, remove some of the bypass capacitors on the PLL side of the filter until you are easily able to inject substantial amounts of noise. Always AC-couple your sinusoidal source to the VCC terminal of the PLL using a time constant R1C1 sufficiently large to pass the lowest frequency of interest.

Figure 12.58. This test circuit can be used to measure the power supply noise tolerance of an oscillator or PLL.

While you apply sinusoidal VCC noise at frequency F and amplitude X, observe the PLL output with a scope. As before, set the scope to trigger on the PLL output, delay by 1/2 the period of F, and then display the PLL output. I like to do this test starting with F below the tracking bandwidth of the PLL and sweeping up to somewhat beyond the PLL output rate. At each frequency F, adjust the scope horizontal timebase delay to match 0.5/F, and then tweak the amplitude X of the sinusoidal VCC noise until you get a standard amount of objectionable phase jitter in the output. You can use any standard objectionable output jitter level, perhaps 0.1 times the output clock period or some other amount that you suspect might begin to cause a problem. The amount you set as your standard objectionable level should (hopefully) be large enough to easily measure.
Make a plot showing, as a function of frequency F, the maximum amplitude X of VCC noise the circuit can tolerate before the output jitter comes just up to the objectionable level. You may combine this basic data with another plot that shows how much noise is already present in your power system as a function of frequency to tell you how much power supply filtering you will need and what its frequency response must be. This is the only rational way I know to design a power filter for a PLL.

Sometimes you find a frequency range where the oscillator becomes very sensitive to power supply noise. This effect usually results from insufficient power supply filtering inside the oscillator. The poor tolerance curve shown in Figure 12.59 displays symptoms of ineffective power filtering.

Figure 12.59. A noise tolerance chart shows how much power supply noise your circuit can tolerate at each frequency.

Another, more serious, effect is squelching. At some injected noise frequency the power supply filtering components internal to the oscillator may resonate. A low injected noise voltage at this frequency causes extreme amounts of jitter. A high injected noise voltage at this frequency may disrupt the action of the internal amplifier, stopping oscillation altogether. A stopped oscillator is said to be squelched. Sometimes it takes a while after squelching for the oscillator to start working again.

To measure C, you can try using the scope to make phase-deviation measurements at various delay intervals, but the results will likely be unsatisfactory. Measurements A and B are easy to make because you are injecting a huge phase deviation (perhaps as much as 0.1 bit interval or more), and the resulting phase jitter is easy to see. For test C you need an instrument that can measure tiny amounts of jitter. Use either a timing interval analyzer or a spectrum analyzer. Either can be used to measure the variance of the total phase deviation.
When you make this test, carefully filter the PLL power supply and use an ultra-clean, low-jitter Hsync clock, to eliminate noise from those two sources so you can see the remaining intrinsic noise of the PLL circuit.
POINT TO REMEMBER
12.11.2.2 Jitter and Phase Noise
High Speed Digital Design Online Newsletter, Vol. 4, Issue 7

Bill Stutz writes: I don't know if this falls into your areas of expertise, though your excellent articles and book lead me to believe you might be able to help! My question has to do with jitter. In many serial digital systems jitter is specified, usually in units of absolute time. For example, SMPTE specifies the jitter on the parallel clock of an SDI serializer as 370 picoseconds peak-to-peak for a clock frequency of 27 MHz. When serializing a 10-bit data stream at 270 Mb/s, this amounts to ±0.1 UI (unit interval) of jitter. This jitter is specified for offset frequencies between 10 Hz and 1/10 the serial clock rate. I intend to make my 27-MHz clock from the horizontal sync frequency of my baseband video using a PLL. The PLL will be based on a VCO, for which I have a plot of phase noise in dBc versus frequency. Can I calculate what the intrinsic jitter of this oscillator will be from its phase noise plot? How would I do that? Any help or light you can shed on this problem would be appreciated.

Reply
Thanks for your interest in High-Speed Digital Design. I've always wanted to know how to do the same calculation, so I researched the math and came up with some good information for you. Here's what you need to know.

For small amounts of jitter (like 0.1 UI or less), you can use what is called the narrowband phase modulation assumption to perform your analysis. What this says is that you can model a clock system as if it were receiving a sinusoidal clock at frequency f, to which you have added a small amount of noise, also at frequency f. The noise has two important properties. First, the noise is assumed to be in quadrature (90 degrees out of phase) with the main clock sinusoid. Second, the noise is amplitude modulated.
If you get out a piece of paper and draw a phasor diagram, you will see that the addition of small amounts of quadrature noise to a sinusoid merely accomplishes a little bit of phase modulation. In other words, you can model any sinusoidal signal with small amounts of phase modulation (which is what you have) as a combination of one main sinusoid and another amplitude-modulated carrier at the same frequency, but in quadrature with the first signal.

What this all means is that when you look at the spectrum of a phase-jittery clock, what you will see is one big whopping peak near the fundamental and a lower-level spreading of energy around the peak. The spreading represents the energy present in the modulating signal.

Now your PLL circuit has a certain tracking bandwidth that will filter out all the phase noise within a certain bandwidth B of the first harmonic. This part of the noise is of no concern. The only phase noise that will escape your PLL is the phase noise that lies farther away than B from the main fundamental. In your case, you have told me that the tracking bandwidth of the relevant circuit is 10 Hz, meaning that all the noise farther away than 10 Hz from the main peak will add to your jitter.

To find the total power of the modulating signal, you will have to integrate (by hand, with a calculator) the power in the noise surrounding the main signal. If the spectrum analyzer is adjusted to read out in units of decibels per square root of Hertz, just take samples of the noise level every so often, convert each reading to watts/Hz, multiply each reading by the number of Hertz between readings, and add up the results (in units of watts). That's how you perform the integration. [128]

To find the total power in the main signal, use the same integration method, but this time integrating the power over the big fundamental peak. Use a lot of points for this integration, on a spacing that is narrow compared to the bandwidth of the instrument.
The ratio of the noise power to the power in the fundamental equals the variance (standard deviation squared) of the phase modulation in units of radians squared. Take the square root of this ratio to find the standard deviation of the phase modulation in units of radians. This is the RMS value of the noise signal.

Now you need to translate this standard deviation into a peak-to-peak value. To do that, you will need to make an assumption about the statistics of the noise. Assuming the noise is Gaussian (and not the result of some deterministic, predictable phase wander), one normally figures that if the BER of the system is specified at 1E-12, then it's okay to violate the phase jitter spec one time out of every 10^12. In numerical terms, it's probably okay if the phase jitter occasionally exceeds ±0.05 UI (that's 0.1 UI peak-to-peak) as long as it doesn't do so more often than one time in 10^12.

The peak-to-peak spread between the 1E-12 probability tails on a Gaussian distribution is about 14.3 standard deviations (twice the value in Table 12.4 for a BER of 1E-12). If you want the peak-to-peak deviation (at 1E-12 BER) to equal 0.1 UI, you require a standard deviation of less than 0.1/14.3 UI, or, translated into radians, a standard deviation of less than (0.1·2π)/14.3. For different BER levels, adjust the factor of 14.3 according to Table 12.4. More details about this method are available in [94], [97], and [98].
[128] To find the total power you must integrate over both positive and negative frequencies. Alternatively, you can integrate over only positive frequencies (one-sided integration) and then double the result. If all you want are ratios, you may skip the doubling.
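The integration described in the letter above can be sketched numerically: given samples of single-sideband phase noise L(f) in dBc/Hz, convert each to a linear power ratio, integrate trapezoidally over the offset band of interest, double the result to account for both sidebands, and take the square root to get the RMS phase deviation in radians. The phase-noise values below are hypothetical, and trapezoidal integration on coarse log-spaced points only approximates the true integral:

```python
import math

def rms_phase_jitter_rad(offsets_hz, l_dbc_hz):
    """RMS phase jitter (radians) from single-sideband phase noise
    samples L(f) in dBc/Hz. Trapezoidal integration over the offset
    band; the factor of 2 accounts for both sidebands."""
    power = [10.0 ** (l / 10.0) for l in l_dbc_hz]  # dBc/Hz -> linear 1/Hz
    area = sum(0.5 * (power[i] + power[i + 1]) * (offsets_hz[i + 1] - offsets_hz[i])
               for i in range(len(offsets_hz) - 1))
    return math.sqrt(2.0 * area)

# Hypothetical phase-noise plot for a 27 MHz oscillator,
# integrated from the 10 Hz tracking bandwidth outward
f = [10, 100, 1e3, 1e4, 1e5, 1e6]        # offset from carrier, Hz
l = [-70, -90, -110, -125, -135, -140]   # dBc/Hz
sigma = rms_phase_jitter_rad(f, l)
print(f"sigma = {sigma * 1e3:.2f} mrad RMS")
print(f"      = {sigma / (2 * math.pi * 27e6) * 1e12:.1f} ps RMS at 27 MHz")
```

Multiplying the resulting sigma by the Table 12.4 ratio for your target BER (times two, for peak-to-peak) then gives the effective peak-to-peak jitter, as described above.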
POINT TO REMEMBER