Measuring QoS in Data Networks




In addition to packet drops, two other QoS measurements must be considered: latency and jitter. Together, these impairments are experienced by the end user as transmission delay.

Latency is the amount of time it takes a signal to move from point A to point B in a System Under Test (SUT) under no-load conditions. Figure 8.3 shows an SUT in which a signal applied to input A is sent through the SUT to output B. The leading edges of A and B are compared, and the delta between them is the latency measurement. Because latency under no load is consistent, the measurement should be repeatable.

Figure 8.3: Latency Measurements
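
The latency measurement described above can be sketched as a simple delta calculation. This is a minimal illustration with hypothetical timestamps (the names and values are assumptions, not from the text):

```python
# Hypothetical timestamp pairs (seconds): when each test frame entered
# port A and when its leading edge appeared at port B of the SUT.
sent_at_a = [0.000000, 0.010000, 0.020000, 0.030000]
seen_at_b = [0.000182, 0.010181, 0.020183, 0.030182]

def latency_samples(sent, seen):
    """Per-frame latency: the A-to-B delta for each test frame."""
    return [b - a for a, b in zip(sent, seen)]

samples = latency_samples(sent_at_a, seen_at_b)
avg_latency = sum(samples) / len(samples)
# Under no-load conditions the per-frame deltas are nearly identical,
# which is why the latency measurement is repeatable.
```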

When a system experiences heavy loads, data must be buffered and queued. As a result, when an SUT is subjected to a heavy load, the signal out of port B varies in its delay, and the variation is inconsistent and unpredictable. On an oscilloscope, the packet at probe B appears to jump back and forth across the screen. This rapid back-and-forth movement is called jitter, and its presence makes packet delivery times unpredictable (see Figure 8.4).

Figure 8.4: Jitter Measurements

Mediation for Dropped Packets, Latency, and Jitter

System latency under light-load conditions can be controlled only by good end-to-end equipment selection. System latency is the cumulative (end-to-end) value of all equipment latency measurements, plus the latency of the links.
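
The cumulative relationship above is just a sum of per-hop contributions. A trivial sketch, with hypothetical equipment and link latencies:

```python
# Hypothetical end-to-end path: per-device equipment latencies plus
# link (propagation) latencies, all in milliseconds.
equipment_ms = [0.02, 0.15, 0.15, 0.02]   # e.g. switch, router, router, switch
links_ms = [0.5, 4.0, 0.5]                # e.g. access, backbone, access links

# System latency is the end-to-end sum of all equipment and link latencies.
system_latency_ms = sum(equipment_ms) + sum(links_ms)
```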

Controlling bandwidth utilization can, in turn, control both jitter and the dropped-packet rate. Because latency is a measurement of delay caused by the propagation of signals across a system, it cannot be controlled in real time; low latency must be designed into a network from the start.

As network utilization increases, so too do the problems of jitter and dropped packets. In an effort to provide better performance, many systems are over-designed with more bandwidth than needed to help ensure that these problems do not occur.

The chart in Figure 8.2 illustrated an increase in dropped packets as the system under test approached 80% utilization, making the MOS score unacceptable. By contrast, the system running under the lighter load conditions shown in Figure 8.5 shows acceptable MOS and dropped-packet percentages.

Figure 8.5: Low Utilization with Low Errors

Testing Networks

Over the past four years, we have been testing networks for QoS. There are several methods for testing networks or systems. A good laboratory method is to use a product such as SmartBits to generate traffic and measure the results.

A simple, low-cost experiment that most engineers can perform uses NetMeeting and a protocol analyzer to measure voice quality, jitter, and dropped packets. Figure 8.6a shows a typical setup.

Figure 8.6a: Network Under Test for Quality of Voice Calls

We have tested networks and network components, and we have found that network devices behave like most electronic devices: as long as they operate within their linear range, performance is good.

When the devices are forced to operate in a non-linear range, performance problems appear. This can be seen in Figures 8.6b and 8.6c. Figure 8.6b shows that as utilization goes up, the number of dropped packets increases; Figure 8.6c shows that the MOS score decreases as utilization increases.

Figure 8.6b: Percentage of Dropped Packets vs. Percentage of Load

Figure 8.6c: MOS Score Is Inversely Proportional to Load
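
The inverse relationship between load and MOS can be reasoned about with the ITU-T G.107 E-model, which computes an R-factor (degraded by impairments such as loss and delay) and maps it to an estimated MOS. The sketch below uses the standard R-to-MOS mapping, but the loss-impairment constant is a made-up illustrative value, not calibrated to any codec:

```python
def mos_from_r(r):
    """ITU-T G.107 mapping from transmission rating factor R to MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

def estimated_mos(loss_pct):
    """Rough sketch: start from a best-case R of 93.2 and charge an
    impairment that grows with packet loss (the 2.5-per-percent
    slope is a hypothetical constant for illustration only)."""
    r = 93.2 - 2.5 * loss_pct
    return mos_from_r(r)

mos_idle = estimated_mos(0.0)     # lightly loaded network, little loss
mos_loaded = estimated_mos(10.0)  # heavily loaded network, 10% loss
```

Sweeping the loss percentage upward reproduces the shape of Figure 8.6c: MOS starts near its ceiling and falls as load-driven drops accumulate.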

What surprises most customers when we perform this test is how differently voice behaves from data. Voice performance has failed to pass the test criteria with as little as 15% continuous broadcast traffic.

These results differ across networks and network elements, which is why it is advisable for any organization considering VoIP to test its own network.

Looking at these performance charts, one might think that a simple solution is to keep utilization below the saturation point of the network so that the devices and the network perform well. This strategy is known as the “throw bandwidth at the problem” solution.

Bandwidth Does Not Solve the Problem

Over-designing a network and throwing bandwidth at QoS problems is only a temporary fix, not a solution. There are several reasons why bandwidth alone will not achieve true Quality of Service:

  • The “if you build it, they will come” phenomenon: the faster the network, the more user traffic it attracts, and more user traffic means more bandwidth demand, and so on;

  • VoIP calls placed over your data network add demand the original design did not anticipate; or

  • A link failure forces traffic onto a “loser’s path.”

If you equate bandwidth to a four-lane highway equipped to handle a load of only 20 cars per minute, what happens when the traffic demand suddenly exceeds 20 cars a minute? What happens when construction causes other traffic to be routed onto your highway? What happens when there is an accident? Bandwidth alone does nothing to address the three elements needed to achieve real QoS: marking, classifying, and policing packets.
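
Of the three elements just named, policing is commonly implemented with a token bucket: tokens refill at the contracted rate, and a packet conforms only if enough tokens remain when it arrives. A minimal sketch, with a hypothetical rate and burst size:

```python
class TokenBucketPolicer:
    """Single-rate policer sketch: tokens refill at `rate` bytes/sec
    up to a depth of `burst` bytes; a packet conforms only if the
    bucket holds at least the packet's size in tokens."""
    def __init__(self, rate, burst):
        self.rate = rate        # bytes per second
        self.burst = burst      # bucket depth in bytes
        self.tokens = burst     # start with a full bucket
        self.last = 0.0

    def conforms(self, now, size):
        # Refill for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True         # in profile: forward the packet
        return False            # out of profile: drop or remark

policer = TokenBucketPolicer(rate=125_000, burst=1_500)  # ~1 Mbit/s, one-MTU burst
first = policer.conforms(0.0, 1500)     # full bucket: conforms
second = policer.conforms(0.001, 1500)  # only ~125 bytes refilled: out of profile
```

Just as the highway has a fixed carrying capacity, the policer enforces a contracted rate regardless of how much raw bandwidth sits underneath it.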

For the last several years, attempts have been made to achieve end-to-end QoS with marking protocols (802.1Q/p, DiffServ, and MPLS), reservation protocols (such as IntServ and RSVP), and policing devices (such as policy switches).
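
DiffServ marking is visible even from user space: an application can set the DSCP bits of the IP header's TOS/Traffic Class byte on its socket, and DiffServ-aware devices along the path can then classify and police on that mark. A sketch using the standard Expedited Forwarding code point (runs unprivileged on Linux; behavior on other platforms may vary):

```python
import socket

# DSCP occupies the top six bits of the TOS byte, so shift left by two
# to leave the ECN bits clear.
DSCP_EF = 46  # Expedited Forwarding (RFC 3246), typical for voice

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```

Marking alone only labels the traffic; it delivers nothing unless the devices along the path also classify and police on the mark.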

Checkpoint 

Answer the following true/false questions.

  1. Latency is a measurement that is performed under traffic load conditions.

  2. Load conditions do not affect jitter.

  3. Packet drops under 10% are considered good.

Answers: 1. False; 2. False; 3. False.






Rick Gallaher's MPLS Training Guide: Building Multi-Protocol Label Switching Networks
ISBN: 1932266003
Year: 2003
Pages: 138
