QoS in MPLS Networks




Now that we have learned how to achieve QoS from end to end, we need to look at how MPLS can assist us in achieving end-to-end QoS.

QoS and CoS

First, let's explore the difference between QoS and CoS (Class of Service).

CoS is a term that is used in ATM networks and is defined by ATM standards. CoS allows traffic to be placed into different queues.

QoS defines ways to achieve traffic behavior that is objectively measurable. QoS guarantees end-to-end performance.

Many people think of CoS as it relates to Frame Relay or ATM.

Groupings:

  • Unreliable, don’t-care applications

  • Unreliable, time-sensitive applications (VoIP)

  • Reliable, non-time sensitive applications

  • Reliable, time-sensitive applications

These groupings (Figure 8.15) can be broken down into ATM-style service types, such as CIR, VBR, and UBR. Some of these map to latency-sensitive applications (such as SNA networks) or synchronized databases.

Long before the days of MPLS, ATM and Frame Relay provided Quality of Service, and carriers were committed to delivering levels of service as defined and policed by the FCC. There is much concern as to whether MPLS can accommodate these QoS requirements and satisfy a given Service Level Agreement (SLA). The fines are stiff for SLA violations, so public carriers are cautious about adopting the new technology (and QoS measures for MPLS). As you can see from Figure 8.14, there are methods to map CoS and QoS parameters.

Figure 8.14: QoS – CoS

Figure 8.15: CoS – QoS Mapping

Myriad levels and mechanisms exist for achieving QoS. One can think of QoS groupings as being analogous to flight bookings with an airline – first class, coach, and standby. In IP, we call these grades of service Guaranteed, Controlled Load, and Best Effort.

In addition, QoS can be defined with far greater granularity than this would suggest, but there are issues of manageability and marketing. Just how many levels of service do the clients demand, and what are the operational costs of providing these services?

Having cleared that up, let’s look at how to map traffic to MPLS QoS, examining problems and points at which a network needs to be managed.

Mapping: L-LSP vs. E-LSP

So far we have shown that markings from the LAN or WAN can be mapped directly to the MPLS header using the EXP bits. This method has become known as the E-LSP method (EXP-Inferred-PSC LSP). With only three bits in the EXP field, eight classes can be mapped.

The other method of mapping CoS/QoS is to map a label to an FEC with QoS parameters. For example, labels 100-200 would be First Class on LSP-A. This method is called the L-LSP method (Label-Only-Inferred-PSC LSP). The L-LSP method is more flexible than the E-LSP method, but to date it has not been implemented. See Figure 8.16.

Figure 8.16: E-LSP/L-LSP
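
To make the distinction concrete, here is a minimal Python sketch of the two inference methods. It is illustrative only: the class names and the label range of 100-200 for first-class traffic on LSP-A come from the example above, not from any standard or vendor implementation.

def e_lsp_class(exp_bits: int) -> int:
    """E-LSP: the three EXP bits in the shim header select one of eight classes."""
    assert 0 <= exp_bits <= 0b111
    return exp_bits                            # classes 0-7

def l_lsp_class(label: int) -> str:
    """L-LSP: the label itself implies the scheduling class."""
    if 100 <= label <= 200:                    # first-class range on LSP-A (example)
        return "first-class"
    return "best-effort"

print(e_lsp_class(0b101))                      # 5
print(l_lsp_class(150))                        # first-class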

IP Traffic Trends

Service providers typically subscribe CIR traffic at a 1-1 ratio and VBR at a 3-1 ratio. They generate a great deal of revenue by oversubscribing their IP traffic at a 50-1 ratio.
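
The arithmetic behind these ratios is straightforward. The following sketch uses assumed numbers (100 customers, each sold 1 Mbit/s) purely to show how much capacity each subscription ratio requires the provider to provision.

subscribed_mbps = 100 * 1.0                    # 100 customers at 1 Mbit/s each (assumed)

for service, ratio in [("CIR", 1), ("VBR", 3), ("IP", 50)]:
    provisioned = subscribed_mbps / ratio
    print(f"{service}: sold {subscribed_mbps:.0f} Mbit/s, "
          f"provisioned {provisioned:.0f} Mbit/s ({ratio}-1)")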

This oversubscription at the edge of the network generates profits, but it causes unpredictable behavior in the network. We find that, even if the core of the network has ample bandwidth, QoS problems surface during peak busy hours because the edge routers are overworked; lacking sufficient instantaneous bandwidth, they experience queuing delays or even packet loss.

In Figure 8.17, we see so many packets attempting to enter the router at one time that only a few can squeeze through.

Figure 8.17: Too Many Packets Trying to Enter Router

This traffic must be managed, and queue controls must be in place in order to avoid irreparable loss of service. Several queuing methods could be used in this situation. The simplest method is Random Early Detection (RED). The RED method looks at a queue and determines when traffic should be discarded.

In Figure 8.18, we see the basic rules of RED. There is only so much memory in a queue, and when it becomes saturated, non-optimal (bad) things begin to happen to the packets. In RED, upper and lower limits (thresholds) are set.

Figure 8.18: Basic RED Rules

In this case, 40% is set for the lower limit and 90% is set for the upper limit.

The rules are simple: all traffic that is below the lower limit (threshold) will be preserved, and all traffic that extends beyond the upper limit will be discarded.

Traffic between the upper and lower limits has a probability of being discarded, and the probability of discard increases as the number of packets increases.
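
A minimal Python sketch of this discard decision follows, using the 40%/90% thresholds from the example above. Note that production RED implementations compute a moving average of queue depth and usually cap the drop probability well below 1.0; this sketch uses the instantaneous queue fill for simplicity.

import random

def red_drop(queue_fill: float, min_th: float = 0.40, max_th: float = 0.90,
             max_p: float = 1.0) -> bool:
    """Return True if RED discards an arriving packet.
    queue_fill is queue occupancy as a fraction (0.0-1.0)."""
    if queue_fill < min_th:
        return False                           # below the lower limit: always keep
    if queue_fill >= max_th:
        return True                            # beyond the upper limit: always discard
    # Between the limits, drop probability ramps up linearly with occupancy.
    p = max_p * (queue_fill - min_th) / (max_th - min_th)
    return random.random() < p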

In Figure 8.19, you can see the response curve for RED performance.

Figure 8.19: Simple RED Queuing Response Curve

Figure 8.20 shows the queue before RED is turned on. With RED turned on, we get traffic shaping, and the queue looks like the image in Figure 8.21.

Figure 8.20: Traffic to Be Queued

Figure 8.21: Dropped Packet Percentages after RED Shaping

The main problem with RED is that it discards packets regardless of importance or any QoS standards. Figure 8.22 illustrates the problem with RED.

Figure 8.22: The Problem with RED

An improvement on RED is seen with the use of a weighted algorithm. Packets can be discarded in inverse order of importance. Lowest-priority packets get discarded first, and highest-priority packets get discarded last (Figures 8.23 and 8.24).

Figure 8.23: WRED Theory

Figure 8.24: Priority Bits in IP Header

In Figure 8.24, we see that packets whose priority bits are set high get discarded much later and have a lower probability of discard. When the maximum threshold is reached or exceeded, all packets get discarded, regardless of markings.

Several vendors have chosen to use the ToS/Priority field for WRED (Weighted Random Early Detection – RED with a weighted algorithm). In this case, if the priority field is marked 000, the packets are highly eligible for discard; when they are marked 111, they are least likely to be discarded.
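
Building on the red_drop sketch above, the weighting can be expressed as a per-precedence pair of thresholds, so that low-priority packets become discard-eligible earlier. The threshold values here are assumptions for illustration, not vendor defaults.

# Per-precedence RED profiles: (lower threshold, upper threshold) -- assumed values.
WRED_PROFILES = {
    0b000: (0.20, 0.90),                       # marked 000: highly discard-eligible
    0b011: (0.40, 0.90),
    0b111: (0.70, 0.90),                       # marked 111: least likely to be discarded
}

def wred_drop(queue_fill: float, precedence: int) -> bool:
    min_th, max_th = WRED_PROFILES.get(precedence, (0.40, 0.90))
    return red_drop(queue_fill, min_th, max_th)    # reuses the RED sketch above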

Using RED or WRED, edge devices are able to manage bursty traffic even when it exceeds the available bandwidth.

Other than queue management, how can QoS be handled? The bottom line is that QoS in an MPLS network can be treated as it is in an IP network. We can use over-provisioning, DiffServ, RSVP/IntServ, and queue management. The issue that confronts many implementers is how IP packets will relate to MPLS QoS. The problem lies in the fact that customers may or may not be able to manage QoS in their networks, i.e., their packets may or may not be marked.

Let's look at the simplest implementation of QoS in an MPLS network – that is, packets that are not marked for QoS when they are delivered to the demarc.

QoS in MPLS Without Markings

Customers may not be able to mark their packets for special treatment, but they may need to separate traffic bound for one destination by application type. For example, a production application may require CIR (Committed Information Rate) treatment, while VoIP may require VBR treatment, and HTTPS requires a higher priority than e-mail.

If the customer has not marked any packets, then the ingress LER can be set to map traffic according to port number. The traffic can then be mapped to an LSP. For example, LSPs can sustain traffic engineering for CIR, VBR (Variable Bit Rate), and UBR (Unspecified Bit Rate), and can be provisioned at 1-1, 3-1, and 20-1, respectively. This simple mapping of traffic to LSPs could be called label-based (L-LSP-style) QoS.
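
A minimal sketch of this port-based classification at the ingress LER might look like the following; the port-to-LSP table is an illustrative assumption, standing in for whatever mapping the provider provisions.

# Hypothetical port-to-LSP table (illustrative only).
PORT_TO_LSP = {
    443:  "LSP-CIR",                           # HTTPS -> committed rate (1-1)
    5060: "LSP-VBR",                           # VoIP signaling -> variable rate (3-1)
    25:   "LSP-UBR",                           # e-mail -> unspecified rate (20-1)
}

def classify_unmarked(dst_port: int) -> str:
    """Classify unmarked customer traffic by destination port."""
    return PORT_TO_LSP.get(dst_port, "LSP-UBR")    # default: best effort

print(classify_unmarked(443))                  # LSP-CIR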

Mapping MPLS to an FEC by port number or application is a very simplistic method of achieving some level of QoS (Figure 8.25), but it does not solve the problems of unpredictability. Recalling that QoS entails marking, classifying, and policing traffic, we must ensure that we have instantaneous bandwidth. Mapping unmarked packets to an LSP does give some level of protection, but it does not fully address all the issues.

Figure 8.25: QoS without Marked Packets

MPLS with Pre-Marked Packets

We learned when discussing theory that an Enterprise network could mark packets for QoS. The marking protocols are: 802.1Q/p markings, precedence/ToS bit markings, and DiffServ markings.

Figure 8.26 shows the traditional marking of precedence bits. These bits can be marked by the clients and used not only for WRED, but also for packet treatment within the network. The core running MPLS at Layer 2.5 does not see these precedence markings. In order to ensure that packets are afforded proper QoS treatment in the core, these bits must be mapped to the MPLS header.

Figure 8.26: Precedence Bits Marked

One of the functions of an LER is to take these precedence bits and map them directly to the experimental (Exp) bits in the MPLS header. The Exp bits in the MPLS header can be read and interpreted by the core routers. A bit pattern of 000 could mean, "treat as best effort", while a bit pattern of 111 could mean, "treat as highest priority", "do not discard", and so on.
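
As a sketch of this mapping (not any particular vendor's implementation), the following copies the three precedence bits from the ToS byte into the EXP field of a 32-bit shim entry (label:20 | EXP:3 | S:1 | TTL:8).

def map_precedence_to_exp(tos_byte: int, shim: int) -> int:
    """Copy the IP precedence bits into the MPLS EXP field."""
    precedence = (tos_byte >> 5) & 0b111               # high 3 bits of the ToS byte
    return (shim & ~(0b111 << 9)) | (precedence << 9)  # EXP occupies shim bits 9-11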

Vendors have each implemented bit mapping differently. Some vendors map precedence bits to Exp bits by default, some vendors don’t map these bits at all, and still others allow for a ToS mask, wherein any combination of bits can be mapped.

Figure 8.27: ToS and DiffServ Bits Relationship

Use of DiffServ Markings

Other customers may deliver packets that are marked with DiffServ instead of with precedence bits. In Figures 8.28 and 8.29, we see the relationship between the IPv4 ToS field and the DiffServ field.

Figure 8.28: Precedence Bit Mapping

Figure 8.29: ToS Bits Copied to Exp Bits

In Figures 8.27 and 8.28, we see that DiffServ really comprises two classifications of traffic: the first is class; the second is drop precedence. If you stand in airport lines as much as I do, you can easily see this in action.

At the airport, there are lines to the counters – regular customers, frequent flyers, and first-class passengers. The queues vary in size, but a typical scenario is one in which the regular customer line is very long and the first-class line is short. The same is true with routers; the volume of high-priority traffic will be much lower than that of routine traffic.

So, we have established the classes of traffic, but what about drop precedence? You are standing in the routine line, but you get a tap on the shoulder, and you are asked to step out of line and use the rapid-ticketing line instead. Or, you are in the first-class line, but there is only one agent available to issue tickets – you see that the routine line is moving faster, so you drop out of the first-class line and go to the routine line instead.

What do you do when the line in which you're standing is too long and you have a flight to catch? Packets experience the same issue. Routers can choose to keep a packet in line, drop it, or mark it as discard-eligible. This is the second part of the DiffServ field. The combination of class and drop precedence is expressed in a special notation, AF (Assured Forwarding) xy, where x = class and y = drop precedence; thus AF11 means "class 1, drop precedence 1". In this DiffServ game of marking and processing, numbers are valued as they are in golf – which is to say that the better score is the lower score. For example, AF11 is better than AF21.

In Figure 8.30, we see the details in action at the bit level. The eight bits that were used for the ToS field now become the DiffServ code point (DSCP) and the currently unused (CU) bits. Figure 8.30 also shows a further breakdown of the DSCP into two prominent sections: a class field and a drop-precedence (DP) field. In Figure 8.31, we see how these bits are mapped into class and drop precedence, so that AF11 is the bit pattern "001 01 0 00".

Figure 8.30: Detailed DiffServ Code Point Format

Figure 8.31: Details of Bit Pattern for AF 11
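
This bit layout is easy to verify in a few lines of Python; the helper below is purely illustrative.

def af_bit_pattern(af_class: int, drop_prec: int) -> str:
    """Render the 8-bit DS byte for an AF code point:
    3 class bits | 2 drop-precedence bits | reserved 0 bit | 2 CU bits."""
    assert 1 <= af_class <= 4 and 1 <= drop_prec <= 3
    return f"{(af_class << 5) | (drop_prec << 3):08b}"

print(af_bit_pattern(1, 1))                    # AF11 -> 00101000, i.e. "001 01 0 00"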

We can map only 3 of the 6 available DSCP bits to the Exp field; many vendors have chosen to map the high-order three bits (the class bits) to the Exp field.
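
That vendor choice reduces to a shift and a mask; a one-line sketch (illustrative only):

def dscp_class_to_exp(ds_byte: int) -> int:
    """Map the three high-order (class) bits of the DS field to MPLS EXP."""
    return (ds_byte >> 5) & 0b111

print(dscp_class_to_exp(0b00101000))           # AF11 byte -> EXP 001 (prints 1)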

DiffServ assures that the tunnel has the required traffic policing characteristics; it marks, classifies, and polices. It cannot guarantee, however, that bandwidth is going to be available when you need it. DiffServ is DiffServ, and whether it is in IP or MPLS, it does not check for bandwidth before a call is placed.

In order to achieve instantaneous bandwidth from end to end, we need to add the RSVP protocol. RSVP checks for bandwidth before a call is placed, and it continues to request bandwidth for ongoing messages or flows.

RSVP is a per-flow QoS process, whereas DiffServ is a per-tunnel process. RSVP gives a great level of QoS control, but overhead increases with its implementation. You may choose to mix and match RSVP and DiffServ, or to use only one method. All of these decisions will hinge on the needs of your customer base.

In Figures 8.32–8.34, we see that achieving end-to-end QoS requires deploying several QoS methodologies: over-provisioning, queue management, DiffServ, and RSVP/IntServ.

Figure 8.32: What Is Needed for End-to-End QoS?

Figure 8.33: MPLS End-to-End QoS Process

Figure 8.34: QoS per MPLS Elements


