Problem Statement


Because a large number of enterprises and service providers (SPs) are considering IP/MPLS for their next-generation network (NGN) convergence, expectations of an IP/MPLS network are high. We have often seen IP/MPLS networks compared to Frame Relay/ATM (FR/ATM) networks and FR/ATM QoS. Enterprises are accustomed to the following bandwidth models:

  • Frame Relay committed information rate (CIR).

  • ATM constant bit rate (CBR); also referred to as guaranteed bit rate service.

  • ATM variable bit rate (VBR) (nonreal-time [NRT] and real-time [RT] for delivery of video services).

No such bandwidth or bounded-delay services are possible in a plain IP network, however. MPLS and QoS can certainly help mimic the FR/ATM QoS behavior.

Enterprises commonly use ATM and Frame Relay as access circuits into provider networks, even for IP access, and they expect peak and sustained information rates based on the connection-oriented nature of the Frame Relay or ATM network. Because MPLS has label-switched paths (LSPs), it is often wrongly assumed that MPLS brings the connection-oriented nature of the circuit-switched world to the IP network. Although packets are always sent along a designated LSP, no one-to-one relationship exists between the sender and the receiver for an LSP in packet networks. In fact, at the receiving end, packets can arrive from any source along the same LSP. This notion is referred to as the multipoint-to-point capability of MPLS. Because of the lack of a one-to-one relationship between the source and destination, not all connection-oriented QoS models can be applied. However, enough similarities exist between traditional ATM and Frame Relay networks and MPLS networks to borrow a number of components and apply them in the MPLS network.

Packet loss is also an important component of service delivery and SLAs. Although packet loss can be handled on an end-to-end basis using retransmission for TCP, it cannot be handled with the User Datagram Protocol (UDP). Packet loss can occur due to network changes. Although some failures, such as link or node failures, are out of the operator's control, network nodes must not drop packets in the queues unless by design (for instance, dropping low-priority traffic). As an example, hardware must be capable of guaranteeing the delivery of the highest-priority traffic no matter how congested the low-priority queues are. Packet loss within a network node is a function of hardware design and the implementation of queuing, policing, and other QoS functions.

Voice and video traffic needs bandwidth, delay, and jitter guarantees from the network in addition to a low packet loss guarantee. Because of the nondeterministic behavior of packet networks, the QoS guarantees provided are not the same as those provided by circuit-switched networks. IP QoS models, such as DiffServ, provide a method of packet delivery that allows the prioritization of sensitive traffic, and the DiffServ QoS model is widely deployed by service providers and enterprises. Given the deployment of DiffServ IP QoS, the constraint-based shortest path first (CSPF) capability of MPLS traffic engineering, and admission control, we must consider whether we can build a model that combines these elements to deliver better QoS than a plain IP network does and that mimics, as closely as possible, the behavior of the circuit-based QoS model.

IP QoS

To address the problem statement, let us first try to understand what IP QoS can and cannot offer and review some basic building blocks of QoS.

QoS Building Blocks

QoS has a foundation of basic building blocks that allow traffic characterization or classification, policing, queuing and random discard, scheduling, and transmission. Each of these building blocks plays a vital role in implementing QoS in IP networks.

  • Traffic classification and marking: To provide the right QoS behavior for applications, traffic needs to be classified. Traffic classification simply means identifying traffic types for treatment in the network. Traffic can be classified based on any criteria. A simple criterion is the source and destination address; another could be the protocol type or the application type. A third could be the existing traffic marking, and a fourth could be deep packet inspection and the identification of payload types, such as web URLs, transactions, interactive gaming, and so on. After the traffic is classified, it is marked for appropriate treatment in the network. The marking is done by setting the DiffServ field or the IP type of service field in the IP header.

  • Policing: Traffic policing is done to identify whether the incoming traffic is in contract or out of contract. Traffic is in contract if the user is sending it at the specified rate and interval (and not exceeding that rate and frequency). A policer determines whether too much traffic is coming in and sets up traffic for transmission or discard. For example, traffic can be policed and out-of-contract traffic can be re-marked with a different (lower-grade) QoS label for best-effort service delivery, so that if congestion occurs, the out-of-contract traffic can be dropped. If the operator does not police, it does not know whether the links are oversubscribed; policing is key to determining the actual over-subscription factor. For better traffic control, policing can be selectively applied to various QoS classes to meet specific QoS delivery targets or traffic contracts. (A minimal policer sketch appears after this list.)

  • Queuing and random discard: When the incoming traffic rate is greater than the outgoing traffic rate, the traffic must be queued; otherwise, it is discarded. Traffic can be queued based on individual flows or based on some aggregate QoS groups or classes. For example, 100 voice flows can be queued separately, resulting in 100 queues that can each be serviced fairly; alternatively, all 100 flows can be queued into the same class queue, and the class queue can be serviced at an aggregate level with the highest priority. By queuing flows separately, isolation is achieved between flows, so one flow can be prevented from potentially hogging the bandwidth of other flows. Voice flows cannot hog bandwidth because standard codecs generate traffic at a fixed rate, but hogging is a real concern when voice, video, mission-critical data, and best-effort data share a queue. A video flow can hog bandwidth and starve the ERP traffic or voice traffic if they are both lined up in the same queue. However, with per-flow queuing, a large number of flows requires a large number of queues and some weighted fair queuing mechanism, which could be a scale issue. The method most frequently recommended is to queue voice packets in the highest-priority queue and place video and ERP traffic in separate queues to provide isolation and maintain QoS delivery guarantees. Random discard can be applied on a queue when the queue builds up due to a high incoming rate of traffic. Random discard or weighted random early detection (WRED) can be applied to a queue to discard lower-priority packets or out-of-contract packets from the queue; with this technique, you can avoid losing the higher-priority packets. Assuming the flows use the Transmission Control Protocol (TCP), lost packets can be recovered through retransmission. For a detailed explanation of random early detection (RED) and the effect it has on the network, see [RED], RFC 2597, and Figure 9-1.

    Figure 9-1. Queuing and RED

  • Scheduler: Queues can be serviced at a specified rate. If all queues are serviced fairly, this is called weighted fair queuing (WFQ), meaning queues are serviced in a fair manner such that equal amounts of data are transmitted from each queue in one cycle. The queues can be weighted to provide a bias for the high-priority traffic. If class-based queuing is done, each class can be serviced at a specified rate to provide fairness to all traffic. Alternatively, a strict priority scheduler is one in which all traffic in a queue is serviced first until no packets remain in the queue. Only then are other queues served. If the priority queue always has packets to send, other queues could get starved.

  • Transmission: Packet transmission on the wire is also an important factor. For example, voice packets are small (usually 64 bytes), whereas data packets can be large. Especially on low-speed links, where the serialization delay is large, voice packets stuck behind large data packets can affect the link delay budgets and ultimately the voice quality. You might want to fragment larger packets into smaller chunks and interleave the voice packets to stay within link delay budgets. Serialization delay has no meaningful effect on high-speed links; it is most pronounced at sub-T1/E1 rates, that is, data rates of 768 kbps or less.
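
To make the policing building block concrete, the following is a minimal single-rate token-bucket policer sketch in Python. It is an illustration only, not a vendor implementation: the class name, the rates, and the choice to re-mark rather than drop out-of-contract packets are assumptions.

    import time

    class TokenBucketPolicer:
        """Single-rate token-bucket policer (illustrative sketch only):
        in-contract packets keep their marking; out-of-contract packets
        are re-marked to a lower-grade class instead of being dropped."""

        def __init__(self, cir_bps, burst_bytes):
            self.rate = cir_bps / 8.0        # committed rate in bytes per second
            self.capacity = burst_bytes      # committed burst size
            self.tokens = burst_bytes        # bucket starts full
            self.last = time.monotonic()

        def police(self, packet_len, marking):
            now = time.monotonic()
            # Refill tokens for the time elapsed since the last packet
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_len <= self.tokens:
                self.tokens -= packet_len
                return marking               # in contract: keep the marking
            return 0                         # out of contract: re-mark to best effort

    # Usage: police a 1500-byte packet marked 4 against a 2-Mbps contract
    policer = TokenBucketPolicer(cir_bps=2_000_000, burst_bytes=16_000)
    new_marking = policer.police(packet_len=1500, marking=4)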

The building blocks previously described are used in any QoS model, whether it is signaled QoS (specific source signals for QoS) or provisioned QoS (manually preprovisioned by the operator).

By using the previously described building blocks, IP networks can provide a statistical guarantee for the traffic. Statistical guarantee refers to a specified delivery of data for a certain percentage of the time; for instance, a net data rate of X kbps 98 percent of the time. In contrast, ATM networks can deliver an absolute guarantee by delivering a data rate of X kbps 100 percent of the time using CBR services.

The IETF has developed two main models for delivering QoS. Both use the basic QoS building blocks of queuing, policing, and discard mechanisms to deliver QoS. The first model developed is a per-flow QoS model known as Integrated Services (IntServ). Because of scalability issues with per-flow models, the IETF also developed an aggregate model called Differentiated Services (DiffServ). Each of these models classifies the incoming traffic, polices it if necessary, queues it and applies WRED, and schedules the traffic on the wire. However, the differences lie in the granularity, or the amount of state stored in each of these models. See Figure 9-2 for a quick comparison of IntServ and DiffServ.

Figure 9-2. Integrated Services Versus Differentiated Services


By using these IP QoS models, traffic can be prioritized and delivered to the destination. The next section discusses these QoS models in a bit more detail.

IntServ

As we mentioned earlier, IntServ is a per-flow QoS model. In this model, a reservation is made for every flow; each flow is classified by the five-tuple (source address, destination address, source port, destination port, and the IP ToS marking), policed against its traffic contract, and placed into its own queue. Other important elements of IntServ are signaling and admission control.

IntServ uses RSVP signaling for setting up flow information along the path. RSVP PATH messages are sent from the source to the destination, and at each hop along the path, the flow state is initialized. When the PATH message reaches the destination, the destination device can decide to accept the reservation and send a RESV message back to the sender along the same path. The RESV message travels hop by hop back to the source. At each hop, admission control is performed to check whether bandwidth exists on the link and queue space is available on the node, a policer is set up to police the flow for the reserved bandwidth, and a queue is allocated.
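
The per-hop processing just described can be sketched as follows. This is a simplified illustration of the admission decision made when a RESV message arrives at a hop; the data structures, the fixed queue limit, and the numbers in the usage example are assumptions, not the actual RSVP protocol machinery.

    class Hop:
        """Per-hop IntServ state (illustrative sketch only)."""

        def __init__(self, capacity_kbps, max_queues):
            self.capacity = capacity_kbps   # link bandwidth available for reservations
            self.reserved = 0               # bandwidth already reserved by admitted flows
            self.max_queues = max_queues    # queue space available on this node
            self.flows = {}                 # five-tuple -> reserved kbps (per-flow state)

        def admit(self, five_tuple, requested_kbps):
            """Admission control when a RESV message arrives: check link
            bandwidth and queue space, then install per-flow state
            (a policer and a dedicated queue) if the reservation fits."""
            if self.reserved + requested_kbps > self.capacity:
                return False                # not enough bandwidth on the link
            if len(self.flows) >= self.max_queues:
                return False                # no queue space left on the node
            self.flows[five_tuple] = requested_kbps
            self.reserved += requested_kbps
            return True

    # Usage: a 64-kbps voice flow requests a reservation at this hop
    hop = Hop(capacity_kbps=2048, max_queues=256)
    flow = ("10.1.1.1", "10.2.2.2", 16384, 16384, "udp")
    accepted = hop.admit(flow, 64)          # the RESV proceeds upstream only if True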

IntServ is a great model for per-flow QoS and is the only model that provides admission control on a per-flow basis. However, the core of the network carries thousands, and maybe even hundreds of thousands, of flows. Per-flow QoS therefore does not scale well at the core of the network because of the amount of state that must be maintained for these flows. Aggregation of flows is needed to scale the IntServ model to a large number of flows. This can be achieved either by creating fat reservations that aggregate individual flow reservations or by using traffic engineering tunnels to aggregate the individual RSVP flows. Another form of aggregation is to queue flows based on classes while still performing admission control on a per-flow basis. We explore this option a bit more in subsequent sections. The admission control capability of RSVP makes this protocol useful in VoIP QoS. It provides a feedback mechanism to voice signaling about the availability of QoS from the network when voice calls are set up, rather than transmitting packets only to discover that the network does not have enough capacity, which degrades not only the new flow but also the existing flows.

DiffServ

In contrast to IntServ, DiffServ is more coarsely grained and aggregate-based and thus far more scalable. Here traffic is classified into traffic classes, and all traffic grouped into a class receives the same treatment from the network. Each class represents a per-hop behavior (PHB) that can be distinctly identified from other classes, and the QoS behavior between classes can vary on any QoS parameter. For example, all voice traffic can be classified into the expedited forwarding (EF) DiffServ class, and all bandwidth-guaranteed data can be classified into an assured forwarding (AF) class. Both the EF and AF classes are defined by the IETF DiffServ standards. The EF class means data must be forwarded through the node in an "expedited" manner; this class is characterized by low delay and low jitter. Similarly, an AF class is characterized by a bounded delay and is bandwidth guaranteed. For more details on DiffServ, read IETF RFC 2474, RFC 2475, and RFC 2430.
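
As a small illustration, the following sketch classifies packets into the standard PHBs by DSCP value; the EF code point is 46 (RFC 3246), and the AF code points follow RFC 2597. The function name and the default treatment of unknown values are assumptions made for this example.

    # Standard DiffServ code points: EF (RFC 3246) and the AF classes (RFC 2597)
    EF_DSCP = 46
    AF_DSCPS = {
        10: "AF11", 12: "AF12", 14: "AF13",
        18: "AF21", 20: "AF22", 22: "AF23",
        26: "AF31", 28: "AF32", 30: "AF33",
        34: "AF41", 36: "AF42", 38: "AF43",
    }

    def classify_phb(dscp):
        """Map a DSCP value to its per-hop behavior (illustrative sketch)."""
        if dscp == EF_DSCP:
            return "EF"                # low delay, low jitter (for example, voice)
        if dscp in AF_DSCPS:
            return AF_DSCPS[dscp]      # bandwidth-assured data classes
        return "BE"                    # everything else: best effort

    print(classify_phb(46))   # EF
    print(classify_phb(26))   # AF31
    print(classify_phb(0))    # BE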

Packet Handling

As mentioned earlier, a per-hop behavior is characterized by QoS parameters, such as bandwidth, delay, jitter, and packet loss. To achieve these characteristics, the basic QoS building blocks can be arranged in a way that produces the desired behavior. Traffic must be classified, marked, policed for over-subscription, queued, and scheduled for transmission. By adjusting the service ratios of the queues, a desired bandwidth partition can be achieved for each class of traffic.
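
The bandwidth-partitioning idea can be shown with a minimal weighted round-robin sketch; the class names and the 4:3:2:1 weights are assumptions chosen only to illustrate how service ratios translate into a bandwidth split.

    from collections import deque

    # Per-class queues with service weights; when all queues are backlogged,
    # the weights define the bandwidth partition each class receives
    # (approximately, assuming comparable packet sizes).
    queues = {
        "voice": (deque(), 4),        # served up to 4 packets per cycle
        "video": (deque(), 3),
        "data": (deque(), 2),
        "best_effort": (deque(), 1),
    }

    def enqueue(cls, packet):
        queues[cls][0].append(packet)

    def schedule_one_cycle():
        """One weighted round-robin cycle: each queue is served up to its
        weight, giving roughly a 4:3:2:1 partition (sketch only)."""
        transmitted = []
        for cls, (q, weight) in queues.items():
            for _ in range(weight):
                if not q:
                    break
                transmitted.append((cls, q.popleft()))
        return transmitted

    # Usage: backlog every queue and run one scheduling cycle
    for cls in queues:
        for i in range(5):
            enqueue(cls, f"{cls}-pkt{i}")
    print(schedule_one_cycle())   # voice gets 4 slots, video 3, data 2, best_effort 1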

The Hybrid Model

DiffServ addresses the scalability problem of IntServ by aggregating flows. However, DiffServ lacks a key component of IntServ: admission control. Admission control is key to controlling over-subscription; policing tells you only the current state of the network. In a DiffServ model, if the traffic is over-subscribed, it is simply dropped when congestion occurs, and there is no feedback mechanism to tell the end user whether the traffic is getting through. For example, in a voice call model, say the eleventh caller comes in on a link with a capacity of ten calls. Without admission control, the eleventh call might be admitted and could degrade the quality of all calls currently in progress.

Using the RSVP signaling of IntServ can provide a means of feedback to the voice gateways and end points that no more capacity exists in the network and that the caller should try the call at a later time.

A hybrid model uses the RSVP signaling for the admission control and the feedback mechanism while maintaining aggregate information in the core of the network. For instance, admission control can be done on the EF class queue with a specified bandwidth for voice calls, and the scheduling can be done based on DiffServ, thereby scaling the QoS model extremely well. MPLS adds some variations to these IP QoS models; they are discussed in the paragraphs that follow.

Other methods of call admission control can be overlaid on the network for voice and video. The most common method is call counting, applied on the voice gateway. It is a simpler method because you preprovision bandwidth and restrict the number of calls on any given link. For simpler topologies, such as hub-and-spoke networks with predictable traffic patterns, the call counting method works well. However, for large mesh networks, in which the network state, and hence the available bandwidth, can change at any given time, call counting does not work at all. The most accurate call admission control is based on network resource status, which makes network-based admission control necessary. Flow-based admission control schemes have appeared recently. However, flow-based admission control schemes do not provide any feedback to the end user/station or call (voice/video) terminal about the acceptance or rejection of the call. A signaling protocol, in contrast, can provide that feedback, distinguishing the line-busy condition from the end-user-busy condition.

MPLS QoS

Because MPLS uses an IP network with IP routing protocols, it also uses the same IP QoS models. MPLS QoS does not change the IP DiffServ model of traffic classification, marking, policing, queuing, scheduling, and transmission. However, the IntServ model of IP QoS is different in MPLS networks.

MPLS DiffServ

MPLS DiffServ is similar to IP DiffServ. In MPLS DiffServ, packets are marked with the EXP field instead of the IP ToS/DSCP byte of the IP header. Packets are queued based on the EXP marking, and WRED is applied on that marking. The other basic building blocks remain the same as in IP DiffServ. The DiffServ field of the IP header contains 6 bits for DSCP marking, whereas MPLS labels have only 3 EXP bits; therefore, the number of classes in IP DiffServ can be 2^6 = 64, while in MPLS, based on EXP bits alone, the number of classes can be only 2^3 = 8. Mapping IP DiffServ classes to MPLS DiffServ classes can be straightforward; however, the MPLS EXP field cannot accommodate more than 8 MPLS DiffServ classes because it does not have enough bits. In this case, the IETF RFCs state that a label value together with the EXP field can be treated as identifying a class of traffic. For example, 8 LSPs with 8 classes of traffic each can be signaled to provide 64 classes of service. Label-inferred class of service allows unique LSPs to be set up that carry a specified class of traffic; these are commonly referred to as L-LSPs. A single LSP that carries multiple EXP markings of traffic is commonly referred to as an E-LSP. Mapping IP DSCP can be done in many ways, with one-to-one or many-to-one mapping. Table 9-1 shows a simple IP CoS/ToS (class of service/type of service field) mapping.

Table 9-1. IP Type of Service to MPLS Class of Service Mapping

IP ToS    MPLS EXP    Comment
7         7           Control and management
6         6           Voice
5         5           Video
4         4           Data: business-critical
3         3           Data
2         2           Data: e-mail, bulk
1         1           Data: better than best effort
0         0           Best effort: web


Another example of label-inferred class of service is shown in Table 9-2.

Table 9-2. Label-Inferred Class of Service

IP DSCP    MPLS EXP    MPLS Label    Comment
0-7        0-7         10            Lowest class
8-15       0-7         20            Grade 1
16-23      0-7         30            Grade 2
24-31      0-7         40            Grade 3
32-39      0-7         50            Grade 4
40-47      0-7         60            Grade 5
48-55      0-7         70            Grade 6
56-63      0-7         80            Grade 7 (highest grade)


Table 9-2 shows some sample buckets of IP DiffServ classes with MPLS DiffServ. MPLS EXP 5 in Grade 3 is differentiated from MPLS EXP 5 in Grade 4 by looking at the labels of 40 and 50. In this example, the queuing must be done based on label classification. Each LSP is called an L-LSP with label-inferred class of service.
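
The two mappings in Tables 9-1 and 9-2 can be written as a small sketch. The function names are assumptions, and the label values simply mirror the example labels in Table 9-2.

    def exp_for_elsp(ip_precedence):
        """E-LSP mapping from Table 9-1: a single LSP carries all classes, and
        the 3-bit EXP field alone selects the per-hop behavior (one-to-one here)."""
        return ip_precedence & 0x7          # IP precedence 0-7 maps directly to EXP 0-7

    def label_and_exp_for_llsp(dscp):
        """L-LSP mapping from Table 9-2: the label identifies the grade of
        service (10, 20, ..., 80), and EXP still carries the 3 low-order bits."""
        grade = dscp // 8                   # DSCP 0-7 -> grade 0, ..., 56-63 -> grade 7
        label = (grade + 1) * 10            # example labels 10 through 80 from Table 9-2
        exp = dscp & 0x7
        return label, exp

    print(exp_for_elsp(5))                  # 5 (the video row of Table 9-1)
    print(label_and_exp_for_llsp(29))       # (40, 5): EXP 5 in Grade 3, set apart from
                                            # EXP 5 in Grade 4 by the label (40 versus 50)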

Using MPLS DiffServ has some notable advantages. One is that the IP DSCP or ToS information can be copied into the MPLS label header, or the MPLS label header can be set independently, irrespective of the IP ToS/DSCP value. By default, Cisco devices copy the information, which is referred to as uniform mode QoS.

If the MPLS label header is independently set rather than being copied from the ToS/DSCP byte, then depending on the configuration, the IP header information can be retained as is. In this manner, IP QoS is tunneled through the MPLS network; this is referred to as tunnel mode QoS. Tunnel mode QoS is important for the following reasons:

  • For an unmanaged service, IP QoS values from the customer might or might not be trusted. The provider has two options: either rewrite the IP header with a new QoS value or tunnel the IP QoS through the MPLS QoS. By using independent classification and marking of MPLS packets, the SP avoids any trust issues and is in control of the network.

  • The customer packet marking might or might not coincide with the provider markings. For example, a customer might mark all voice packets with ToS 5, whereas the provider might mark the highest grade of service with a 4 and hold the 5 marking for management traffic. Moreover, the customer might like to maintain the QoS values because their local area network (LAN) infrastructure is configured to accommodate the QoS values. In this case, the customer would look for tunneling of its QoS value through the MPLS network.

Tunnel mode QoS is important for CoS transparency in single or multiple networks. Even if the service spans a single or multiple AS with tunnel mode QoS, the transparency can be maintained. The trick is in not copying the QoS value at the ingress PE and not recopying it back from the MPLS header to the IP packet at the penultimate hop (if PHP is used) or at the egress PE (if the ultimate hop or explicit NULL label is used).
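
A minimal sketch of the ingress and egress behavior just described follows; the mode names match the text (uniform versus tunnel mode), but the packet representation and function names are assumptions made only for illustration.

    def ingress_pe_push(ip_precedence, mode, provider_exp=None):
        """Impose the MPLS label at the ingress PE.
        uniform mode: copy the customer's IP precedence into EXP.
        tunnel mode:  set EXP from provider policy and leave the IP header alone."""
        if mode == "uniform":
            return {"exp": ip_precedence, "ip_precedence": ip_precedence}
        return {"exp": provider_exp, "ip_precedence": ip_precedence}

    def egress_pe_pop(packet, mode):
        """Remove the label at the egress (or penultimate) hop. In tunnel mode
        the EXP value is NOT copied back, so the customer's original IP QoS
        marking is delivered unchanged (CoS transparency)."""
        if mode == "uniform":
            packet["ip_precedence"] = packet["exp"]
        return packet["ip_precedence"]

    # Usage: the customer marks voice with precedence 5; the provider uses EXP 4
    # for its highest data grade (hypothetical values from the example above)
    pkt = ingress_pe_push(ip_precedence=5, mode="tunnel", provider_exp=4)
    print(egress_pe_pop(pkt, mode="tunnel"))   # 5: the customer's marking survives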

Traffic Engineering and DiffServ

As discussed in detail in Chapter 8, "Traffic Engineering," traffic engineering by design is a control plane function and does bandwidth accounting only in the control plane. Traffic engineering keeps track of where tunnels are placed on different links/paths in the network, and DiffServ ensures that traffic receives priority while traversing each of those paths. In other words, DiffServ can be used for appropriate traffic prioritization, whereas traffic engineering places the traffic on different paths. These two independent functions can be used simultaneously in the same network, and together they can be useful in offering a better SLA. For example, delay-sensitive traffic, such as voice and video, can be routed over traffic engineering tunnels that are set up on low delay paths, with voice and data being prioritized differently.

TE tunnels can be set up with constraints, such as bandwidth, delay, and speed of the links. All links can be configured with DiffServ behavior so that packets are queued and scheduled according to MPLS EXP marking. The queuing and scheduling provides priority to packets with higher markings, whereas traffic-engineered LSP provides a steered path through the network. This combination of DiffServ and MPLS traffic engineering is sufficient for most networks to provide QoS-based services. Because the TE tunnels are set up at an aggregate level and through the network core, the scale issues here are similar to MPLS traffic engineering. This means, for all practical purposes, no scale issues exist.

DiffServ-Aware Traffic Engineering

DiffServ-aware traffic engineering is different from combining TE and DiffServ. In the previous section, you saw that DiffServ and TE can be used independently and simultaneously in the network. This application of QoS is the reverse of the previous case: there, the paths were set up by TE tunnels, and traffic flowing through those tunnels was marked with MPLS EXP and queued along the path. Here, each TE tunnel is set up with a stricter constraint for a class of service, against a tightly defined bandwidth pool. Link bandwidth is divided into two or more bandwidth pools. MPLS TE then sets up TE tunnels taking into account each of those bandwidth pools and performs admission control against multiple pools of bandwidth. For a fuller explanation, let us consider an example in detail.

Assume an operator has voice, video, and data traffic to send. The operator would like to use the lowest delay links for voice and the highest bandwidth links for video. Moreover, to avoid nondeterministic behavior, the operator decides to limit each type of traffic carried over any given link such that over-subscription of voice and video is minimal and over-subscription of data is high. In addition, by limiting the amount of each class of traffic on a link, delay and jitter are kept within design bounds for the desired SLA.

To partition each link, the operator has to configure multiple subpools on the link. Assuming a link of capacity of X Mbps, the subpool can be a fraction of X, such as 25 percent or 33 percent of X Mbps. Now this subpool information is flooded in the IGP in the same way that available bandwidth is flooded in the traffic engineering case.

When a traffic engineering application sets up the TE tunnel, the tunnel is specified to be DiffServ-aware by associating a class/subpool with it. This implies that the admission control and bandwidth accounting are done on the subpool of the links and not on the global pool of bandwidth on the link. By setting up tunnels this way, the maximum number of high-priority tunnels can be capped to the subpool bandwidth on any given link. If no subpool information is specified at tunnel setup, then by default the TE application uses the global bandwidth pool for admission control.

Bandwidth pools can be exclusive of each other, or pools can be stacked on each other. The stacked model is referred to as the Russian doll model, and the exclusive model is referred to as the maximum allocation model (MAM). In the MAM, the pools are independent of each other and are static with clearly defined boundaries. For example, in the MAM, a link of 10 Mbps can be partitioned in the following way:

  • 3 Mbps for voice traffic

  • 2 Mbps for video

  • 3 Mbps for business-critical traffic

  • 2 Mbps for the rest of the data

When you have no traffic to send in the higher-priority pools (in this case, voice), that pool's bandwidth cannot be used to set up video tunnels or tunnels for business-critical data. In other words, the link bandwidth is "hard" partitioned.

In the Russian doll model of bandwidth allocation, the pools are stacked on each other. Here is the same example with the Russian doll model:

  • 3 Mbps for voice traffic

  • 2 Mbps for video

  • 3 Mbps for business-critical data

  • 2 Mbps for the rest of data

Notice that the configuration is exactly the same as in the MAM. However, there is a key difference. Assume there is only 1 Mbps of voice traffic, so only one voice tunnel is set up on the link, with 1 Mbps. Also assume you have more than 2 Mbps of video traffic to send. Even if there is enough data, both business-critical and regular, to take up the next 5 Mbps of bandwidth, the video tunnels can be set up beyond the 2-Mbps limit and utilize the unused voice bandwidth. In actual practice, this works out to the following:

  • 3 Mbps of voice

  • Up to 5 Mbps of video traffic (it is only 2 Mbps of video if you have a full 3 Mbps of voice to send)

  • Up to 8 Mbps of business-critical data (it is only 3 Mbps if you have a full 3 Mbps of voice and 2 Mbps of video)

  • Up to full 10 Mbps of regular data (it is only 2 Mbps if you have voice, video, and business-critical data using its full quota)

Any bandwidth model can be made to work; operators must choose the model that best suits their operational needs. Operators can also choose to use preemption to preempt lower-priority tunnels. More details on these models, including their pros, cons, and variations, can be found in IETF RFC 4128.
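
The difference between the two models can be captured in a short admission-control sketch using the 10-Mbps example above. The class-to-pool layout mirrors the bullets; the data structures and function names are assumptions made for illustration.

    CLASSES = ["voice", "video", "business", "data"]    # highest to lowest priority

    # Maximum allocation model: each class has its own hard partition of the link.
    MAM_CAPS = {"voice": 3, "video": 2, "business": 3, "data": 2}    # Mbps

    # Russian doll model: cumulative caps, so a class can borrow bandwidth
    # left unused by the higher-priority (inner) classes.
    RDM_CAPS = {"voice": 3, "video": 5, "business": 8, "data": 10}   # Mbps, cumulative

    def admit_mam(reserved, cls, mbps):
        """Admit a tunnel only if its own class partition has room."""
        return reserved[cls] + mbps <= MAM_CAPS[cls]

    def admit_rdm(reserved, cls, mbps):
        """Admit a tunnel only if every doll containing this class stays
        within its cumulative cap."""
        start = CLASSES.index(cls)
        for outer in CLASSES[start:]:
            inner = CLASSES[:CLASSES.index(outer) + 1]
            if sum(reserved[c] for c in inner) + mbps > RDM_CAPS[outer]:
                return False
        return True

    # Usage: with only 1 Mbps of voice reserved, a third megabit of video is
    # rejected by MAM (the video cap is 2) but accepted by RDM
    # (voice 1 + video 2 + 1 = 4 <= 5).
    reserved = {"voice": 1, "video": 2, "business": 0, "data": 0}
    print(admit_mam(reserved, "video", 1))   # False
    print(admit_rdm(reserved, "video", 1))   # True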

DiffServ-aware TE is a powerful technique that can be used when tight SLA guarantees are required from the network. However, it comes at the expense of operational complexity. If such tight guarantees and tighter network control are unnecessary, plain DiffServ or DiffServ overlaid on TE might be sufficient.

MPLS QoS Service Examples

Among Cisco's customers, a great majority have QoS deployed in their networks. Some deployments are fairly simple models, such as access-network guarantees only; others are quite complex, with end-to-end SLA measurements on delay, jitter, and packet loss.

Here are some examples of QoS-based services that SPs offer.

Point-to-Cloud Model

In this model, the assumption is that the network core has plenty of bandwidth, so the only bottleneck is the access link. Usually, Frame Relay or ATM is the access mechanism, with CIR or SCR guarantees. The network core is over-provisioned and has no problem handling two or three times the sum of the access bandwidth.

The selling model in this QoS-based service is the same as the Frame Relay model. However, the attraction is this: instead of multiple Frame Relay VCs, only a single VC is needed from the site to the PE device. The same Frame Relay characteristics can be applied to the access circuit. This model is embraced by several carriers today and is called IP-enabled Frame Relay. The end user buys a single VC with a CIR to the provider network. Because the VC is not passing through the provider cloud across to the other site, this is also referred to as the point-to-cloud QoS model.

Olympic Service Model

The Olympic service model is a simple model with three classes: gold, silver, and bronze. Gold is of course the highest priority, with silver next and bronze last. Gold service is meant for higher-priority applications such as voice or video; it usually has a distinct bandwidth guarantee and a delay bound. Gold traffic is marked with either a 4 or a 5 in the MPLS EXP and IP ToS fields and is placed in a priority (low-latency) queue. Similarly, silver is marked with a 3 or a 2 in the MPLS EXP and IP ToS fields and usually has a bandwidth bound associated with it but no delay or jitter guarantees. Bronze is either best effort or carries a loose bandwidth bound if it is not best effort.

This model is simple to provision and is well understood. The number of classes is small, so a distinct demarcation line exists between the classes. The offered SLA is proven by either packet counters or other probes that deliver bandwidth and delay information.

Traffic-Engineered Voice Model

This QoS model uses both MPLS TE and DiffServ and was described previously in this chapter in the section "Traffic Engineering and DiffServ." MPLS TE is used only for voice traffic: traffic coming from the voice gateways is mapped into the TE tunnels. The rest of the traffic is sent using DiffServ mechanisms along the shortest paths.

The voice tunnels are set up with bandwidth guarantees on low-delay paths. Traffic is mapped onto the TE tunnel using static routing, policy routing, or CoS values. For example, if all the voice traffic is marked with QoS value 5, all traffic marked 5 can be mapped to the tunnel.
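
A minimal illustration of that CoS-based steering decision follows; the tunnel name and the CoS-to-tunnel table are assumptions, not a specific vendor feature.

    # Hypothetical mapping of CoS values to paths: voice (5) is steered onto
    # the low-delay TE tunnel, and everything else follows the IGP shortest path.
    COS_TO_PATH = {5: "Tunnel-LowDelay"}

    def select_path(cos):
        """Return the outgoing path for a packet based on its CoS marking."""
        return COS_TO_PATH.get(cos, "IGP-shortest-path")

    print(select_path(5))   # Tunnel-LowDelay
    print(select_path(0))   # IGP-shortest-path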

This model can be layered with the Olympic QoS model and is usually provided as a value-add for better handling of voice traffic.

Virtual Leased Line

A virtual leased line carries data link frames across packet networks. As described in the Layer 2 VPN section, the most important characteristic of a virtual leased line is its bandwidth guarantee. By combining AToM functionality with QoS and the ability to explicitly map a Layer 2 circuit to a specific TE tunnel, a bandwidth guarantee can be obtained.

On-Demand QoS

This is a variation on the previously described models. Usually, one of the basic models described in previous sections is used for the provisioning of QoS; in this model, however, the user experiences QoS on demand. The demand and response times vary among providers. In one model, a service provider offers a web portal through which users submit their bandwidth requirements; the provider validates them and then provisions the back end (routers, queues, threshold values, and bandwidth parameters) automatically by adjusting the QoS parameters on the CE and PE devices.

Another technique used by providers is to set up TE tunnels for the requested period of time between sites to carry the traffic for which QoS is requested on demand. After the demand subsides, the TE tunnel is cleared. For example, an enterprise needs a lot of bandwidth between its headquarters site and backup site at night for data backup. This enterprise might request bandwidth between these two sites only at backup time. Either through the portal method or by contract, the enterprise customer informs the provider of its needs. Then, at the scheduled time, the TE tunnel can be initiated to carry the backup traffic.

Another variation of this model is combining IntServ for explicit admission control with an MPLS network. This is discussed in the next section.

MPLS and IntServ

If IntServ flows are used for bandwidth reservation, these flows can be mapped in the core of the MPLS network to either MPLS DiffServ or MPLS TE tunnels. If no QoS configuration is used, the network core has no knowledge of IntServ, and IntServ packets are treated as normal IP packets and label-switched like any other IP packet.

Traffic Flows to MPLS DiffServ Mapping

IntServ flows can be mapped to MPLS DiffServ at the edge. The core MPLS NGN has the aggregate-based DiffServ configuration with MPLS CoS. At the edge router, admission control of the IntServ flows is done against the available DiffServ class queue bandwidth. For instance, on the edge router, if the designated bandwidth for EF traffic is X kbps, all flows mapped to EF (voice flows) are checked against the available EF bandwidth (available EF bandwidth = X minus the total bandwidth of the admitted flows) and queued into the EF queue (the low-latency queue). This allows admission control on a per-flow basis with aggregation of flows into fewer DiffServ class queues. It scales well because the core routers no longer maintain any flow information; they maintain only queues based on aggregate MPLS CoS.
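
A compact sketch of that edge-router check follows, assuming the EF class has been provisioned with a fixed amount of bandwidth. Each new flow is admitted against the remaining class bandwidth, but only the aggregate EF queue exists in the forwarding path; the names and numbers are illustrative.

    class EfClassPool:
        """Per-flow admission at the edge against an aggregate EF class:
        available EF bandwidth = provisioned bandwidth minus the sum of
        the admitted flows (illustrative sketch only)."""

        def __init__(self, provisioned_kbps):
            self.provisioned = provisioned_kbps
            self.admitted = {}               # flow id -> reserved kbps (edge state only)

        def available(self):
            return self.provisioned - sum(self.admitted.values())

        def admit(self, flow_id, kbps):
            if kbps > self.available():
                return False                 # rejection is signaled back to the end point
            self.admitted[flow_id] = kbps    # the flow shares the aggregate EF queue
            return True

    # Usage: a 512-kbps EF pool accepts eight 64-kbps voice flows, then rejects the ninth
    pool = EfClassPool(provisioned_kbps=512)
    results = [pool.admit(f"call-{n}", 64) for n in range(9)]
    print(results)   # eight True values followed by False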

Tunnel-Based Admission Control

In this model, MPLS TE with DiffServ or the DiffServ-aware TE model is used in the core of the network. TE tunnels are set up to carry voice traffic only. Voice gateways or end points use classic RSVP or IntServ for bandwidth reservation of individual flows. When an individual RSVP flow is signaled at the PE, the admission control is done on the tunnel bandwidth and the flow is admitted or rejected based on the available bandwidth of the tunnel rather than the interface bandwidth. Using IntServ gives you the ability to provide feedback about the QoS reservation to the voice end point/gateway. This feedback is now more accurate because it is based on the specific tunnel bandwidth.

TE tunnels can be resized or expanded should more voice calls come in, and new tunnels can be spawned to accommodate more traffic. These techniques of call control are outside the scope of this book; decision makers just need to understand that such techniques can be applied to fine-tune the network to deliver stringent QoS to the users.



