MPLS Traffic Engineering and Guaranteed Bandwidth


Traffic engineering (TE) is one of the oldest arts in networking. It involves calculating and configuring paths through a network to use resources efficiently and provide the best traffic performance possible. RFC 2702 provides a useful definition:

A major goal of Internet Traffic Engineering is to facilitate efficient and reliable network operations while simultaneously optimizing network resource utilization and traffic performance. Traffic Engineering has become an indispensable function in many large Autonomous Systems because of the high cost of network assets and the commercial and competitive nature of the Internet. These factors emphasize the need for maximal operational efficiency.

Going beyond the commercial and competitive nature of the Internet, some concrete, operational problems can be solved with TE. Here, we concentrate on three: link congestion, link protection, and load balancing:

  • Link congestion A well-known issue in IP networks is that interior gateway protocol (IGP) best paths may be overused while alternative paths are either underutilized or not used at all.

  • Link protection If a path or device failure occurs along a primary LSP, routing protocols have to rerun the full (or incremental) shortest-path first (SPF) calculation before traffic can be forwarded again, which can take several seconds.

  • Load balancing Standard IGPs can balance traffic equally only across equal-cost paths. Unequal-cost paths are ignored because the IGP sees them as inferior routes to the destination.

MPLS TE gives network operators a way to solve the problems described in the preceding list. Basically, TE calculates shortest paths through a network, but within a given set of constraints; for this reason, TE is said to use constraint-based routing (CBR). The following list discusses how MPLS TE addresses each of the three problems introduced previously:

  • Link congestion Network administrators can build tunnels across less-used paths and route traffic along them. With less traffic on them, previously congested paths can become decongested.

  • Load balancing An added benefit of the Cisco IOS TE implementation is that there are 16 hash buckets for paths to a single destination. The buckets are allocated according to bandwidth and thus provide a proportional load-balancing capability.

  • Link protection With fast reroute (FRR) link protection, MPLS TE enables you to preconfigure a backup LSP at any point along the tunnel path, which the traffic will use if a failure occurs on the protected link.

MPLS TE supports the notion of priority and preemption. Low-priority tunnels can be removed to free up bandwidth for higher-priority tunnels. Again, TE and FRR are discussed in more detail in Appendix B.
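To make the preceding concepts concrete, the following is a minimal sketch of a head-end TE tunnel configuration in Cisco IOS, combining a bandwidth reservation, a setup/hold priority, and FRR protection. The interface names, addresses, and bandwidth values are hypothetical:

```
! Enable TE globally and on each core-facing interface
mpls traffic-eng tunnels
!
interface Tunnel1
 ip unnumbered Loopback0
 tunnel destination 10.0.0.9
 tunnel mode mpls traffic-eng
 ! Setup and hold priority (0 = highest); lower-priority tunnels can be preempted
 tunnel mpls traffic-eng priority 5 5
 ! Reserve 20 Mbps along the computed path
 tunnel mpls traffic-eng bandwidth 20000
 tunnel mpls traffic-eng path-option 10 dynamic
 ! Request FRR protection for this LSP
 tunnel mpls traffic-eng fast-reroute
!
interface GigabitEthernet0/0
 mpls traffic-eng tunnels
 ! Advertise 100 Mbps of reservable RSVP bandwidth on this link
 ip rsvp bandwidth 100000
```

The priority values illustrate the preemption behavior described above: a tunnel whose setup priority is numerically lower (that is, more important) can displace an established tunnel with a weaker hold priority when link bandwidth is scarce.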

DS-TE and Guaranteed Bandwidth

You might now wonder, given the many advantages of MPLS TE, whether you even need to worry about QoS anymore. TE alleviates the problem of congestion by allowing traffic to be routed across underutilized links. Furthermore, a TE tunnel is created if there are enough resources along its path to meet its bandwidth requirements. However, MPLS TE is blind to class. You cannot build tunnels for different categories of traffic, just for different destinations.

DS-TE was developed (RFCs 3564 and 4124) to make MPLS TE aware of traffic classes. However, as RFC 4124 points out, DS-TE is more than the simple equivalent of DiffServ on MPLS tunnels. Notably, DS-TE supports the concepts of preemption and explicit overbooking. Neither is part of the standard DiffServ model, but both are useful to service providers who want fine-grained control of their bandwidth allocation and who need to provide strict, absolute guarantees for their service-level agreements. DS-TE adds the concept of classes to TE. Simply put, LSRs now advertise pools of bandwidth, and the RSVP and IGP processes are modified to check that adding a new tunnel does not affect tunnels using other pools.

The terminology of RFC 4124 refers to Class-Types (CTs), not pools (which is a Cisco IOS implementation term). A CT is defined as the set of aggregated traffic flows belonging to one or more classes that are governed by the same bandwidth constraints. Link bandwidth is allocated on a CT basis.

DS-TE is implemented with IGP extensions that advertise the bandwidth available per CT on a link. The TE constrained-routing calculation is run on a per-CT basis, and RSVP extensions allow reservation requests to be made per CT as well. DS-TE, just like regular TE, is a control-plane reservation mechanism: you still need queuing and discard mechanisms to enforce the traffic classes on the data plane. Refer to Appendix B for additional details concerning DS-TE and guaranteed bandwidth.
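In the Cisco IOS implementation, the per-CT bandwidth appears as a sub-pool carved out of the global pool, and a tunnel can request its reservation from that sub-pool. The following sketch illustrates the idea; the interface names and bandwidth values are hypothetical:

```
interface GigabitEthernet0/0
 mpls traffic-eng tunnels
 ! Global pool of 100 Mbps, with a 30-Mbps sub-pool for the premium CT
 ip rsvp bandwidth 100000 sub-pool 30000
!
interface Tunnel2
 ip unnumbered Loopback0
 tunnel destination 10.0.0.9
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng priority 3 3
 ! Draw this reservation from the sub-pool rather than the global pool
 tunnel mpls traffic-eng bandwidth sub-pool 10000
 tunnel mpls traffic-eng path-option 10 dynamic
```

Because the sub-pool is advertised separately by the IGP, the constrained-routing calculation can reject this tunnel on links where the premium CT is exhausted even if global-pool bandwidth remains.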

At this point, you might be wondering when to deploy guaranteed bandwidth. For example, can standard DiffServ already support voice? Yes, but it relies on every hop along an end-to-end path enforcing the correct PHB, which is only possible as long as there are enough resources along that path. DiffServ offers no way to guarantee that this will be the case. DS-TE, however, can guarantee resource availability and thus provides for strict service levels without overprovisioning. As the name suggests, DS-TE relies on DiffServ at each hop to enforce the correct PHB required for the chosen bandwidth-allocation model.
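Because DS-TE reserves bandwidth only in the control plane, a data-plane policy must still enforce the PHB at each hop, as noted above. A minimal sketch of such a policy in Cisco IOS MQC, with hypothetical class names and percentages, might be:

```
! Classify voice by the MPLS EXP value set at the network edge
class-map match-any VOICE
 match mpls experimental topmost 5
!
policy-map CORE-QOS
 class VOICE
  ! Low-latency queue sized to match the sub-pool reservation
  priority percent 30
 class class-default
  fair-queue
  random-detect
!
interface GigabitEthernet0/0
 service-policy output CORE-QOS
```

The low-latency queue percentage should be kept consistent with the sub-pool bandwidth advertised on the same link; otherwise the control-plane guarantee and the data-plane behavior diverge.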

Do I Really Need This in an Enterprise Virtual Network?

The decision to use any of the enhanced QoS mechanisms enabled by MPLS TE is independent of the decision to virtualize a network. In other words, deploying TE, FRR, or any other such mechanism should be justified by a business problem, such as guaranteeing application response times or network availability. Many such justifications are perfectly valid. If none of them applies to your network, then, even if MPLS is the right technology for path isolation, there is no reason to use the other services it offers.

The central design issue when it comes to QoS in the context of a VN is how to enforce existing policies with all the new protocols making their appearance in the distribution and core network layers. This problem requires using hierarchical QoS strategies.




Network Virtualization
ISBN: 1587052482
Year: 2006
Pages: 128
