8.2 Traffic engineering with policy and QoS constraints

If a network does not provide adequate delivery, customers will be unhappy and will press for price reductions or compensation, or may switch to another provider. At the most basic level, the provider must provision a number of circuits with adequate capacity and sufficient geographic coverage to meet customer needs. Once the physical topology exists, the provider must tackle the job of controlling traffic over that topology to maximize efficiency. Once we have that control, users can be offered service quality and delivery guarantees. Clearly, we cannot build a successful QoS model without first deploying mechanisms to engineer traffic so that we can shape and direct individual traffic flows at will.

8.2.1 Traffic engineering

Traffic engineering is the process of mapping and managing traffic flows over a physical network infrastructure, with the aim of optimizing the use of that infrastructure. To date, this mapping has been handled in a fairly crude manner, with the emphasis placed on connectivity and a best-effort packet delivery service. Traffic flows are still largely controlled by unicast Interior Gateway Protocols (IGPs) and a mixture of standard and proprietary queuing features implemented in access devices. Planners who have incorporated an ATM core into their designs have access to better traffic engineering capabilities than legacy backbones offer, but this does not solve the end-to-end QoS problem in large heterogeneous internetworks [1].

Measuring traffic on backbones

If you are designing an entirely new backbone, the traffic data available will be a mixture of theoretical projections and the results of any empirical testing [1]. Traffic engineering can, therefore, start even before the network is installed, through a mixture of good design, simulation modeling, and pilot testing. If the backbone is already installed, it is important to baseline the traffic dynamics before attempting to engineer traffic flows, starting by establishing how much traffic passes through the backbone ingress and egress points. This is vital for estimating future growth and planning capacity expansion. Traffic statistics provide the raw data for traffic engineering, enabling a designer to plan and optimize circuit provisioning.

Building a traffic matrix from an ATM core is quite straightforward, since ATM switches provide detailed per-PVC statistics. When analyzed over time, these statistics give the designer good visibility into which circuits are underutilized and which are experiencing growth. Each PVC can be provisioned to support specific traffic engineering requirements, and if a path begins to suffer frequent congestion problems, it can be reconfigured accordingly. ATM is not without its drawbacks, however (such as its limitations in multicast support and the continuing debate over fixed cells versus frames for data traffic). Traffic engineering solutions cannot rely on ATM; they must be independent of the underlying infrastructure.
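As a rough illustration of this process, the following Python sketch aggregates per-PVC byte counters into a simple traffic matrix and flags heavily loaded circuits. The switch names, polling interval, capacity figure, and byte counts are all illustrative assumptions, not real vendor output.

import collections

INTERVAL_SECS = 300            # hypothetical 5-minute polling interval
PVC_CAPACITY_BPS = 2_000_000   # hypothetical 2-Mbps provisioned rate per PVC

# Illustrative per-PVC byte counts for one interval: (ingress, egress, bytes)
samples = [
    ("NYC", "CHI", 6.5e7),
    ("NYC", "SFO", 1.2e7),
    ("CHI", "SFO", 3.0e7),
]

# Sum the counters into an ingress/egress matrix.
matrix = collections.defaultdict(float)
for ingress, egress, nbytes in samples:
    matrix[(ingress, egress)] += nbytes

# Convert byte counts to utilization and flag circuits worth reviewing.
for (ingress, egress), nbytes in sorted(matrix.items()):
    util = (nbytes * 8 / INTERVAL_SECS) / PVC_CAPACITY_BPS
    flag = "consider re-provisioning" if util > 0.8 else "ok"
    print(f"{ingress}->{egress}: {util:.0%} utilized ({flag})")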

Building a traffic matrix with a routed core is more difficult, due to a lack of accuracy and granularity. For example, studies show that as much as 99 percent of the routing information in the current Internet is inaccurate [17]. Furthermore, traffic statistics maintained on backbone trunks do not typically differentiate traffic that is entering or exiting a PoP from traffic that is transiting that PoP. You could improve matters by sampling traffic over time (say, by capturing 1 out of every 100 packets), with the intention of capturing a statistically significant portion, but in practice this may be difficult to accomplish on high-speed trunks (running at OC-48 or even higher rates).
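To make the sampling idea concrete, here is a minimal Python sketch of 1-in-100 packet sampling. The workload is synthetic, and the deterministic pick-every-Nth scheme is an illustrative simplification of what a real collector would do.

import random

SAMPLE_RATE = 100  # capture 1 out of every 100 packets, as in the text

def estimate_trunk_volume(packet_sizes):
    """Count bytes from a 1-in-N packet sample and scale the result up.
    packet_sizes is an iterable of packet lengths in bytes."""
    sampled_bytes = sum(size for i, size in enumerate(packet_sizes)
                        if i % SAMPLE_RATE == 0)
    return sampled_bytes * SAMPLE_RATE  # estimate of the total volume

# Illustrative workload: one million packets of random size.
packets = [random.randint(64, 1500) for _ in range(1_000_000)]
print("estimated bytes:", estimate_trunk_volume(packets))
print("actual bytes:   ", sum(packets))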

Network planners, therefore, require traffic engineering capabilities that are similar to, if not better than, those provided by ATM, but with the flexibility of a routed overlay network. Ideally, any emerging solution should combine the advantages of both while eliminating the disadvantages. This need has led to the development of a relatively new technology called MultiProtocol Label Switching (MPLS). MPLS is proposed as the solution to underpin traffic engineering in large service-provider networks and is described later in this section.

Problems with least cost routing

Interior Gateway Protocols (such as OSPF) are essentially opportunistic. They rely on shortest-path forwarding techniques, creating a single least-cost path per source-destination network pair; paths are typically optimized for a single arbitrary metric, administrative weight, or hop count. While this approach has served the industry well for many years, it does not optimize resources, leading to scalability problems, unevenly distributed traffic, congestion, and gross bandwidth waste. Simple routing metric schemes do not provide sufficient granularity or flexibility for optimizing individual traffic flows over large, meshed, heterogeneous internetworks. Routers and circuits that lie on the shortest path often become congested, while resources on longer (but equally viable) paths remain underutilized. To combat this problem, some IGPs provide a form of load splitting over multiple paths (e.g., OSPF Equal-Cost Multi-Path [18], IS-IS [19], and EIGRP's load sharing over mixed-speed circuits). These features are, however, still inadequate for load optimization on backbone networks and add complexity (e.g., load splitting may result in packets delivered out of sequence, and the designer is often required to engineer metrics to force paths to be used). Aside from these very basic traffic engineering problems, conventional routing protocols cannot hope to support the fundamental requirement of a QoS infrastructure: differentiated flow handling.
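This behavior is easy to demonstrate. The following minimal Python sketch computes a conventional single-metric least-cost path over a small hypothetical topology; because exactly one path is selected per source-destination pair, every A-to-D flow rides the direct link while the longer (but viable) A-B-C-D path sits idle.

import heapq

def shortest_path(graph, src, dst):
    """Conventional IGP-style least-cost path over a single metric.
    graph: {node: {neighbor: cost}}. Returns (cost, path)."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            return cost, path
        for nbr, weight in graph[node].items():
            if nbr not in seen:
                heapq.heappush(heap, (cost + weight, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical topology: the direct A-D link always wins on cost, so it
# attracts all A->D traffic while A-B-C-D remains underutilized.
graph = {
    "A": {"B": 1, "D": 2},
    "B": {"A": 1, "C": 1},
    "C": {"B": 1, "D": 1},
    "D": {"A": 2, "C": 1},
}
print(shortest_path(graph, "A", "D"))  # -> (2, ['A', 'D'])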

The way forward

We have arrived at a point where this rigid best-effort approach is no longer suitable for business needs. The boom-and-bust nature of the resulting traffic dynamics has meant that many service providers have mitigated congestion by simply overprovisioning bandwidth. On a large network with expensive high-speed trunk circuits this typically results in huge cost and bandwidth inefficiencies. We need a new way of engineering traffic to maximize the bandwidth available by smoothing out uneven network utilization dynamically. The offered load must be handled in a more deterministic manner, and a number of well-defined service profiles should be available to meet the requirements of different applications (e.g., interactive, batch, video, voice, etc.). Consequently, traffic engineering is currently one of the hottest topics for service providers and standards bodies such as the IETF. Traffic engineering offers the flexibility to shift traffic flows away from the shortest path onto longer but less congested paths. A number of important techniques have emerged in recent years to assist in this process, as follows:

  • QoS-Based Routing (QBR) improves on traditional routing by recognizing and responding dynamically to multiple service-related constraints rather than simple metrics. QBR classifies flows and then directs traffic along forwarding paths that can meet QoS requirements.

  • Policy-Based Routing (PBR) improves on traditional routing by assigning forwarding paths to flows based on administrative policy rather than simple metrics. This offers administrators direct control over the forwarding paths selected.

  • Constraint-Based Routing (CBR) is a combination of QBR and PBR. CBR automates the traffic engineering process, helps to avoid congestion, and provides graceful performance degradation where congestion is unavoidable. The problems introduced by optimizing multiple constraints are, however, daunting.

  • MultiProtocol Label Switching (MPLS) is a forwarding scheme that combines high-speed switching techniques with traditional routing intelligence. Labels are inserted into traffic flows to enable intermediate devices to switch traffic quickly without having to consult routing tables. MPLS provides the granularity and performance required to handle flow-based traffic. MPLS adds scalability and is proposed as a technology that underpins QBR, PBR, and CBR in wide area backbones.

We will now discuss each briefly in turn.

8.2.2 QoS-Based Routing (QBR)

QoS-Based Routing (QBR) is a forwarding mechanism used to find paths that have a high probability of meeting the requested service quality for flows (or aggregate flows). QBR does not include mechanisms for reserving resources; consequently, it is generally used in conjunction with a resource reservation function (such as RSVP). QoS-based routing is defined in [2] as a routing mechanism under which paths for flows are determined based on some knowledge of resource availability in the network, as well as the QoS requirements of the flows.

Simple forms of QoS-based routing, based on IP's Type of Service (ToS) field, have been proposed in the past. For example, with OSPF a different shortest-path tree can be computed for each of the eight ToS values in the IP header. Such mechanisms can be used to select specially provisioned paths but cannot guarantee that resources will not be overbooked along the path. As long as strict resource management and control are not required, mechanisms such as ToS-based routing are useful for separating whole classes of traffic over multiple routes (e.g., this might work well with the emerging Differentiated Services initiative). The downside of this approach is that it consumes significant routing resources (e.g., the Dijkstra algorithm must be run once for every shortest-path tree).

A number of different QoS-based routing models have been proposed in recent years (for both unicast and multicast routing), each designed to address different types of problems. These schemes often make different assumptions about the state of the network and rarely work together. The IETF has developed a common framework [2], which can accommodate different kinds of algorithms. This framework offers a hierarchical model with two levels: intradomain QoS-based routing and interdomain QoS-based routing. This model is compatible with the routing hierarchy discussed previously, which has the concept of Autonomous System (AS) as the highest-order entity. Under QoS-based routing, the path assigned to a flow would be determined based on the QoS requirements for that flow, as well as knowledge of resource availability in the network. The main objectives of QoS-based routing are as follows:

  • Dynamic determination of feasible paths—QoS-based routing should identify a path that has a good chance of accommodating the QoS requirements for a given flow. Feasible path selection may be subject to policy constraints (such as path cost, provider selection, etc.).

  • Optimization of resource use—A network state-dependent QoS-based routing scheme can assist in the efficient utilization of network resources by improving total network throughput. Such a routing scheme can be the basis for efficient network engineering.

  • Graceful performance degradation—State-dependent routing can compensate for transient inadequacies in network engineering (e.g., localized congestion conditions), providing better throughput and a more graceful performance degradation as compared to a state-insensitive routing scheme.

In a large heterogeneous multivendor internetwork such as the Internet, however, QBR raises a number of serious issues, because of the lack of integration or centralized administration and because of the scalability and performance problems that arise [2]. Many of these problems are shared with Constraint-Based Routing (CBR). For the interested reader, [20] provides comprehensive coverage of the issues involved in QoS routing, including formal analysis of key algorithms and heuristics used for QoS routing solutions.

8.2.3 Policy-Based Routing (PBR)

Policy-Based Routing (PBR) provides a mechanism for expressing rules and controlling packet forwarding based on policies defined by the network administrator. This is necessary to regulate network access; otherwise, all users could simply demand premium service quality. It is also necessary to take account of nontechnical issues, such as political, security, management, or budgetary concerns, or simply matters of personal preference. PBR provides a powerful and flexible routing model that complements legacy routing protocols. With PBR the routing decision is based not simply on topology knowledge and metrics but on administrative policies. For example, the following policies could be defined:

Policy Rule 1:

Prohibit all e-mail traffic from using the international link WME-10 for security reasons, even if the link exceeds bandwidth and delay requirements.

Policy Rule 2:

R&D traffic is not permitted to transit the HQ backbone network.

Policy Rule 3:

Interactive traffic originating from PROD_LAN will be routed via next-hop router 181.4.3.1, while all other traffic will be routed via next-hop router 181.4.3.2.

PBR enables network planners to implement policies that selectively force packets to take different paths through the network, and it provides mechanisms to mark packets so that certain types of traffic receive preferential service when used in combination with scheduling techniques. For example, ISPs could use PBR to route traffic originating from different user groups or sites through different Internet connections. Enterprise network administrators could use PBR to distribute interactive and batch traffic over lower-cost wide area circuits, leaving mission- or business-critical traffic free to use high-bandwidth switched pipes.

PBR improves over legacy routing techniques by enabling packet flows to be routed using a criterion other than destination address. This means that traffic can be better distributed over the available links rather than taking the shortest path, leading to better overall utilization and lowering the probability of congesting the most attractive circuits. For example, policies could assign specific flows to forwarding paths based on criteria such as the following:

  • End system, network, or subnet source address

  • Application type (e.g., FTP, TFTP, Telnet, SMTP, DNS)

  • Protocol type (e.g., IP, IPX, AppleTalk, SNA)

  • Packet size (e.g., packets greater than 1,000 bytes could be routed differently from small packets)

Forwarding decisions may be specified quite tightly or with a degree of flexibility. For example, a particular flow could be directed to exit a router via a specific physical interface or list of interfaces. Alternatively, the flow could be directed to use a list of default routes, specific next-hop IP addresses, or a list of next-hop addresses. By offering multiple choices the availability of forwarding paths is improved.
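As a simple illustration of this kind of rule matching, the Python sketch below implements a first-match policy table keyed on source prefix, application port, and packet size. The rules themselves and the third next-hop address (181.4.3.3) are illustrative assumptions added for the example; the first two next hops echo Policy Rule 3 above.

import ipaddress

POLICY_RULES = [
    # (source prefix, dst port, min packet size, next hop) -- illustrative
    ("181.4.0.0/16", 23,   None, "181.4.3.1"),  # Telnet -> interactive path
    ("0.0.0.0/0",    None, 1000, "181.4.3.3"),  # large packets -> bulk path
    ("0.0.0.0/0",    None, None, "181.4.3.2"),  # default next hop
]

def select_next_hop(src_ip, dst_port, size):
    """Return the next hop for the first rule the packet matches."""
    for prefix, port, min_size, next_hop in POLICY_RULES:
        if ipaddress.ip_address(src_ip) not in ipaddress.ip_network(prefix):
            continue
        if port is not None and dst_port != port:
            continue
        if min_size is not None and size < min_size:
            continue
        return next_hop
    return None

print(select_next_hop("181.4.9.7", 23, 120))   # -> 181.4.3.1
print(select_next_hop("181.4.9.7", 80, 1400))  # -> 181.4.3.3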

Currently, policy-based routing implementations are generally statically configured, and the features available are largely vendor-specific (since router/switch vendors implement scheduling and congestion control mechanisms in different ways on their platforms). Flows are typically classified at edge routers using filters or Access Control Lists (ACLs). Once classified, packets can be marked before being injected into the backbone by setting values in the IP precedence/ToS (IPv4) or flow label (IPv6) fields (see Figure 8.3). Different classes of service can be assigned to these tagged packets as they traverse the backbone by configuring resources (such as scheduling or congestion control facilities) to meet specific priority or delay requirements.

Note that PBR is effective but requires that all routers behave in a consistent manner in order to achieve service guarantees. In a multivendor network, routing and switching nodes may have dissimilar capabilities and be tuned differently to achieve similar behavior.

8.2.4 Constraint-Based Routing (CBR)

Constraint-Based Routing (CBR) evolved from earlier work on QoS-based routing but has a much broader scope. CBR paths are calculated subject to multiple constraints, including both QoS constraints and policy constraints. CBR aims to automate path selection using feedback from the network to meet flow requirements, but within overall policy control; it can be considered a superset of both QBR and PBR. CBR takes a sophisticated approach to traffic engineering, attempting to identify viable paths that meet the QoS requirements of a particular flow (or flow aggregate) based on multiple constraints [2]. Resolving forwarding paths at the flow level results in better overall circuit utilization and lower mean delay. Examples of constraints include availability, monetary cost, hop count, reliability, delay, jitter, and Class of Service (CoS). CBR goes further by also considering the kind of policy constraints introduced in section 8.2.3. Path selection is influenced by dynamic information, such as flow characteristics, resource availability, resource utilization, and topological status, as well as any static or dynamic policies defined by the network administrator.

Implementing CBR in a real-time network environment is far from easy, and there are several major issues to resolve, including the following:

  • Routing granularity—the level of detail used to calculate routes.

  • Maintaining topology state data—how to disseminate this additional dynamic state data quickly, without introducing significant overheads on circuit or processing entities.

  • Topological stability—how to maintain stability if CBR is constantly altering the traffic dynamics.

  • Optimizing the topology—with contradictory constraints the routing problem becomes difficult.

In order to perform CBR, a router or switch needs regular feedback on the state of a number of metrics concerning network utilization and availability. There must be a mechanism to distribute this additional state information. Once accurate information is collated, the router must compute routes based on additional constraint information.

Routing granularity

The granularity used to calculate routes has a fundamental influence on how efficiently CBR utilizes the network. By granularity, we mean the amount of detail used as input to the routing calculation. Conventional routing is typically based on destination address only; with CBR, however, we may also be interested in source and destination addresses, class information, trunk capacity and utilization, delay, or flow data. As granularity increases, so does the flexibility of the route calculation, leading to more efficient resource utilization and better overall network stability (attractive high-capacity links are less likely to be swamped with traffic if the calculation is sensitive to other factors, such as delay and utilization). However, routing with multiple constraints, especially where constraints are contradictory, is difficult.

Maintaining topology state information and topological stability

As indicated, most current shortest-path-based IGPs are not adaptive. If the shortest path is congested, there is generally no feedback to reflect this in the link metric (i.e., a congested link could be considered temporarily unavailable; however, its metric still makes it attractive, and there is only one path to choose from for any (s, d) pair). Circuit utilization varies with the state of active traffic flows, and to select optimum forwarding paths that meet QoS requirements, a CBR-enabled router must be aware of the state of available resources at the time the forwarding decision is made. This knowledge could be disseminated via a special signaling protocol or via extensions to existing IGPs. For example, bandwidth information could be distributed via extended OSPF (QOSPF) or IS-IS link-state advertisements [21, 22].

The spanning trees produced by CBR are likely to be much more dynamic than those produced by conventional routing protocols (the spanning-tree topology created by conventional routing rarely changes once the network is stable). One major challenge here is the dynamic nature of bandwidth availability, since this can introduce stability problems for adaptive routing schemes. If the response to such dynamic feedback is overly sensitive, routing instability may result, unless mechanisms are put in place to suppress routing oscillations. In-band adaptive routing is particularly vulnerable to instability, since it may actually aggravate existing problems by flooding state information too frequently over the very links that are overutilized. If network utilization changes frequently, the router spends too much time recomputing routing tables and can become unresponsive. A trade-off must, therefore, be made between the need to maintain accurate state information and the need to minimize flooding and dampen topology changes. In practice it may be better to make routing decisions from slightly imprecise state information than to attempt to be overly accurate. Typically a hold-down timer is used to restrict the frequency of state advertisements, making the system less reactive. Reducing the computational complexity of the routers also helps to improve stability.
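As a rough sketch of the damping logic just described, the following Python class suppresses a new bandwidth advertisement unless the change is significant and a hold-down timer has expired. The 10 percent threshold and 30-second timer are illustrative assumptions.

import time

class BandwidthAdvertiser:
    """Damping sketch: re-advertise available bandwidth only when the
    change is significant AND the hold-down timer has expired."""

    def __init__(self, hold_down_secs=30, change_threshold=0.10):
        self.hold_down_secs = hold_down_secs
        self.change_threshold = change_threshold  # 10% relative change
        self.last_advertised = None
        self.last_time = 0.0

    def should_advertise(self, available_bw):
        now = time.monotonic()
        if self.last_advertised is None:
            significant = True  # nothing advertised yet
        else:
            delta = abs(available_bw - self.last_advertised)
            significant = delta / max(self.last_advertised, 1) >= self.change_threshold
        if significant and (now - self.last_time) >= self.hold_down_secs:
            self.last_advertised = available_bw
            self.last_time = now
            return True   # flood a new link-state advertisement
        return False      # suppress: damp routing oscillation

adv = BandwidthAdvertiser()
print(adv.should_advertise(100_000_000))  # True: first advertisement
print(adv.should_advertise(98_000_000))   # False: change is below 10%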

Routing table structure and size

The routing table structure and size depend directly on routing granularity and path metrics. The computation and storage overheads of CBR are, therefore, likely to be considerable when compared with conventional routing. Implementations may need to minimize the memory and CPU overheads of CBR. In practice, it may be advisable to run the IGP as normal for best-effort traffic and run CBR on demand to compute routes for new flows. This essentially trades computation time for a smaller storage requirement. Alternatively, coarse routing granularity could be used or techniques such as hop quantization (i.e., dividing all hop-count values into a few classes to reduce the number of columns in the routing table).

Optimizing routes

The routing algorithms used in CBR, and the complexity of such algorithms, depend on the type and number of metrics included in the route calculation [20]. We know that some of these constraints may be contradictory (e.g., cost versus bandwidth, delay versus throughput). It turns out that bandwidth and hop count are generally more useful constraints than delay and jitter: few applications are so sensitive that they cannot tolerate an occasional violation of a delay or jitter constraint, and since delay and jitter are largely determined by the allocated bandwidth and the hop count of the flow path, these constraints can be mapped to bandwidth and hop-count constraints if required. Another factor is that many real-time applications demand a certain amount of bandwidth. The hop-count metric of a route is also important, because the more hops a flow traverses, the more resources it consumes. For example, a 1-Mbps flow that traverses two hops consumes twice as many resources as one that traverses a single hop. Note that calculating optimal routes subject to constraints of two or more additive and/or multiplicative metrics belongs to a class of problems referred to as NP-complete and must be tackled using heuristic techniques.
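A common heuristic that follows from this observation is to prune every link that cannot satisfy the bandwidth constraint and then run an ordinary minimum-hop calculation over what remains. The following Python sketch illustrates the idea over a hypothetical topology; the link bandwidths and constraint value are illustrative.

from collections import deque

def feasible_min_hop_path(links, src, dst, min_bw):
    """CBR pruning heuristic sketch: drop links below the bandwidth
    constraint, then take the minimum hop-count path over what is left.
    links: {node: {neighbor: available_bandwidth_mbps}} (illustrative)."""
    pruned = {n: {m: bw for m, bw in nbrs.items() if bw >= min_bw}
              for n, nbrs in links.items()}
    queue = deque([[src]])        # breadth-first search gives min hops
    visited = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in pruned[path[-1]]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(path + [nbr])
    return None  # no feasible path under this constraint

links = {
    "A": {"B": 10, "D": 2},   # A-D is short but has only 2 Mbps free
    "B": {"A": 10, "C": 10},
    "C": {"B": 10, "D": 10},
    "D": {"A": 2, "C": 10},
}
print(feasible_min_hop_path(links, "A", "D", min_bw=5))  # -> A-B-C-D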

In CBR, routes can be precomputed for each traffic class or computed on demand (triggered by the receipt of the QoS request of a flow). Either way, a router will have to compute its routing table more frequently with CBR than with conventional dynamic routing, since routing table computation can be triggered by significant bandwidth changes in addition to topology changes. This additional complexity means that the computation overhead with CBR can be very high. Practical approaches to reducing this overhead include using a timer to reduce the computation frequency, choosing bandwidth and hop count as constraints, and using administrative policy to prune unsuitable links before calculating the routing table (e.g., if a flow has a maximum delay requirement, satellite links may be pruned before the routing table computation). With practical implementations of CBR there is also a trade-off between resource conservation and load balancing. A CBR scheme could choose one of the following options as a viable path for a flow:

  • Shortest-distance path—This approach is basically the same as dynamic routing. It emphasizes preserving network resources by choosing the shortest paths.

  • Widest-shortest path—This approach makes a trade-off between the two extremes, favoring shortest paths when network load is heavy and wider paths when network load is moderate. It finds paths with minimum hop count and, if there are multiple such paths, selects the one with the largest available bandwidth.

  • Shortest-widest path—This approach emphasizes load balancing by choosing the widest paths. It finds a path with the largest available bandwidth and, if there are multiple such paths, selects the one with the minimum hop count.

The last two options consume more resources, which is inefficient when network utilization is high; once again a trade-off must be made between resource conservation and load balancing. Simulations have shown that the first approach consistently outperforms the other two for best-effort traffic, regardless of the network topology and traffic dynamics.
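The difference between the last two selection rules comes down to the order in which hop count and available bandwidth are compared, as this minimal Python sketch shows (the candidate paths are illustrative).

# Each candidate path is (hop_count, available_bandwidth_mbps).
candidates = [(2, 40), (2, 90), (4, 155)]

def widest_shortest(paths):
    """Minimum hop count first; break ties on largest bandwidth."""
    return min(paths, key=lambda p: (p[0], -p[1]))

def shortest_widest(paths):
    """Largest bandwidth first; break ties on minimum hop count."""
    return max(paths, key=lambda p: (p[1], -p[0]))

print(widest_shortest(candidates))  # -> (2, 90): conserves resources
print(shortest_widest(candidates))  # -> (4, 155): balances load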

In practice, CBR must be implemented and deployed with care; otherwise, the cost of instability and increased complexity may outweigh the benefits. Since CBR is a superset of conventional dynamic routing, it is possible that CBR may replace dynamic routing in the future, as processing and memory resources in routers and switches continue to improve. A good example of a new intradomain CBR protocol, based on an existing dynamic routing protocol, is QOSPF. For the interested reader, [20] provides comprehensive coverage of the issues involved in constraint-based routing, including formal analysis of key algorithms and heuristics used for routing unicast and multicast traffic under QoS constraints.

8.2.5 MultiProtocol Label Switching (MPLS)

MultiProtocol Label Switching (MPLS) is an emerging technology aimed at delivering improved IP traffic engineering tools, enabling service providers to more easily manage, monitor, and meet various SLAs across their backbones. MPLS is basically a forwarding scheme that combines label swapping with Network Layer routing, having evolved from Cisco's tag switching. MPLS uses the intelligence of routers and the speed of switches to provide a mechanism for mapping IP packets onto reliable circuit-oriented protocols such as ATM and Frame Relay, and it is designed to offer scalability and efficiency. Because the scheme is independent of the protocols above and below it, it is called MultiProtocol Label Switching. MPLS is currently an Internet draft [23], and the initial effort of the MPLS working group is focused on developing a label-swapping standard for Layer 3 switching with IPv4 and IPv6; the core technology will subsequently be expanded to incorporate multiple Network Layer protocols. MPLS is not confined to any specific Data Link Layer technology; it can work with any medium over which Network Layer packets can be passed between Network Layer entities. The working group started with Cisco's tag switching and IBM's Aggregate Route-Based IP Switching (ARIS). The issue of ATM interworking was not solved by tag switching, because cell and PDU interleaving occurs when the tag is identified with the VCI.

MPLS uses Layer 3 routing information to build forwarding tables and allocate resources, and then uses Layer 2 (ATM, Frame Relay, etc.) to forward the traffic over the appropriate path. A special MPLS label, prepended to the IP packet, is associated with a specific entry in the forwarding table and specifies the next hop. Flows that have common routing and service requirements typically take the same path through the network; the main benefit is a consistent level of service for higher-priority flows. A router that supports MPLS is called a Label Switch Router (LSR); when forwarding, an LSR examines only the labels in packets. MPLS requires the deployment of LSRs in the network, which obviously affects how quickly the technology can be rolled out, although implementations are being deployed now. A Label Distribution Protocol (LDP) is required to distribute labels in order to set up Label-Switched Paths (LSPs). A definition of QoS is incorporated into the MPLS header, which contains a 20-bit label, a 3-bit class-of-service field, a 1-bit stack indicator, and an 8-bit TTL field.

MPLS is targeted initially for deployment on backbones. LSRs in the core or in the provider's PoP will interwork with the CPE. For example, a customer could be running CBQ to classify traffic and DiffServ to mark the IP ToS field, so that the provider network understands what service is required (as agreed upon in the SLA). The network edge devices then map the DiffServ/ToS specification into the class-of-service field of the MPLS header, so that the service specification is preserved on an end-to-end basis as packets traverse the core.
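As a rough sketch of this edge mapping, the following Python fragment copies the IP precedence bits into the 3-bit class-of-service field of a 32-bit MPLS shim header (20-bit label, 3-bit CoS, 1-bit stack indicator, 8-bit TTL). The precedence-to-CoS mapping and the field values used are illustrative assumptions.

def tos_to_mpls_cos(tos_byte):
    """Copy IP precedence (the top three bits of ToS) into MPLS CoS."""
    return (tos_byte >> 5) & 0x7

def build_mpls_shim(label, cos, ttl, bottom_of_stack=True):
    """Pack a 32-bit MPLS shim: 20-bit label, 3-bit CoS, 1-bit S, 8-bit TTL."""
    return ((label & 0xFFFFF) << 12 | (cos & 0x7) << 9
            | (1 if bottom_of_stack else 0) << 8 | (ttl & 0xFF))

tos = 0xA0  # precedence 5, e.g., a premium class agreed in the SLA
shim = build_mpls_shim(label=210, cos=tos_to_mpls_cos(tos), ttl=64)
print(f"{shim:032b}")  # label | CoS | S | TTL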

Operation

The key feature provided by MPLS is the ability to provide Label-Switched Paths (LSPs), similar to Permanent Virtual Circuits (PVCs) in ATM and Frame Relay networks. An LSP is created by the concatenation of one or more label-switched hops, enabling a packet to be forwarded from one LSR to another across the backbone network. An LSR that receives an IP packet can choose to forward it along an LSP. It does this by encapsulating the packet inside an MPLS header and then forwarding it to another LSR. The labeled packet will be forwarded along the LSP by each LSR in turn until it reaches the end of the LSP, where the MPLS header will be removed and the packet will be forwarded based on Layer 3 information (such as the IP destination address). The key point here is that the path chosen for the LSP is not necessarily the IGP's shortest path, as illustrated in Figure 8.4.

Figure 8.4: Router backbone, with some routers supporting MPLS.

The forwarding process of each LSR is based on the concept of label swapping. The labels are bound to IP prefixes and are link-local. When a packet containing a label arrives at an LSR, the LSR examines the label and uses it as an index into its forwarding table. Each entry in the forwarding table contains an inbound label, which is mapped to a set of forwarding information that is applied to all packets that carry the same inbound label (see Figure 8.5).

Inbound label     Outbound interface     Outbound label
20                5                      210
256               4                      650
50                1                      32
8                 6                      760
37                3                      10

Figure 8.5: Label-switched forwarding table.

If an LSR receives a packet on Interface 2 with a label set to 50, the LSR uses the outbound data to forward the frame to Interface 1, with a new label of 32.
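The lookup itself is trivial, as the following Python sketch of the Figure 8.5 table shows. A real LSR would key the table per inbound interface, since labels are link-local.

# Forwarding table from Figure 8.5:
# inbound label -> (outbound interface, outbound label)
FORWARDING_TABLE = {
    20:  (5, 210),
    256: (4, 650),
    50:  (1, 32),
    8:   (6, 760),
    37:  (3, 10),
}

def label_swap(inbound_label):
    """Exact-match lookup: swap the label and pick the outbound interface."""
    out_if, out_label = FORWARDING_TABLE[inbound_label]
    return out_if, out_label

print(label_swap(50))  # -> (1, 32), as in the worked example above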

Traffic engineering with MPLS

Traffic enters and exits a backbone network from the network's border routers. In the context of traffic engineering, the border routers are called the ingress and egress points to and from the network. Traffic engineering is accomplished with MPLS by establishing LSPs between ingress points and egress points. We have already defined traffic engineering as the mapping of traffic onto a physical topology. This means that the real traffic engineering task for MPLS is determining the path for LSPs. There are several ways to route an LSP, including the following:

  • Calculate the full path for the LSP offline and statically configure all LSRs in the LSP with the necessary forwarding state. This is analogous to how some ISPs are currently using ATM.

  • Calculate the full path for the LSP offline and statically configure the head-end LSR with the full path. The head-end LSR then uses the Resource Reservation Protocol (RSVP) as a dynamic signaling protocol to install forwarding state in each LSR. Note that RSVP is being used only to install the forwarding state; it does not reserve bandwidth or provide any assurance of minimal delay or jitter. Juniper Networks engineers were involved in specifying the new label object, explicit route object, and record route object for RSVP that allow it to operate as an LSP setup protocol.

  • Calculate a partial path for the LSP offline and statically configure the head-end LSR with a subset of the LSRs in the path. The partial path that is specified can include any combination of strict and loose routes. For example, imagine that an ISP has a topology that includes two east-west paths across the country: one in the north through Chicago and one in the south through Dallas. Now imagine that the ISP wants to establish an LSP between a router in New York and a router in San Francisco. The ISP could configure the partial path for the LSP to include a single loose-routed hop of an LSR in Dallas, and the result would be that the LSP will be routed along the southern path. The head-end LSR uses RSVP to install the forwarding state along the LSP.

  • Configure the head-end LSR with just the identification of the tail-end LSR. In this case, normal IP routing is used to determine the path of the LSP. This configuration does not provide any value in terms of traffic engineering, but it is easy to set up and may be useful in situations where services such as Virtual Private Networks (VPNs) are needed.

In all these cases, any number of LSPs can be specified as backups for the primary LSP. If a circuit on which the primary LSP is routed fails, the head-end LSR will notice because it will stop hearing RSVP messages from the remote end. If this happens, the head-end LSR can call on RSVP to create forwarding state for one of the backup LSPs.

Note that some vendors are extending their MPLS implementation to support CBR so that the network itself can participate in traffic engineering. This enables the head-end LSR to calculate the entire LSP based on certain constraints and then initiate signaling across the network. A key feature required to extend MPLS for CBR is bandwidth reservation. If we provide LSRs with the ability to request bandwidth, respond to such requests, and advertise the state of their bandwidth allocation, then LSP setup could be performed by negotiating with the network (and could take into account bandwidth on a given trunk already committed to flows between specific nodes). The advertisement of available and committed bandwidths could be provided through IS-IS or OSPF type-length-value attribute extensions.

Performance considerations

One misconception about MPLS is that it significantly enhances forwarding performance in routers. We know that IP forwarding is based on a longest-match lookup, while MPLS is based on an exact-match lookup (the same kind of lookup as the VPI/VCI lookup in ATM). Traditionally, fixed-length lookups in hardware are considerably faster than longest-match lookups in software. However, recent advances in silicon technology allow ASIC-based route lookup engines to run just as quickly, forwarding packets at line rates. The real benefit of MPLS is the increased traffic engineering capabilities that it offers.
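The contrast between the two lookup types can be sketched in a few lines of Python. The route and label tables are illustrative, and the linear scan stands in for the tree or TCAM structures real routers use.

import ipaddress

# Longest-prefix match, as in conventional IP forwarding (illustrative routes).
ROUTES = {
    "10.0.0.0/8":  "if0",
    "10.1.0.0/16": "if1",
    "10.1.2.0/24": "if2",
}

def longest_match(dst_ip):
    """Scan all prefixes and keep the most specific one that matches."""
    best, best_len = None, -1
    addr = ipaddress.ip_address(dst_ip)
    for prefix, iface in ROUTES.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best, best_len = iface, net.prefixlen
    return best

# Exact-match label lookup, as in MPLS (or an ATM VPI/VCI lookup).
LABELS = {50: "if1", 20: "if5"}

def exact_match(label):
    return LABELS.get(label)  # a single constant-time lookup

print(longest_match("10.1.2.3"))  # -> if2 (the /24 wins)
print(exact_match(50))            # -> if1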

MPLS benefits

The following should be considered when a possible MPLS deployment is discussed:

  • Growth of the Internet is exceeding the Layer 3 processing capacity of traditional routers.

  • Enables routing to leverage the price, performance, and maturity of Layer 2 switching.

  • ATM switches can be augmented with IP routing.

  • MPLS forwarding is independent of current and future enhancements to Network Layer protocols.

  • Works over any Layer 2 datalink technology (e.g., ATM, Frame Relay, Ethernet, SONET, etc.).

  • Offers more efficient traffic engineering capabilities.

  • Uses label stacks rather than IP-over-IP encapsulation for tunnels.

  • Offers explicit routes.

  • ISPs need to support and deliver special services.

Previously, it was suggested that any emerging solution providing traffic engineering across the optical Internet must combine the advantages of ATM and routed cores while eliminating the disadvantages. Let's conclude this section by examining how well MPLS meets this challenge, as follows:

  • An MPLS core fully supports traffic engineering via LSP configuration. This permits the ISP to precisely distribute traffic across all of the links so the trunks are evenly used.

  • In an MPLS core, the per LSP statistics reported by the LSRs provide exactly the type of information required for configuring new traffic engineering paths and deploying new physical topologies.

  • In an MPLS core, the physical topology and the logical topology are identical. This eliminates the n-squared problem associated with ATM networks.

  • The lack of a cell tax in an MPLS core means that the provisioned bandwidth is used much more efficiently than in an ATM core.

An MPLS core converges the Layer 2 and Layer 3 networks required in an ATM-based core. The management of a single network reduces costs and permits routing and traffic engineering to occur on the same platform. This simplifies the design, configuration, operation, and debugging of the entire network. MPLS support for a dynamic protocol, such as RSVP, simplifies the deployment of traffic-engineered LSPs across the network. Future MPLS support for CBR will achieve the same control as manual traffic engineering but with less human intervention, because the network itself participates in LSP calculation.


