MPLS in Brief


Why Multiprotocol Label Switching (MPLS)? A better question is, why not ATM? Despite IP's early popularity, ATM's connection-oriented features have always held an allure for networkers. Early ATM switches were faster than routers and much better at handling different types of traffic. But IP partisans never warmed to ATM, which suffered efficiency and scalability problems in an IP environment.

Even setting up and maintaining ATM was a pain for network operators. Someone had to be responsible for managing all the individual Permanent Virtual Circuits (PVCs) used to create an ATM network. Even if operators were prepared for that complexity, clients had to be prepared to pay ATM's famed "cell tax," the 20 percent overhead incurred when ATM switches segment large IP packets into small, fixed-length ATM cells.
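
If it helps to see where a figure like that comes from, here is some back-of-the-envelope arithmetic, assuming AAL5 framing (an 8-byte trailer added before segmentation) and 53-byte cells carrying 48 bytes of payload each; the exact percentage depends on the traffic mix.

# Illustrative arithmetic only: a rough estimate of ATM's "cell tax" for a
# single IP packet carried over AAL5 (8-byte trailer, 53-byte cells with
# 48-byte payloads). Figures are approximate.
import math

CELL_SIZE = 53        # bytes on the wire per ATM cell
CELL_PAYLOAD = 48     # usable bytes per cell
AAL5_TRAILER = 8      # AAL5 adds an 8-byte trailer before segmentation

def cell_tax(ip_packet_bytes: int) -> float:
    """Return overhead as a fraction of the bytes actually sent."""
    cells = math.ceil((ip_packet_bytes + AAL5_TRAILER) / CELL_PAYLOAD)
    wire_bytes = cells * CELL_SIZE
    return (wire_bytes - ip_packet_bytes) / wire_bytes

for size in (40, 576, 1500):
    print(f"{size:>5}-byte packet -> {cell_tax(size):.0%} overhead")
# Small packets (40-byte TCP ACKs) fare worst; averaged over a typical
# traffic mix, the overhead lands in the vicinity of the 20 percent figure.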

ATM switches also ran into scalability problems. Because of the Segmentation and Reassembly (SAR) overhead inherent in chopping IP packets into cells, maximum link performance depends on high-speed electronic components, which become expensive and difficult to manufacture at speeds greater than 2.5 Gbits/sec. The highest-performing routed links can handle speeds four times faster.

Routing became a huge problem in large ATM networks when routers were connected through ATM virtual circuits. Normally, engineers look to maximize network performance by creating a full mesh of circuits connecting the routers; each router becomes adjacent to every other router on that network, regardless of its physical location.

The problem with typical routing protocols is that adjacent routers update each other with information about network changes. Networks composed of a few routers generate nominal amounts of routing information, but the volume climbs as the number of routers increases: in a full mesh, the update traffic can grow as fast as n⁴, where n is the number of routers. Since the amount of routing information grows so quickly, large networks can reach the point where routing traffic overwhelms the routers. Work-arounds are possible, but typically not without sacrificing performance or simplicity.
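
For a feel for how quickly that grows, here is some back-of-the-envelope arithmetic for a full mesh of n routers: each topology change is re-advertised over roughly n² adjacencies, and with roughly n² virtual links whose state can change, the worst case approaches n⁴ updates. The figures below are illustrative only, not a protocol simulation.

# Back-of-the-envelope growth of routing chatter in a full mesh of n routers
# overlaid on ATM virtual circuits. Just the combinatorics behind the n^4
# worst case cited above.
def full_mesh_figures(n: int) -> dict:
    adjacencies = n * (n - 1) // 2        # virtual circuits in the mesh
    flood_per_event = n * (n - 1)         # each change re-advertised over every adjacency, both directions
    events = adjacencies                  # each circuit is a link whose state can change
    return {
        "routers": n,
        "virtual circuits": adjacencies,
        "worst-case updates": events * flood_per_event,   # ~n^4 / 2
    }

for n in (5, 20, 100):
    print(full_mesh_figures(n))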

The MPLS Answer

MPLS aims to give routers not only greater speed than ATM switches but also the sophistication of a connection-oriented protocol. MPLS was designed to do this by enabling routers to make forwarding decisions based on short labels, thereby avoiding the complex packet-by-packet look-ups used in conventional routing.

Today, routing arguably needs the turbo-boost that MPLS provides less than it once did: advances in ASIC design have dramatically improved the speed of even conventional routers. It's clear, however, that MPLS enables carriers to better control the traffic flows in their networks. With traffic engineering, network designers can assign various parameters to links and then use that information to maximize the efficiency of their networks.

Since MPLS supports traffic engineering, carriers can eliminate multiple layers in their networks. Instead of running IP over ATM and worrying about configuring and maintaining both networks, MPLS practitioners aim to migrate many of ATM's functions to MPLS, and perhaps even eliminate the underlying ATM protocol. Finally, MPLS segregates different customers' traffic into separate VPNs. Although MPLS VPNs aren't encrypted, the very act of segregating the traffic provides a first line of defense.

Under The Covers

So just how does MPLS work this magic? Unlike typical routing, MPLS works on the idea of flows, or Forwarding Equivalence Classes (FECs) in MPLS parlance. Flows consist of packets between common endpoints, identified by features such as network addresses, port numbers, or protocol types. Traditional routing reads the destination address and looks at routing tables for the appropriate route for each packet. Each router populates these routing tables by running routing protocols such as RIP, OSPF, or Border Gateway Protocol (BGP) to identify the appropriate route through the network.

By contrast, MPLS calculates the route once for each flow (or FEC) through a provider's network. The MPLS-compatible router embeds a short, fixed-length label inside each frame or cell. Along the way, routers use these labels to reduce look-up time and improve scalability.
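
A toy contrast between the two lookups may make this concrete. The tables and addresses below are invented, and real routers use far more elaborate data structures, but the difference in the work done per packet is the point.

# Toy contrast between a conventional longest-prefix-match lookup and an
# MPLS-style exact-match label lookup. Tables and addresses are made up.
import ipaddress

routing_table = {                       # destination prefix -> outgoing interface
    "10.0.0.0/8": "if0",
    "10.1.0.0/16": "if1",
    "0.0.0.0/0": "if2",
}

label_table = {17: ("if1", 42)}         # incoming label -> (interface, outgoing label)

def route_lookup(dst: str) -> str:
    """Conventional routing: scan for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [p for p in routing_table
               if addr in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return routing_table[best]

def label_lookup(label: int):
    """MPLS forwarding: a single exact-match lookup on the label."""
    return label_table[label]

print(route_lookup("10.1.2.3"))   # -> if1, after comparing every prefix
print(label_lookup(17))           # -> ('if1', 42), one dictionary hit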

Of course, that's a gross simplification of a complex process. Let's look at the path that a flow might take between two points. When a packet leaves your PC, it makes its way across the network and ultimately hits a Label-Edge Router (LER), likely located at the entrance to the carrier's network.

The LER is the doorkeeper to the MPLS network and classifies each packet as a member of an FEC. An FEC may be defined by the packet's IP header, the interface through which it arrives, the packet type (multicast or unicast), or other information such as the Type of Service (ToS) field that DiffServ uses to mark packets. Routing protocols modified for MPLS's unique requirements, such as OSPF with Traffic Engineering (OSPF-TE) and BGP-TE, gather the routing information needed to identify where to send the packet.
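
If it helps to see that classification step in code, here is a minimal sketch. The field names, FEC names, and matching rules are all invented for illustration; a real LER keys on whatever policy, routing, and DiffServ information the operator configures.

# Minimal sketch of LER-style FEC classification. Field names and rules are
# hypothetical.
from dataclasses import dataclass

@dataclass
class Packet:
    dst: str            # destination address from the IP header
    in_interface: str   # interface the packet arrived on
    tos: int            # Type of Service / DiffServ marking

def classify(pkt: Packet) -> str:
    """Map a packet to a Forwarding Equivalence Class (FEC) name."""
    if pkt.tos >= 40:                       # e.g. low-latency traffic class
        return "fec-low-latency"
    if pkt.dst.startswith("172.16."):       # traffic bound for one customer site
        return "fec-customer-a"
    if pkt.in_interface == "atm0":          # everything arriving on one port
        return "fec-transit"
    return "fec-best-effort"

print(classify(Packet("172.16.9.1", "ge0", 0)))   # -> fec-customer-a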

Once the LER determines the FEC's route, it inserts a label in each frame or cell. Typically, this label gets appended to the layer-2 (Ethernet, for example) header, but if there's no room, a small reference field is added that directs the router to the label's location inside the data field. If the underlying network is based on ATM, the label populates the Virtual Path Identifier/Virtual Channel Identifier (VPI/VCI) field. If the underlying network is frame-based, the label is enclosed in a shim between the data-link header and the IP header.
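
For frame-based networks, the shim entry defined in RFC 3032 is a 32-bit word: a 20-bit label, 3 experimental (class-of-service) bits, a bottom-of-stack bit, and an 8-bit TTL. A small sketch of packing and unpacking that word:

# The 32-bit MPLS shim entry from RFC 3032: 20-bit label, 3 EXP bits,
# 1 bottom-of-stack bit, 8-bit TTL.
import struct

def pack_shim(label: int, exp: int, bottom: bool, ttl: int) -> bytes:
    word = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)          # network byte order

def unpack_shim(data: bytes):
    (word,) = struct.unpack("!I", data)
    return {
        "label": word >> 12,
        "exp": (word >> 9) & 0x7,
        "bottom_of_stack": bool((word >> 8) & 0x1),
        "ttl": word & 0xFF,
    }

shim = pack_shim(label=17, exp=0, bottom=True, ttl=64)
print(shim.hex())            # 00011140
print(unpack_shim(shim))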

To ensure that capacity for the transmission is reserved end to end, the LER uses a label distribution protocol (LDP), such as Constraint-based Routing LDP (CR-LDP) or RSVP with Traffic Engineering (RSVP-TE), to distribute the labels that direct traffic along this route. The Label Switched Path (LSP) is then established. Traffic sent onto this LSP traverses the route specified by the LER. Each Label-Switched Router (LSR) reads the label, looks up in its table where packets carrying that label should be forwarded, and acts accordingly.
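
To make the forwarding step concrete, here is a toy walk along a three-hop LSP. The router names, label values, and tables are invented; they stand in for the state that the label distribution protocol installs on each LSR.

# Toy label-swapping walk along an LSP with invented per-router tables.
lsr_tables = {
    "LER-in":  {None: ("LSR-1", 17)},   # ingress: classify, push label 17
    "LSR-1":   {17:   ("LSR-2", 42)},   # swap 17 -> 42
    "LSR-2":   {42:   ("LER-out", 3)},  # swap 42 -> 3
    "LER-out": {3:    (None, None)},    # egress: pop label, route as plain IP
}

def forward(path_start: str, payload: str) -> None:
    node, label = path_start, None
    while node is not None:
        next_hop, next_label = lsr_tables[node][label]
        if label is None:
            action = f"push label {next_label}"
        elif next_label is None:
            action = "pop label"
        else:
            action = f"swap {label} -> {next_label}"
        print(f"{node}: {action}, forwarding '{payload}' to {next_hop or 'IP routing'}")
        node, label = next_hop, next_label

forward("LER-in", "packet for 172.16.9.1")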

Traffic Engineering

As with any new protocol, MPLS carries its share of options and features, which some users have adopted with near-religious fervor. One example is the choice of label distribution protocol. Everyone agrees that LDPs enable LSRs to reliably discover peers and establish communication using four kinds of messages. Networkers can use these protocols to direct their traffic in accordance with predefined weights they've assigned to links in their networks.

MPLS Operation. With MPLS, the LER can decide on the optimal path by accounting for considerations other than routing hops, such as line speed (a). Once the LSP is established, the packet is then properly labeled (b) and sent through the network.
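
A minimal sketch of that idea: assign each link an administrative weight (derived, say, from line speed) and let the ingress LER compute the cheapest explicit route rather than simply the fewest hops. The topology and weights below are invented for illustration.

# Minimal constraint-style path selection: pick the cheapest explicit route
# over operator-assigned link weights, using Dijkstra over a dict graph.
import heapq

links = {   # node -> {neighbor: administrative weight}
    "A": {"B": 10, "C": 1},
    "B": {"D": 1},
    "C": {"D": 1},
    "D": {},
}

def cheapest_path(src: str, dst: str):
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in links[node].items():
            heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return None

print(cheapest_path("A", "D"))   # (2, ['A', 'C', 'D']) despite equal hop counts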

Beyond that, there's been considerable debate as to the best approach. A number of vendors, most notably Cisco Systems and Juniper Networks, have advocated using RSVP-TE. Other vendors, chiefly Ericsson and Nortel Networks, have pushed CR-LDP.

The two approaches have a lot in common. Both protocols use similar Explicit Route Objects (EROs). Both protocols use ordered LSP setup procedures, and both include QoS information in their signaling messages to enable automatic resource allocation and LSP establishment.

The differences largely come down to orientation. RSVP-TE extends a pre-existing protocol, RSVP, to support label distribution and explicit routing, so there's already an installed base. CR-LDP instead extends LDP, a comparatively new protocol originally designed for hop-by-hop label distribution, to add explicit routing and QoS signaling. RSVP runs over IP and uses forward signaling; CR-LDP runs over TCP and uses reverse signaling.

Today, the debate has simmered down. Ericsson and Nortel support RSVP-TE as well as CR-LDP. The result is that RSVP-TE standards tend to be pushed through faster than CR-LDP. However, networkers can still expect to hear about LDP, says Paul Brittain, a senior network architect at Data Connection (www.dataconnection.com), a developer of MPLS gear. Carriers will still use LDP to distribute labels within their networks for establishing large flows between their POPs. RSVP-TE will likely be used at the customer premises to establish customer-specific flows between sites, for example.

MPLS Over Everything

MPLS gives providers better performance and better control over their networks. At the same time, though, MPLS cannot establish or alter physical connections.

Enter Generalized MPLS (GMPLS), once called Multiprotocol Lambda Switching. GMPLS extends MPLS to control not only routers but also Dense Wavelength Division Multiplexing (DWDM) systems, Add/Drop Multiplexors (ADMs), photonic cross-connects, and so on. With GMPLS, providers can dynamically provision resources and provide the necessary redundancy for implementing various protection and restoration techniques.

MPLS deals only with what GMPLS calls Packet Switch Capable interfaces; GMPLS adds four other interface types. Layer-2 Switch Capable interfaces forward data based on the content of frame and cell headers. TDM Capable interfaces forward data based on the data's time slot. Lambda Switch Capable interfaces, such as a photonic cross-connect, work on individual wavelengths or wavebands. Finally, Fiber Switch Capable interfaces work on individual or multiple fibers.

These different LSPs utilize the "nesting" feature inherent to MPLS: Within MPLS, smaller flows are aggregated into larger flows. The same basic concept applies to GMPLS, but think of the LSPs as virtual representations of physical constructs. So LSPs representing lower-order SONET circuits might be nested together within a higher-order SONET circuit. Similarly, LSPs that run between Fiber Switch Capable interfaces might contain LSPs that run between Lambda Switch Capable ones, which could contain those that run over TDM, which could include Layer-2 Switch Capable LSPs, which could, finally, include Packet Switch Capable LSPs.
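
One way to picture the nesting rule is as an ordering of switching types from finest to coarsest, where a finer LSP may ride inside a coarser (or equal) one. The little check below is illustrative only; the ordering simply follows the interface types described above.

# Sketch of the GMPLS nesting order: a finer-grained LSP can be carried
# inside a coarser one, never the other way around. Abbreviations:
# PSC = Packet Switch Capable, L2SC = Layer-2 Switch Capable, TDM = TDM
# Capable, LSC = Lambda Switch Capable, FSC = Fiber Switch Capable.
ORDER = ["PSC", "L2SC", "TDM", "LSC", "FSC"]   # finest -> coarsest

def can_nest(inner: str, outer: str) -> bool:
    """True if an `inner` LSP may ride inside an `outer` LSP."""
    # Equal types can also stack, as with ordinary MPLS label stacking.
    return ORDER.index(inner) <= ORDER.index(outer)

print(can_nest("PSC", "TDM"))   # True: packet LSPs fit inside a TDM circuit
print(can_nest("LSC", "TDM"))   # False: a wavelength can't nest inside a time slot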

Otherwise, GMPLS functions much like MPLS. LSPs are established by using RSVP-TE or CR-LDP to send a path/label-request message. This message contains a generalized label request, often an ERO, and specific parameters for the particular technology. The generalized label request is a GMPLS addition that specifies the LSP encoding type and the LSP payload type. The encoding type indicates the type of technology being considered, whether it's SONET or Gigabit Ethernet, for example. The LSP payload type identifies the kind of information carried within that LSP's payload. The ERO more or less controls the path that an LSP takes through the network.
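
A rough sketch of what such a request might carry, using placeholder field names and values rather than any on-the-wire encoding:

# Rough sketch of a GMPLS path/label-request message. Field names and values
# are placeholders for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GeneralizedLabelRequest:
    lsp_encoding_type: str    # technology of the LSP, e.g. "SONET" or "Ethernet"
    payload_type: str         # what the LSP will carry, e.g. "IP"

@dataclass
class PathRequest:
    label_request: GeneralizedLabelRequest
    explicit_route: List[str] = field(default_factory=list)   # the ERO: hops to traverse
    tech_params: dict = field(default_factory=dict)           # technology-specific knobs

req = PathRequest(
    GeneralizedLabelRequest("SONET", "IP"),
    explicit_route=["nodeA", "nodeC", "nodeF"],
    tech_params={"signal_type": "STS-3c"},
)
print(req)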

The message traverses a series of nodes, as in MPLS, to reach its destination. The destination replies with the necessary labels, which are inserted into the LSRs' tables along the way. Once the reply reaches the initiating LER, the LSP can be established, and traffic is sent to the destination.

Resources

At the Multiprotocol Label Switching (MPLS) Resource Center (www.mplsrc.com), you'll find a great FAQ with loads of information about MPLS. The resource center's well-organized list of standards is also quite helpful.

Of course, if MPLS standards are really what you want, visit the IETF, the originator of the MPLS standard, at www.ietf.org.

Many MPLS vendors offer in-depth technical white papers. Data Connection (www.dataconnection.com), a developer of MPLS gear, offers papers that are both clear and unbiased.

This tutorial, number 163, by David Greenfield, was originally published in the February 2002 issue of Network Magazine.

 