5.3 Routers




Routers are similar to bridges in that both provide filtering and forwarding functions across the network. But while bridges operate at the physical and data link layers of the OSI reference model, routers join LANs at the network layer (see Figure 5.3). Routers convey LAN protocols over the WAN to remote locations. Routers distinguish among network layer protocols such as IP, Internetwork Packet Exchange (IPX), and AppleTalk, and make intelligent packet delivery decisions using an appropriate routing protocol. They can be used to segment a network with the goals of limiting broadcast traffic and providing QoS and redundant paths. Some routers can perform packet filtering to control the kind of traffic that is allowed to pass through them, providing a firewall that enforces security policy.

Figure 5.3: Routers operate at the network layer of the OSI reference model.

Routers may be deployed in mesh as well as point-to-point networks and can perform bridging over separate interfaces. A router can also provide multiple types of interfaces, including those for T-carrier and optical carrier, frame relay, ISDN, ATM, cable networks and digital subscriber line (DSL) services, among others.

Although routers include the functionality of bridges, they differ from bridges in several other ways. Routers generally offer more embedded intelligence and, consequently, more sophisticated network management and traffic prioritization capabilities. Unlike bridges, routers also offer flow control and error protection capabilities. Perhaps the most significant distinction between a router and a bridge is that a bridge delivers packets of data on a best-effort basis, whereas a router takes a more intelligent approach to getting packets to their destination: it selects the most economical path (i.e., the least number of hops) based on its knowledge of the overall network topology, as recorded in its internal routing table.

5.3.1 Types of Routing

There are two types of routing: static and dynamic. In static routing, the network manager configures the routing table to set fixed paths between two routers. Unless reconfigured, the paths on the network never change. Although a static router will recognize that a link has gone down and report the event, it will not automatically reroute traffic. A dynamic router, on the other hand, reconfigures the routing table automatically and recalculates the most efficient path in terms of offered traffic load, available bandwidth, and number of hops (routers) to the destination.
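The fixed paths of static routing can be sketched in code. The following is a minimal Python sketch of a static routing table with longest-prefix-match lookup; all addresses are hypothetical, and a real router's table carries far more state:

```python
import ipaddress

# A static routing table: fixed (prefix -> next hop) entries configured
# by the network manager. All addresses are hypothetical.
STATIC_ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "192.0.2.1",
    ipaddress.ip_network("10.1.5.0/24"): "192.0.2.2",   # more specific
    ipaddress.ip_network("0.0.0.0/0"):  "192.0.2.254",  # default route
}

def lookup(dest: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in STATIC_ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return STATIC_ROUTES[best]

print(lookup("10.1.5.9"))   # matched by the /24 -> 192.0.2.2
print(lookup("10.1.9.9"))   # matched by the /16 -> 192.0.2.1
print(lookup("8.8.8.8"))    # falls through to the default route
```

Unless the manager edits STATIC_ROUTES, these paths never change, which is exactly the limitation the text describes: a failed next hop is still returned.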

Some routers balance the traffic load across multiple access links, providing an N × T1 inverse multiplexer function. This allows multiple T1 access lines operating at 1.544 Mbps each to be used as a single higher bandwidth facility. If one of the links fails, the other links remain in place to handle the offered traffic. As soon as the failed link is restored to service, traffic is spread across the entire group of lines per the original configuration. If all the available bandwidth is in use, additional traffic will be held back until bandwidth is freed. Applications will get the bandwidth based on their priority designations.

5.3.2 Types of Routers

Routers span the gamut from relatively simple products for home and small-office use, to more complex products for the enterprise, to the largest routers used on WAN backbones that may be metropolitan, regional, or national in scope. Today’s routers are highly modular and upgradeable via software to provide more features and handle more protocols.

Consumer-Class Routers

At the low end is a consumer class of routers that provides shared access to the Internet over cable or DSL. These fixed-configuration devices include network address translation (NAT), which provides a basic level of protection for the home network by hiding internally used IP addresses from the public Internet. (This is different from a true firewall, which uses rules to determine whether traffic is allowed to pass into or out of the home network.) Acting as a Dynamic Host Configuration Protocol (DHCP) server for the network, the cable/DSL router is the only externally recognized Internet device on the internal network. All the users given an IP address by the router are hidden behind the NAT mechanism, which filters incoming and outgoing requests, helping to keep unwanted traffic off the LAN. The router can also be set up to block internal users’ access to the Internet. And while a typical router may have to rely on an external hub or switch to share its Internet connection, today’s cable/DSL routers also include the functions of a switch, allowing each port to provide the full duplex speed of 10/100 Mbps.
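The NAT behavior described above can be sketched as a translation table. This is a simplified port-address-translation model with hypothetical addresses, not how any particular router implements it:

```python
# Minimal sketch of NAT state as a cable/DSL router might keep it.
# All addresses and ports are hypothetical.
PUBLIC_IP = "203.0.113.7"

class Nat:
    def __init__(self):
        self.table = {}       # (private_ip, private_port) -> public_port
        self.reverse = {}     # public_port -> (private_ip, private_port)
        self.next_port = 40000

    def outbound(self, src_ip, src_port):
        """Rewrite an outgoing packet's source to the public address."""
        key = (src_ip, src_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.table[key]

    def inbound(self, dst_port):
        """Only traffic matching an existing mapping reaches the LAN."""
        return self.reverse.get(dst_port)   # None -> drop (unsolicited)

nat = Nat()
pub = nat.outbound("192.168.1.10", 5000)
print(pub)                  # ('203.0.113.7', 40000)
print(nat.inbound(pub[1]))  # ('192.168.1.10', 5000)
print(nat.inbound(41234))   # None: unsolicited traffic is dropped
```

The last line is the "basic level of protection" the text mentions: traffic that does not match an internally initiated mapping never reaches a private address.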

Some of these products are growing in sophistication to support telecommuting environments through the creation of virtual private networks (VPNs) that allow users to access the corporate network over secure connections through the Internet. At the same time, such routers can be configured so that one or more of the private IP addresses is separated from the rest of the address block to become a public IP address, allowing outside users to reach an internal user without being blocked by the firewall. An example of when this configuration might be necessary is to support a user who runs an audio or video application like Microsoft's NetMeeting and needs to be able to place and accept calls via the PC. Normally, such traffic would be blocked by the firewall. Another way to run such applications is to configure the firewall so that the ports used by NetMeeting and similar applications are not blocked. The port numbers for these applications are usually listed on the Web sites of the product vendors.

Access Routers

This is another type of low-end product, but one that is intended for corporate environments, specifically branch offices. These are usually modular devices available in Ethernet and token-ring versions, which support a limited number of protocols and physical interfaces. They provide connectivity to high-end multi-protocol routers, allowing large and small nodes to be managed as a single logical enterprise network.

These devices provide routing, firewall, and VPN functions. When configured for VPN use, some routers can be equipped with optional hardware-based encryption to offer throughput performance up to full-duplex T1/E1 speeds. For voice applications, these routers support analog and digital communications with the addition of various WAN interface cards, allowing them to work with the existing telephone infrastructure, while providing VoIP or voice over frame relay (VoFR). Supported WAN technologies include broadband DSL, ISDN, leased lines, and frame relay. For serial connectivity, an optional integrated CSU/DSU is also available.

Backbone Routers

Backbone routers are used for building highly meshed internetworks. In addition to allowing several protocols to share the same logical network, these devices aggregate bandwidth from multiple smaller sites, pick the shortest path to the end node, balance the load across multiple physical links, reroute traffic around points of failure or congestion, and implement flow control in conjunction with the end nodes. They also provide the means to tie remote branch offices into the corporate backbone, which might use such WAN services as TCP/IP, T1, ISDN, and ATM. Some vendors also provide an optional interface for switched multi-megabit data service (SMDS), but support for this service is scheduled to end in 2003. The few remaining carriers that support SMDS have moved their customers to other fast-packet technologies.

The backbone routers support a variety of optical technologies, including SONET and resilient packet ring (RPR), which doubles network bandwidth over traditional SONET rings while maintaining sub-50-ms network restoration. This makes such routers useful for extending Ethernet and IP services across metropolitan areas. For companies without metro infrastructures of their own, the managed Ethernet services offered by carriers may be an economical alternative. The carrier supplies and manages the backbone router, which is located on the customer premises. In essence, the customer’s location becomes a node on the carrier’s network.

5.3.3 Routing Protocols

Each router on the network keeps a routing table and moves data along the network from one router to the next using such protocols as the Routing Information Protocol (RIP) and Open Shortest Path First (OSPF). There are numerous other special-purpose routing protocols, including the Border Gateway Protocol (BGP), Resource Reservation Protocol (RSVP), and Protocol Independent Multicast (PIM). Other protocols can be added to routers to enhance quality of service, such as Differentiated Services (Diff-Serv). One of the latest innovations in routing protocols is Multi-Protocol Label Switching (MPLS), which delivers QoS and security capabilities over IP networks, including VPNs.

Routing Information Protocol

RIP is the older of the two major routing protocols. Although still supported by many vendors, it does not perform well in today’s increasingly complex networks. As the network expands, routing updates grow larger under RIP and consume more bandwidth to convey the information to other routers on the network. When a link fails, the RIP update procedure slows route discovery, increases network traffic and bandwidth usage, and may cause temporary looping of data traffic. Also, RIP cannot calculate routes based on such factors as delay and bandwidth, and its line selection facility is capable of choosing only one path to each destination.
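RIP's distance-vector update can be sketched in a few lines. This hypothetical example shows the hop-count-only metric at work: on receiving a neighbor's table, a router adopts any route that is cheaper via that neighbor, with no way to weigh delay or bandwidth:

```python
# Distance-vector update as RIP performs it. Hop counts are the only
# metric; the topology and router names are hypothetical.
INFINITY = 16   # RIP treats 16 hops as unreachable

def rip_update(my_table, neighbor, neighbor_table):
    """Both tables map: destination -> (hops, next_hop)."""
    changed = False
    for dest, (hops, _) in neighbor_table.items():
        candidate = min(hops + 1, INFINITY)
        if dest not in my_table or candidate < my_table[dest][0]:
            my_table[dest] = (candidate, neighbor)
            changed = True
    return changed

table = {"net_a": (1, "r1")}
rip_update(table, "r2", {"net_a": (3, "r9"), "net_b": (1, "r9")})
print(table)   # net_a kept at 1 hop; net_b learned at 2 hops via r2
```

Because each router advertises its entire table, updates grow with the network, and the incremental hop-by-hop convergence after a failure is what produces the slow route discovery and temporary loops the text describes.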

Open Shortest Path First

The newer routing standard, OSPF, overcomes the limitations of RIP and even provides capabilities not found in RIP. The update procedure of OSPF requires that each router on the network transmit a packet with a description of its local links to all other routers. On receiving each packet, the other routers acknowledge it, and in the process, distributed routing tables are built from the collected descriptions. Since these description packets are relatively small, they produce a minimum of overhead. When a link fails, updated information floods the network, allowing all the routers to simultaneously calculate new tables.
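The shortest-path-first computation each OSPF router runs over the collected link descriptions is Dijkstra's algorithm. Below is a minimal sketch over a hypothetical four-router topology with hypothetical link costs:

```python
import heapq

# Link-state database assembled from flooded link descriptions.
# Routers and costs are hypothetical.
LINKS = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def spf(source):
    """Dijkstra: lowest total cost from source to every router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue   # stale heap entry
        for neighbor, link_cost in LINKS[node].items():
            new = cost + link_cost
            if new < dist.get(neighbor, float("inf")):
                dist[neighbor] = new
                heapq.heappush(heap, (new, neighbor))
    return dist

print(spf("A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Because every router holds the same link-state database, all routers can recompute simultaneously after a failure, which is why OSPF converges faster than RIP's hop-by-hop table exchanges.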

Border Gateway Protocol

As more mission-critical business applications become available over the Internet and corporate intranets and extranets, there is increasing need for equipment-based restoral/protection processes to keep paths continuously available. On a highly meshed TCP/IP-based intranet, for example, routers are capable of diverting traffic around failed nodes or points of congestion, but often the access links must be protected as well.

One way to do this is by implementing the BGP, which balances the traffic between two carriers. When different paths to the same destination are available, BGP chooses the single best path for reaching that destination based on such metrics as next hop, administrative weights, local preference, origin of the route, and path length. Once the path is selected, all the traffic goes out over that path. An alternative to per-destination load balancing is per-packet load balancing, whereby individual packets are distributed evenly between the two links. In either scenario, if one of the links goes down, the other link takes all of the traffic.
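The best-path choice can be sketched as an ordering over the metrics named above. This is a simplified comparison with hypothetical routes; the actual BGP decision process has additional tie-breaking steps:

```python
# Simplified BGP best-path selection: prefer higher local preference,
# then higher administrative weight, then the shorter AS path.
# Route attributes below are hypothetical.
def best_path(paths):
    return max(
        paths,
        key=lambda p: (p["local_pref"], p["weight"], -len(p["as_path"])),
    )

paths = [
    {"next_hop": "198.51.100.1", "local_pref": 100, "weight": 0,
     "as_path": [65001, 65010]},
    {"next_hop": "198.51.100.2", "local_pref": 200, "weight": 0,
     "as_path": [65002, 65011, 65010]},
]
print(best_path(paths)["next_hop"])   # higher local pref wins
```

As the text notes, once this single best path is chosen all traffic follows it; per-packet load balancing is a separate mechanism outside the selection process sketched here.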

Resource Reservation Protocol

For real-time applications like VoIP, this protocol reserves network resources and prioritizes traffic to guarantee QoS over the IP network. RSVP runs on top of IP to provide receiver-initiated setup of resource reservations on behalf of an application data stream. When an application requests a specific QoS for its data stream, RSVP is used to deliver the request to each router along the path the data stream will take and maintain router and host states to support the requested level of service. In this way, RSVP essentially allows a router-based network to mimic a circuit-switched network on a best-effort basis.

At each node, the RSVP program applies a local decision procedure, called admission control, to determine if it can supply the requested QoS. If admission control succeeds, the RSVP program in each router passes incoming data packets to a packet classifier that determines the route and the QoS for each packet. The packets are then queued as necessary in a packet scheduler that allocates resources for transmission on a particular link. If admission control fails at any node, the RSVP program returns an error indication to the application that originated the request.
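Admission control at a single node can be sketched as a bandwidth check against the link's remaining capacity. The numbers are hypothetical:

```python
# Sketch of RSVP-style admission control at one node: a reservation
# is admitted only if the link still has the requested bandwidth.
class Link:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.reserved = 0

    def admit(self, request_kbps):
        """Accept the reservation only if capacity remains."""
        if self.reserved + request_kbps > self.capacity:
            return False   # RSVP returns an error to the application
        self.reserved += request_kbps
        return True

link = Link(capacity_kbps=1544)   # roughly a T1's worth of bandwidth
print(link.admit(1000))   # True: reservation installed
print(link.admit(600))    # False: would exceed capacity
print(link.admit(500))    # True: 1500 of 1544 kbps now reserved
```

In the full protocol this check runs at every router along the path, and the accumulated per-flow state at each of those routers is the scaling burden discussed below.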

RSVP is a stateful protocol, meaning that the network nodes must coordinate with each other to set up a path, and then remember state information about the flow. This can be an overwhelming task on the Internet, where millions of flows may exist across a router. The RSVP approach is now considered too unwieldy for the Internet but appropriate for smaller enterprise networks.

Differentiated Services

Applications like Web browsing and e-mail work well with the best-effort QoS provided by the Internet. With best-effort services, however, data can easily be lost or delayed. The emergence in recent years of high-bandwidth and delay-sensitive applications such as VoIP, video over IP, and VPNs has prompted development of improved Internet QoS levels. The older RSVP, based on a sophisticated per-connection signaling system, requires routers in the network to agree to a specific level of service. For this reason, RSVP is complex to deploy and does not scale well.

One of the newest attempts to remedy the deficiencies of RSVP is Diff-Serv, a simple technology that allows large corporate IP backbone users to quickly deploy different QoS levels. Diff-Serv employs a stateless approach that minimizes the need for nodes in the network to remember anything about flows. This method is not as good as the stateful approach but more practical to implement across the Internet. Diff-Serv devices at the edge of the network mark packets in a way that describes the service level they should receive. Network elements simply respond to these markings without the need to negotiate paths or remember extensive state information for every flow. In addition, applications do not need to request a particular service level or provide advance notice about where traffic is going.

Instead of the complex dynamic signaling of RSVP, various types of traffic requiring different QoS have different tags applied. With Diff-Serv, instead of handling each voice connection separately, for example, all traffic with the same tag is treated in the same way. This built-in aggregation mechanism is an important reason why Diff-Serv can scale to support larger environments.

A router’s forwarding process as modified by a Diff-Serv marking is known as a per-hop behavior (PHB). A PHB defines forwarding behavior that stretches across a network to provide a particular class of service. Among the PHBs are default and expedited. Default PHB is defined as today’s best-effort service. Expedited PHB is the other extreme: low absolute delay, low delay variation, and low packet loss.

Diff-Serv is implemented in two types of routers: traffic conditioners and DS-capable. Traffic conditioners perform sophisticated traffic classification, monitoring, shaping, scheduling, and marking. They are most likely to be access routers. DS-capable routers have scheduling capabilities and modify their forwarding behavior based on the markings. They are most likely to be backbone routers. This separation of function is another reason for the simplicity of Diff-Serv—most of the complexity is in the traffic conditioner, which is at the edge of the network. At the same time, DS-capable routers in the core need only support modified forwarding operations.

Protocol Independent Multicast

For applications that rely on content streaming, network performance can be greatly improved with PIM. With this routing protocol, instead of sending out 100 information streams to 100 subscribers, only one information stream is sent from the source server. The multicast routers replicate and distribute the stream within the network to only the nodes that have subscribers (see Figure 5.4) who requested the stream through a registration process. The larger the size of the distribution list, however, the more multicast will impact router performance.

Figure 5.4: In PIM, streaming content goes out from the server once and is replicated at the RP to reach the nearest subscribers who have specifically requested the stream. This method of content delivery reduces the processing burden on the source server and conserves network bandwidth.

When subscribers join a multicast group, the directly connected routers send PIM “join” messages to the rendezvous point (RP). The RP keeps track of multicast groups. Servers that send multicast packets are registered with the RP by the first-hop router. The RP then sends join messages toward the source. At this point, packets are forwarded on a shared distribution tree. The result is that content providers no longer need to purchase enormous amounts of bandwidth to accommodate a large number of subscribers or buy multiple high-capacity servers to send out all the data streams. Instead, a single data stream is sent, the size of which is based on the type of content.

A multicast can reach potentially anyone who specifically subscribes to the session—whether they have a dedicated connection or use a dial-up modem connection. Of course, the content originator can put distance limits on the transmission and restrict the number of subscribers that will be accepted for any given program.

A variety of methods can be used to advertise a multicast. A program guide can be sent to employees and other appropriate parties via e-mail or it can be posted on a Web site. If the company already has an information channel on the Web that delivers content to subscribers, the program guide can be one of the items “pushed” to users when they access the channel.

When a person wants to receive a program, he or she enrolls through an automated registration procedure. The request is sent to the server running the multicast, which adds the subscriber’s IP address to its subscriber list. In this way, only users who want to participate will receive packets from the server.

The user also selects a multicast node from those listed in the program guide. Usually, this will be the router closest to the user’s location. The user becomes a member of this particular node. Group membership information is distributed among neighboring routers so that multicast traffic gets routed only along paths that have subscribers at the end nodes. From the end node, the data stream is delivered right to the user’s computer.

Once the session is started, users can join and leave the multicast group at any time. The multicast routers adapt to the addition or deletion of network addresses dynamically, so the data stream gets to new destinations when users join and stops the data stream from going to destinations that no longer want to receive the session.
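The per-group state a multicast router keeps can be sketched as a mapping from group to downstream interfaces, updated as joins and leaves arrive; all names and addresses are hypothetical:

```python
# Sketch of multicast group state as a PIM router might keep it: a
# join adds an interface to the group's fan-out list, a leave removes
# it, and the incoming stream is replicated only where members exist.
class MulticastRouter:
    def __init__(self):
        self.members = {}   # group address -> set of downstream interfaces

    def join(self, group, interface):
        self.members.setdefault(group, set()).add(interface)

    def leave(self, group, interface):
        self.members.get(group, set()).discard(interface)

    def replicate(self, group):
        """One incoming stream is copied only onto member interfaces."""
        return sorted(self.members.get(group, set()))

r = MulticastRouter()
r.join("239.1.1.1", "if_east")
r.join("239.1.1.1", "if_west")
r.leave("239.1.1.1", "if_east")
print(r.replicate("239.1.1.1"))   # ['if_west']
```

This is why the stream stops flowing toward destinations that no longer want it: once the last member interface leaves, replicate() returns nothing for that branch.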

Multiprotocol Label Switching

With the explosive growth of the Internet in recent years has come growing dissatisfaction with its performance. New techniques are available to improve performance on the Internet and on private intranets alike, such as multiprotocol label switching (MPLS), which delivers QoS and enforces security over IP networks.

MPLS attaches tags, or labels, to IP packets as they leave the edge router and enter the MPLS-based network. The labels eliminate the need for intermediate router nodes to look deeply into each packet’s IP header to make forwarding and class-of-service handling decisions. The result is that packet streams can pass through an MPLS-based WAN infrastructure very fast, and time-sensitive traffic can get the priority treatment it requires.

The same labels that distinguish IP packet streams for appropriate class-of-service handling also provide secure isolation of these packets from other traffic over the same physical links. Since MPLS labeling hides the real IP address and other aspects of the packet stream, it provides data protection at least as secure as other Layer 2 technologies, including frame relay and ATM.

To enhance the performance of IP networks, the various routes are assigned labels. Each node maintains a table of label-to-route bindings. At the node, a label switch router (LSR) tracks incoming and outgoing labels for all routes it can reach, and it swaps an incoming label with an outgoing label as it forwards packet information (see Figure 5.5). Since MPLS routers do not need to read as far into a packet as a traditional router does and perform a complex route lookup based on destination IP address, packets are forwarded much faster, which improves the performance of the entire IP network.

click to expand
Figure 5.5: A label-switched route is defined by fixed-length tags appended to the data packets. At each hop, the LSR strips the existing label and applies a new label, which tells the next hop how to forward the packet. These labels enable the data packets to be forwarded through the network without the intermediate routers having to perform a complex route lookup based on destination IP address.

Although MPLS routers forward packets on a hop-by-hop basis, just like traditional routers, they operate more efficiently. As a packet arrives on an MPLS node, its label is compared to the label information base (LIB), which contains a table that is used to add a label to a packet, while determining the outgoing interface to which the data will be sent. After consulting the LIB, the MPLS node forwards the packet toward its destination over a label switched path (LSP). The LIB can simplify forwarding and increase scalability by tying many incoming labels to the same outgoing label, achieving even greater levels of efficiency in routing. The LSPs can be used to provide QoS guarantees, define and enforce service-level agreements, and establish private user groups for VPNs.
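The label-swap step and the LIB can be sketched as a simple table lookup; the label values and interface names are hypothetical:

```python
# Sketch of an LSR's label information base (LIB) and the label-swap
# forwarding step: swap the incoming label for the outgoing one and
# pick the outgoing interface, with no IP route lookup at all.
LIB = {
    # incoming label -> (outgoing label, outgoing interface)
    17: (29, "if1"),
    18: (29, "if1"),   # many incoming labels can share one outgoing label
    23: (41, "if2"),
}

def forward(packet):
    """One LSR hop along a label switched path (LSP)."""
    out_label, interface = LIB[packet["label"]]
    packet["label"] = out_label
    return packet, interface

pkt, iface = forward({"label": 17, "payload": b"..."})
print(pkt["label"], iface)   # 29 if1
```

The two entries mapping to the same outgoing label illustrate the aggregation the text mentions: tying many incoming labels to one outgoing label keeps the core tables small as the network grows.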

MPLS provides a flexible scheme in that the labels could be used to manually define routes for load sharing or to establish a secure path. A multilevel system of labels can be used to indicate route information within a routing domain (interior routing) and across domains (exterior routing). This decoupling of interior and exterior routing means MPLS routers in the middle of a routing domain would need to track less routing information. That, in turn, helps the technology scale to handle large IP networks.

MPLS could provide a similar benefit to corporations that have large ATM-based backbones with routers as edge devices. Normally, as such networks grow and more routers are added, each router may need additional memory to keep up with the increasing size of the routing tables. MPLS alleviates this problem by having the ATM switches use the same routing protocols as routers. In this way, the routers on the edge of the backbone and the ATM-based label switches in the core would maintain summarized routing information and only need to know how to get to their nearest neighbor—not to all peers on the network.

In terms of virtual networks, MPLS offers significant advantages over IP-based VPN tunneling protocols such as Internet Protocol with security (IPsec). In the latter case, VPN relationships are established between known endpoints, and large-scale deployment requires planning and coordination to address issues of policy synchronization and peering configuration. MPLS networks, on the other hand, are more scalable since no site-to-site peering is required. Labels are used instead of predefined user relationships. A typical MPLS-based VPN deployment is capable of supporting tens of thousands of VPN groups over the same network.

This makes MPLS ideally suited for the core of the network where QoS, traffic engineering, and bandwidth utilization can be fully controlled, especially if service-level guarantees are offered in conjunction with the VPN. IPsec is best at the edge of the network where there is a higher degree of exposure to data privacy and where the security mechanisms of IPsec can best be applied, such as tunneling and encryption. MPLS offers security as well, but this is achieved from provisioning virtual circuits that separate traffic between different organizations or departments within an organization to create a trusted environment similar to that offered by a frame relay or ATM network.

MPLS also offers benefits to Internet service providers and carriers. It allows Layer 2 switches to participate in Layer 3 routing. This increases network scalability because it reduces the number of routing peers that each edge router must deal with. It also enables new traffic tuning mechanisms in router-based networks by integrating virtual circuit capabilities previously available only in Layer 2 fabrics. With label switching, packet flows can be directed across the router network along predetermined paths, similar to virtual circuits, rather than along the hop-by-hop routes of normal routed networks. This enables routers to perform advanced traffic management tasks, such as load balancing, in the same manner as ATM or frame relay switches.

Finally, MPLS can be applied not only to IP networks but to any other network layer protocol as well, because label switching is independent of the routing protocols employed. While the Internet runs on IP, a lot of campus backbone traffic is transported on protocols such as IPX, making a pure IP solution inadequate for many organizations.

MPLS came about as a result of Cisco’s tag switching concept, which was given over to the IETF for further development and standardization. In 1996, the framework document published by the IETF presented MPLS as a label-switching architecture suitable for any protocol. The label process takes place without referencing the content of the data packet, eliminating the need for protocol-specific handling. By having the data-handling layer of MPLS separate from the control layer, multiple control layers—one for each protocol—could be supported. The IETF, however, focused on MPLS as a means of improving IP networking, where the commercial opportunity is greatest. MPLS may encourage more service providers to migrate core infrastructures from ATM to IP. Now that MPLS provides IP with high speed, QoS, and security, there may be less reason for service providers to build an ATM infrastructure, which provides these advantages but at a much higher cost than IP.






LANs to WANs: The Complete Management Guide
ISBN: 1580535720
Year: 2003
Pages: 184
