Planning and Designing WANs


With the overview of WAN technologies out of the way, let’s take a deeper look at the actual planning and design process. First, we’ll look at the methodology used in design: PDIOO (planning, design, implementation, operation, and optimization). After that, we will discuss the important criteria you need to consider when completing the design.

PDIOO

PDIOO stands for planning, design, implementation, operation, and optimization. Within this methodology, designing the Enterprise Edge WAN can be broken down into three main steps:

  1. Analyze the customer requirements.

  2. Characterize the existing network.

  3. Design the topology.

Let’s spend a few moments on each of these steps.

Analyze the Customer Requirements

There are many things to learn about the customer’s network before proceeding further with the design. Application traffic and flows must be discovered and mapped. You must anticipate new applications, network growth, and the relocation of existing application traffic. Do any of the new applications have specific network requirements (VoIP, for example)? Is growth projected? Where? Answers to these questions will help prevent networks from becoming obsolete before they are implemented.

Characterize the Existing Network

This step includes an inventory of current network assets, as well as a characterization of the current network's ability to expand to meet new network requirements. Do the network devices have the memory, processing power, and other capabilities to handle the new requirements? Can the current routers handle IPSec and QoS? Can some of them? How have current Layer 2 technologies been working? Can you leave some of these pieces in place? Answers to these questions help the designer to understand the starting point of the implementation.

Design the Topology

Once the preceding steps have been completed, the next step is to create the network topology. This should consider existing resources, but should also include the new requirements such as backup or redundant connections, bandwidth issues, and software and QoS requirements.

WAN Design Criteria

WAN designs should minimize the cost of bandwidth and optimize bandwidth efficiency, particularly on the links between the corporate office and the remote offices.

As a company grows, it's imperative that its internetwork grow with it. The network administrator must understand how the various user groups differ in their specialized needs for the mix of LAN and WAN resources, and find a way to meet, or better yet exceed, these requirements while planning for growth as well. The following important factors must be considered when defining and researching business requirements for the purposes of internetwork design or refinement:

Availability Because networks are so heavily relied upon—they’re ideally up and running 24 hours a day, 365 days a year—failures and down time must be minimized. It’s also vital that when a failure does occur, it’s easy to isolate so that the time needed to troubleshoot the problem is reduced.

Bandwidth Accurately determining the actual and eventual bandwidth requirements with information gathered from both users and management is crucial. It can be advantageous to contract with a service provider to establish connectivity between remote sites. Bandwidth considerations are also an important element for the next consideration—cost.

Cost In a perfect world, you could install nothing but Cisco switches that provide switched 100Mbps to each desktop with gigabit speeds between data closets and remote offices. However, since the world's not perfect and budget constraints often simply won't allow for doing that, Cisco offers an abundance of switches and routers tailored to many wallet sizes. This is one major reason why it's so important to accurately assess your actual needs. A budget must be carefully defined when designing an internetwork.

Ease of management The ramifications associated with creating any network connections, such as the degree of difficulty, must be understood and regarded carefully. Factors associated with configuration management include analyses of both the initial configuration and the ongoing configuration tasks related to running and maintaining the entire internetwork. Traffic management issues—the ability to adjust to different traffic rates, especially in bursty networks—also apply here.

Types of application traffic Application traffic is typically composed of packets ranging from small to very large, and the internetwork design must account for the typical traffic type to meet business requirements.

Routing protocols The characteristics of these protocols can cause some ugly problems and steal a lot of precious bandwidth in networks where they’re either not understood or not configured properly.

Implementation Issues

Connectivity between LANs and WANs implies routing. Any WAN design must consider the “off ramp” to the LAN, and that is the router. Here, you look at the routing process inside a router. This process comes into play each time information goes from LAN to WAN and back again. Finally, you look at the switching process inside the router. Cisco devices offer several types of switching paths, and understanding this technology is required to correctly design WAN access devices.

The Routing Process

Routers are Layer 3 devices that are used to forward incoming packets to their destination by using logical addressing. IP addresses are logical addresses. Routers share information about these logical addresses with each other, and this information is stored in route tables. The router uses the route table to map the path through the router to the destination IP address.

Two processes must be present for routing to work properly. First is path determination, which means that the router knows a route that leads to the desired destination address. The second is actually moving the packet from the inbound interface to the proper outbound interface.

For example, suppose Router A forwards a packet destined for 172.16.1.10 to Router B, because Router B has advertised to Router A that it knows a route to that destination. Once Router B has the packet, it looks up the outgoing port associated with the destination address 172.16.1.10. Once the route is found, the router moves the packet to the outgoing interface Serial 1. After the packet reaches interface Serial 1, it is routed on toward the destination of 172.16.1.10.
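The two routing steps described above, path determination followed by forwarding, can be sketched in a few lines. This is a purely illustrative model (the prefixes and interface names are assumptions matching the example, not a real router's table) that resolves a destination to an exit interface with a longest-prefix match:

```python
import ipaddress

ROUTE_TABLE = [
    # (prefix, outgoing interface) -- illustrative entries only
    (ipaddress.ip_network("172.16.1.0/24"), "Serial1"),
    (ipaddress.ip_network("172.16.0.0/16"), "Serial0"),
    (ipaddress.ip_network("0.0.0.0/0"), "Ethernet0"),  # default route
]

def path_determination(dst: str) -> str:
    """Return the exit interface for dst via longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, intf) for net, intf in ROUTE_TABLE if addr in net]
    # The most specific (longest) matching prefix wins.
    _, intf = max(matches, key=lambda m: m[0].prefixlen)
    return intf

print(path_determination("172.16.1.10"))  # Serial1
print(path_determination("172.16.9.9"))   # Serial0
```

Note that 172.16.1.10 matches all three prefixes, but the /24 entry wins because it is the most specific, exactly the behavior a real route table exhibits.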

This is the basic routing process. For routes to be shared among adjacent routers, a routing protocol must be used. Routing protocols are used for routers to be able to calculate, learn, and advertise route table information.

Metrics are associated with each route that is present in the route table. Metrics are calculated by the routing protocol to define the cost of getting to the destination address. Some algorithms use hop count (the number of routers between it and the destination address), whereas others use a composite of several values, such as bandwidth and delay.

Once a metric is assigned to a route, a router advertises this information to all adjacent routers. Thus, each router maintains a topology map of how to get to connected networks. By connected, we do not mean directly connected, but simply that there exists some type of network connection between the destination address and the router.
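The metric exchange described above can be sketched as a tiny distance-vector update: a router merges a neighbor's advertised routes into its own table, adding one hop to reach the neighbor and keeping whichever route is cheapest. The router names and prefixes below are illustrative assumptions:

```python
def update_route_table(table: dict, neighbor: str, advertised: dict) -> dict:
    """Merge routes advertised by a neighbor, keeping the lowest hop count."""
    for network, hops in advertised.items():
        cost = hops + 1  # one extra hop to reach the neighbor itself
        if network not in table or cost < table[network][0]:
            table[network] = (cost, neighbor)
    return table

# Router A already reaches 10.0.0.0/8 in 1 hop via Serial0.
table = {"10.0.0.0/8": (1, "Serial0")}

# Router B advertises two networks with its own hop counts.
update_route_table(table, "RouterB", {"172.16.0.0/16": 2, "10.0.0.0/8": 3})

print(table["172.16.0.0/16"])  # (3, 'RouterB') -- new route learned
print(table["10.0.0.0/8"])     # (1, 'Serial0') -- existing route is cheaper
```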

Switching Modes of Routers

The switching path—the logical path that a packet follows when it's switched through a router—takes place at Layer 3 of the OSI model. There are many types of switching, and it is important not to confuse them. This section explains the methods routers use to move a packet from an incoming interface to the correct outgoing interface. By using switching paths, extra lookups in route tables are eliminated, and processing overhead is reduced.

The router’s physical design and its interfaces allow for a variety of switching processes on the router. This frees up the processor to focus on other tasks instead of looking up the source and destination information for every packet that enters the router.

We have already discussed router architecture, so let’s focus directly on the details of each switching type. The most processor-intensive method (process switching) is discussed first; the discussion ends with the most efficient method of switching (Cisco Express Forwarding).

Process Switching

As a packet arrives on an interface to be forwarded, it eventually is copied to the router’s process buffer, and the router performs a lookup on the Layer 3 address. (Eventually means that there are a few steps before the packet is copied to the route processor buffer.) Using the route table, an exit interface is associated with the destination address. The processor encapsulates and forwards the packet with the added new information to the exit interface while the router initializes the fast switching cache. Subsequent packets that require process switching and are bound for the same destination address follow the same path as the first packet.

Overhead ensues because the processor is occupied with Layer 3 lookups— determining which interface the packet should exit from and calculating the cyclical redundancy checksum (CRC) for the packets. If every packet required all of that to be routed, the processor could get really bogged down. The answer is to use fast switching whenever and wherever possible.

Fast Switching

Fast switching is an enhancement of process switching. The first packet of a new session is copied to the interface processor buffer. The packet is then copied to the bus and sent to the switch processor. A check is made against the other switching caches (for example, silicon or autonomous) for an existing entry. Because no entries exist within those more efficient caches, fast switching is used. The packet header is copied and sent to the route processor, where the fast switching cache resides. Assuming that an entry exists in the cache, the packet is encapsulated for fast switching and sent back to the switch processor. Finally, the packet is copied to the buffer on the outgoing interface processor. From there, it is sent out the interface.

Fast switching is on by default for lower end routers like the 4000/2500 series. Sometimes, it’s necessary to turn fast switching off when troubleshooting network problems. Because packets don’t move across the route processor after the first packet is process-switched, you can’t see them with packet-level tracing. It’s also helpful to turn off fast switching if the interface card’s memory is limited or consumed, or to alleviate congestion when low-speed interfaces become flooded with information from high-speed interfaces.

Autonomous Switching

Autonomous switching works by comparing packets against the autonomous switching cache. You probably recognize a pattern by now. When a packet arrives on the interface processor, it checks the switching cache closest to it. So far, all of these caches reside on other processor boards. The same is true of autonomous switching. The silicon switching cache is checked first; then the autonomous cache is checked. The packet is encapsulated for autonomous switching and sent back to the interface processor. Notice that this time, the packet header was not sent to the route processor.

Autonomous switching is available only on AGS+ and Cisco 7000 series routers that have high-speed controller interface cards.

Silicon Switching

Silicon switching is available only on the Cisco 7000 with an SSP (Silicon Switch Processor). Silicon-switched packets are compared to the silicon switching cache on the SSE (silicon switching engine). The SSP is a dedicated switch processor that offloads the switching process from the route processor, which provides a fast switching solution. However, packets must still traverse the backplane of the router to get to the SSP and then back to the exit interface.

Optimum Switching

Optimum switching follows the same procedure as the other switching algorithms. When a new packet enters the interface, it is compared to the optimum switching cache, rewritten, and sent to the chosen exit interface. Other packets associated with the same session then follow the same path. All processing is carried out on the interface processor, including the CRC. Optimum switching is faster than both fast switching and NetFlow switching, unless you have implemented several access lists.

Optimum switching replaces fast switching on the high-end routers. As with fast switching, optimum switching also needs to be turned off to view packets while troubleshooting a network problem.

Distributed Switching

Distributed switching happens on the VIP (Versatile Interface Processor) cards (which have a switching processor onboard), so it’s very efficient. All required processing is done right on the VIP processor, which maintains a copy of the router’s routing cache. With this arrangement, even the first packet doesn’t need to be sent to the route processor to initialize the switching path, as it does with the other switching algorithms. Router efficiency increases as more VIP cards are added.

NetFlow Switching

NetFlow switching is really more of an administrative tool than a performance-enhancement tool. It collects detailed data for use with circuit accounting and application-utilization information. Due to all the additional data that NetFlow collects (and may export), expect an increase in router overhead—possibly as much as a five percent increase in CPU utilization.

NetFlow switching can be configured on most interface types and can be used in a switched environment. ATM, LAN, and VLAN technologies all support NetFlow switching; the Cisco 7200 and 7500 series routers provide its implementation.

NetFlow switching does much more than just switching—it also gathers statistical data, including protocol, port, and user information. All of this is stored in the NetFlow switching cache according to the individual flow that’s defined by the packet information (destination address, source address, protocol, source and destination port, and the incoming interface). The data can be sent to a network management station to be stored and processed there.
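The flow definition above can be sketched as a cache keyed on exactly those packet fields, with per-flow statistics accumulating as packets arrive. The addresses and ports below are illustrative assumptions:

```python
from collections import defaultdict

# A flow is identified by (src addr, dst addr, protocol, src port,
# dst port, incoming interface), as the text describes.
flow_cache = defaultdict(lambda: {"packets": 0, "bytes": 0})

def account_packet(src, dst, proto, sport, dport, in_intf, size):
    """Accumulate per-flow statistics in the NetFlow-style cache."""
    key = (src, dst, proto, sport, dport, in_intf)
    flow_cache[key]["packets"] += 1
    flow_cache[key]["bytes"] += size

# Two packets of the same conversation land in one flow entry.
account_packet("10.1.1.5", "172.16.1.10", "tcp", 34512, 80, "Ethernet0", 1500)
account_packet("10.1.1.5", "172.16.1.10", "tcp", 34512, 80, "Ethernet0", 400)

print(len(flow_cache))  # 1
key = ("10.1.1.5", "172.16.1.10", "tcp", 34512, 80, "Ethernet0")
print(flow_cache[key])  # {'packets': 2, 'bytes': 1900}
```

A management station would periodically receive exported copies of these entries for accounting and utilization reporting.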

The NetFlow switching process is very efficient. An incoming packet is processed by the fast or optimum switching process, and then all path and packet information is copied to the NetFlow cache. The remaining packets that belong to the flow are compared to the NetFlow cache and forwarded accordingly.

The first packet that’s copied to the NetFlow cache contains all security and routing information, and if an access list is applied to an interface, the first packet is matched against it. If it matches the access list criteria, the cache is flagged so that the remaining packets in the flow can be switched without being compared to the list. (This is very effective when a large amount of access list processing is required.)

Do you remember reading that distributed switching on VIP cards is really efficient because it lessens the load on the Route/Switch Processor (RSP)? Well, NetFlow switching can also be configured on VIP interfaces.

NetFlow gives you amenities such as the security flag in the cache that allows subsequent packets of an established flow to avoid access list processing. It’s comparable to optimum and distributed switching and is much better if access lists (especially long ones) are placed in the switching path. However, the detailed information NetFlow gathers and exports does load down the system, so plan carefully before implementing NetFlow switching on a router.

Cisco Express Forwarding

Cisco Express Forwarding (CEF) is a switching function designed for high-end backbone routers. It operates at Layer 3 of the OSI model, and its biggest asset is its ability to remain stable in a large network. It is also more efficient than both the fast and optimum default switching paths.

CEF is wonderfully stable in large environments because it doesn’t rely on cached information. Instead of using a CEF cache, it refers to two alternate resources. The Forwarding Information Base (FIB) consists of information duplicated from the IP route table. Every time the routing information changes, the changes are propagated to the FIB. Thus, instead of comparing old cache information, a packet looks to the FIB for its forwarding information. CEF stores the Layer 2 MAC addresses of connected routers (or next hop) in the adjacency table.
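The two CEF structures just described can be sketched side by side: a FIB that is refreshed from the route table whenever routing changes, and an adjacency table mapping each next hop to its Layer 2 rewrite information. All names and addresses here are illustrative assumptions:

```python
route_table = {"172.16.0.0/16": "10.0.0.2"}    # prefix -> next-hop address
adjacency = {"10.0.0.2": "00:11:22:33:44:55"}  # next hop -> MAC address
fib: dict[str, str] = {}

def sync_fib():
    """Propagate every routing change straight into the FIB (no stale cache)."""
    fib.clear()
    fib.update(route_table)

def cef_forward(prefix: str) -> str:
    """FIB lookup for the next hop, then adjacency lookup for the L2 rewrite."""
    next_hop = fib[prefix]
    return adjacency[next_hop]

sync_fib()
print(cef_forward("172.16.0.0/16"))  # 00:11:22:33:44:55
```

The key design point is visible here: forwarding consults the FIB, which mirrors the route table, rather than a demand-built cache, so CEF never serves stale entries the way cache-based switching paths can.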

Even though CEF features advanced capabilities, you should consider several restrictions before implementing CEF on a router. According to the document "Cisco Express Forwarding," available on Cisco Connection Online (CCO) at http://www.cisco.com/warp/public/cc/pd/iosw/iore/tech/cef_wp.htm, system requirements are quite high: the route processor should have at least 128MB of RAM, and the line cards should have 32MB each. CEF takes the place of VIP distributed switching and fast switching on VIP interfaces. The following features aren't supported by CEF:

  • ATM Data Exchange Interface (DXI)

  • Token Ring

  • Multi-point PPP

  • Access lists on the Gigabit Switch Router (GSR)

  • Policy routing

  • Network Address Translation (NAT)

  • Switched Multimegabit Data Service (SMDS)

Nevertheless, CEF does many things—even load balancing is possible through FIB. If there are multiple paths to the same destination, the IP route table knows about them all. This information is also copied to the FIB, which CEF consults for its switching decisions.

Load balancing can be configured in two different modes. The first mode is load balancing based on the destination (called per-destination load balancing); the second mode is based on the packet (called per-packet load balancing). Per-destination load balancing is on by default and must be turned off to enable per-packet load balancing.
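The difference between the two modes can be sketched with two equal paths to the same destination: per-destination mode hashes the source/destination pair so a given conversation sticks to one path, while per-packet mode rotates through the paths. The interface names are illustrative assumptions:

```python
import itertools

PATHS = ["Serial0", "Serial1"]      # two equal-cost paths from the FIB
_round_robin = itertools.cycle(PATHS)

def per_destination(src: str, dst: str) -> str:
    """Default mode: the same src/dst pair always maps to the same path."""
    return PATHS[hash((src, dst)) % len(PATHS)]

def per_packet() -> str:
    """Per-packet mode: each packet takes the next path in turn."""
    return next(_round_robin)

a = per_destination("10.1.1.1", "172.16.1.10")
b = per_destination("10.1.1.1", "172.16.1.10")
print(a == b)                       # True -- the flow stays on one path
print(per_packet(), per_packet())   # alternates between Serial0 and Serial1
```

Per-destination mode preserves packet ordering within a conversation, which is why it is the default; per-packet mode spreads load more evenly but can reorder packets of a single flow.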

Accounting may also be configured for CEF, which furnishes you with detailed statistics about CEF traffic. Two specifications can be made when collecting CEF statistics:

  • Collect information on traffic that’s forwarded to a specific destination.

  • Collect statistics for traffic that’s forwarded through a specific destination.

CEF was designed for large networks. If reliable and redundant switching paths are necessary, CEF is the way to go. However, keep in mind that its hardware requirements are significant, and it lacks support for many Cisco IOS features.




CCDA: Cisco Certified Design Associate Study Guide, 2nd Edition (640-861)
ISBN: 0782142001
Year: 2002
Pages: 201