Foundation Topics


Modular Network Design

Recall from Chapter 1, "Campus Network Overview," that a network is best constructed and maintained using a three-tiered hierarchical approach. At first, making a given network conform to a layered architecture might seem confusing.

You can design a campus network in a logical manner, using a modular approach. In this approach, each layer of the hierarchical network model can be broken into basic functional units. These units, or modules, then can be sized appropriately and connected, while allowing for future scalability and expansion.

You can divide enterprise campus networks into the following basic elements:

  • Switch block A group of access-layer switches, together with their distribution switches

  • Core block The campus network's backbone

Other related elements can exist. Although these elements are not part of the basic campus network infrastructure, they can be designed separately and added to the network design. These elements are as follows:

  • Server farm block A group of enterprise servers, along with their access and distribution (layer) switches.

  • Management block A group of network-management resources, along with their access and distribution switches.

  • Enterprise edge block A collection of services related to external network access, along with their access and distribution switches.

  • Service provider edge block The external network services contracted or used by the enterprise network. These are the services with which the enterprise edge block interfaces.

The collection of all these elements is also known as the Enterprise Composite Network Model. Figure 2-1 shows a modular campus design's basic structure. Notice how each of the building-block elements can be confined to a certain area or function. Also notice how each is connected into the core block.

Figure 2-1. Modular Approach to Campus Network Design


Switch Block

Recall how a campus network is divided into access, distribution, and core layers. The switch block contains switching devices from the access and distribution layers. All switch blocks then connect into the core block, providing end-to-end connectivity across the campus.

Switch blocks contain a balanced mix of Layer 2 and Layer 3 functionality, as might be present in the access and distribution layers. Layer 2 switches located in wiring closets (access layer) connect end users to the campus network. With one end user per switch port, each user receives dedicated bandwidth access.

Upstream, each access-layer switch connects to devices in the distribution layer. Here, Layer 2 functionality transports data among all connected access switches at a central connection point. Layer 3 functionality also can be provided in the form of routing and other networking services (security, quality of service [QoS], and so on). Therefore, a distribution-layer device should be a multilayer switch. Layer 3 functionality is discussed in more detail in Chapter 13, "Multilayer Switching."

The distribution layer also shields the switch block from certain failures or conditions in other parts of the network. For example, broadcasts are not propagated from the switch block into the core and other switch blocks. Therefore, the Spanning Tree Protocol (STP) is confined to each switch block, where a virtual LAN (VLAN) is bounded, keeping the spanning tree domain well defined and controlled.

Access-layer switches can support VLANs by assigning individual ports to specific VLAN numbers. In this way, stations connected to the ports configured for the same VLAN can share the same Layer 3 subnet. However, be aware that a single VLAN can support multiple subnets. Because the switch ports are configured for a VLAN number only (and not a network address), any station connected to a port can present any subnet address range. The VLAN functions as traditional network media and allows any network address to connect.
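As a quick sketch of this idea (the interface and VLAN numbers here are only examples), an access-layer port is placed into a VLAN like this:

```
Switch(config)# interface FastEthernet 0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
```

Notice that the port is assigned only the VLAN number; no network address is configured on the port itself, which is why any subnet address can appear on a station connected there.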

In this network design model, you should not extend VLANs beyond distribution switches. The distribution layer always should be the boundary of VLANs, subnets, and broadcasts. Although Layer 2 switches can extend VLANs to other switches and other layers of the hierarchy, this activity is discouraged. VLAN traffic should not traverse the network core. (Trunking, or the capability to carry many VLANs over a single connection, is discussed in Chapter 6, "VLANs and Trunks.")

Sizing a Switch Block

Containing access- and distribution-layer devices, the switch block is simple in concept. You should consider several factors, however, to determine an appropriate size for the switch block. The range of available switch devices makes the switch block size very flexible. At the access layer, switch selection usually is based on port density or the number of connected users.

The distribution layer must be sized according to the number of access-layer switches that are collapsed or brought into a distribution device. Consider the following factors:

  • Traffic types and patterns

  • Amount of Layer 3 switching capacity at the distribution layer

  • Number of users connected to the access-layer switches

  • Geographical boundaries of subnets or VLANs

  • Size of spanning-tree domains

Designing a switch block based solely on the number of users or stations it contains is usually inaccurate. As a rule of thumb, no more than 2,000 users should be placed within a single switch block. Although this figure is useful for an initial estimate of a switch block's size, it doesn't take into account the many dynamic processes that occur on a functioning network.

Instead, switch block size should be based primarily on the following:

  • Traffic types and behavior

  • Size and number of common workgroups

Because of the dynamic nature of networks, a switch block can end up being too large for the load placed upon it. Also, the number of users and applications on a network tends to grow over time, so a provision to break up or downsize a switch block might become necessary. Again, base these decisions on the actual traffic flows and patterns present in the switch block. You can estimate, model, or measure these parameters with network-analysis applications and tools.

Note

The actual network-analysis process is beyond the scope of this book. Traffic estimation, modeling, and measurement are complex procedures, each requiring its own dedicated analysis tool.


Generally, a switch block is too large if the following conditions are observed:

  • The routers (multilayer switches) at the distribution layer become traffic bottlenecks. This congestion could be because of the volume of inter-VLAN traffic, intensive CPU processing, or switching times required by policy or security functions (access lists, queuing, and so on).

  • Broadcast or multicast traffic slows the switches in the switch block. Broadcast and multicast traffic must be replicated and forwarded out many ports. This process requires some overhead in the multilayer switch, which can become too great if significant traffic volumes are present.

Access switches can have one or more redundant links to distribution-layer devices. This situation provides a fault-tolerant environment in which access layer connectivity is preserved on a secondary link if the primary link fails. In fact, because Layer 3 devices are used in the distribution layer, traffic can be load-balanced across both redundant links using redundant gateways.

Generally, you should provide two distribution switches in each switch block for redundancy, with each access-layer switch connecting to the two distribution switches. Then, each Layer 3 distribution switch can load-balance traffic over its redundant links into the core layer (also Layer 3 switches) using routing protocols.

Figure 2-2 shows a typical switch block design. At Layer 3, the two distribution switches can use one of several redundant gateway protocols to provide an active IP gateway and a standby gateway at all times. These protocols are discussed in Chapter 14, "Router Redundancy and Load Balancing."
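Although the details are left to Chapter 14, a minimal sketch of one such protocol (HSRP; all addresses and group numbers here are hypothetical) on the two distribution switches might look like this:

```
! Distribution switch 1 -- becomes the active gateway for VLAN 10
interface Vlan10
 ip address 192.168.10.2 255.255.255.0
 standby 10 ip 192.168.10.1
 standby 10 priority 110
 standby 10 preempt

! Distribution switch 2 -- becomes the standby gateway
interface Vlan10
 ip address 192.168.10.3 255.255.255.0
 standby 10 ip 192.168.10.1
```

End users in VLAN 10 would point at the shared address 192.168.10.1, so a failure of either distribution switch does not change their configured gateway.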

Figure 2-2. Typical Switch Block Design


Core Block

A core block is required to connect two or more switch blocks in a campus network. Because all traffic passing to and from all switch blocks, server farm blocks, and the enterprise edge block must cross the core block, the core must be as efficient and resilient as possible. The core is the campus network's basic foundation and carries much more traffic than any other block.

A network core can use any technology (frame, cell, or packet) to transport campus data. Many campus networks use Gigabit and 10-Gigabit Ethernet as a core technology. Ethernet core blocks are reviewed at length here.

Recall that both the distribution and core layers provide Layer 3 functionality. Individual IP subnets connect all distribution and core switches. At least two subnets should be used to provide resiliency and load balancing into the core, although a single VLAN can be used. Because VLANs terminate at the distribution layer, only routed traffic passes into the core.
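For example (the addressing here is hypothetical), each distribution-to-core link can be configured as a routed interface in its own small subnet:

```
! Distribution switch uplink to core switch A
interface GigabitEthernet 1/1
 no switchport
 ip address 192.168.1.1 255.255.255.252

! Uplink to core switch B, in a second subnet
interface GigabitEthernet 1/2
 no switchport
 ip address 192.168.1.5 255.255.255.252
```

The `no switchport` command makes each uplink a pure Layer 3 interface with no inherent VLAN, so no VLAN extends across it.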

The core block might consist of a single multilayer switch, taking in the two redundant links from the distribution-layer switches. Because of the importance of the core block in a campus network, you should implement two or more identical switches in the core to provide redundancy.

The links between layers also should be designed to carry at least the amount of traffic load handled by the distribution switches. The links between core switches in the same core subnet should be of sufficient size to carry the aggregate amount of traffic coming into the core switch. Consider the average link utilization, but allow for future growth. An Ethernet core allows simple and scalable upgrades in capacity; consider the progression from Ethernet to Fast Ethernet to Fast EtherChannel to Gigabit Ethernet to Gigabit EtherChannel, and so on.

Two basic core block designs are presented in the following sections, each designed around a campus network's size:

  • Collapsed core

  • Dual core

Collapsed Core

A collapsed core block is one in which the hierarchy's core layer is collapsed into the distribution layer. Here, both distribution and core functions are provided within the same switch devices. This situation usually is found in smaller campus networks, where a separate core layer (and additional cost or performance) is not warranted.

Figure 2-3 shows the basic collapsed core design. Although the distribution- and core-layer functions are performed in the same device, keeping these functions distinct and properly designed is important. Note also that the collapsed core is not an independent building block but is integrated into the distribution layer of the individual standalone switch blocks.

Figure 2-3. Collapsed Core Block Design


In the collapsed core design, each access-layer switch has a redundant link to each distribution- and core-layer switch. All Layer 3 subnets present in the access layer terminate at the distribution switches' Layer 3 ports, as in the basic switch block design. The distribution and core switches connect to each other by one or more links, completing a path to use during a redundancy failover.

Connectivity between the distribution and core switches is accomplished using Layer 3 links (Layer 3 switch interfaces, with no inherent VLANs). The Layer 3 switches route traffic to and from each other directly. Figure 2-3 shows the extent of two VLANs. Notice that VLAN A and VLAN B each extend only from the access-layer switches where their respective users are located, down to the distribution layer over the Layer 2 uplinks. The VLANs terminate there because the distribution layer uses Layer 3 switching. This is good because it limits the broadcast domains, removes the possibility of Layer 2 bridging loops, and provides fast failover if one uplink fails.

At Layer 3, redundancy is provided through a redundant gateway protocol for IP (covered in Chapter 14). In some of these protocols, the two distribution switches present a common default gateway address to the access layer, but only one switch is active at any time. In other protocols, both switches can be active, load-balancing traffic. If one of the combined distribution/core switches fails, connectivity to the core is maintained because the redundant Layer 3 switch is still available.

Dual Core

A dual core connects two or more switch blocks in a redundant fashion. Although the collapsed core can connect two switch blocks with some redundancy, the core is not scalable when more switch blocks are added. Figure 2-4 illustrates the dual core. Notice that this core appears as an independent module and is not merged into any other block or layer.

Figure 2-4. Dual Network Core Design


In the past, the dual core usually was built with Layer 2 switches to provide the simplest and most efficient throughput, with Layer 3 switching provided in the distribution layer. Multilayer switches have since become cost-effective while offering high switching performance, so building a dual core with multilayer switches is both possible and recommended.

The dual core uses two identical switches to provide redundancy. Redundant links connect each switch block's distribution-layer portion to each of the dual core switches. The two core switches connect by a common link. In a Layer 2 core, the two switches cannot be linked together, to avoid any bridging loops. A Layer 3 core uses routing rather than bridging, so bridging loops are not an issue.

In the dual core, each distribution switch has two equal-cost paths to the core, allowing the available bandwidth of both paths to be used simultaneously. Both paths remain active because the distribution and core layers use Layer 3 devices that can manage equal-cost paths in routing tables. The routing protocol in use determines the availability or loss of a neighboring Layer 3 device. If one switch fails, the routing protocol reroutes traffic using an alternate path through the remaining redundant switch.
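As a sketch (the process ID, network, and area values are hypothetical), a distribution switch might run OSPF over its two routed uplinks; because the uplinks have equal cost, both routes toward the core are installed and used:

```
! Hypothetical OSPF configuration on a distribution switch
router ospf 1
 ! Advertise the two uplink subnets toward the core
 network 192.168.1.0 0.0.0.255 area 0
! IOS installs up to four equal-cost OSPF paths by default,
! so both core uplinks carry traffic simultaneously.
```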

Notice again in Figure 2-4 the extent of the access VLANs. Although Layer 3 devices have been added into a separate core layer, VLANs A and B still extend only from the Layer 2 access-layer switches down to the distribution layer. Although the distribution-layer switches use Layer 3 switch interfaces to provide Layer 3 functionality to the access layer, these links actually pass traffic only at Layer 2.

Core Size in a Campus Network

The dual core is made up of redundant switches and is bounded and isolated by Layer 3 devices. Routing protocols determine paths and maintain the core's operation. As with any network, you must pay some attention to the overall design of the routers and routing protocols. Because routing protocols propagate updates throughout the network whenever the topology changes, the network's size (the number of routers) affects routing protocol performance as updates are exchanged and network convergence takes place.

Although the network shown previously in Figure 2-4 might look small, with only two switch blocks of two Layer 3 switches (route processors within the distribution-layer switches) each, large campus networks can have many switch blocks connected into the core block. If you think of each multilayer switch as a router, you will recall that each route processor must communicate with and keep information about each of its directly connected peers. Most routing protocols have practical limits on the number of peer routers that can be directly connected on a point-to-point or multiaccess link. In a network with a large number of switch blocks, the number of connected routers can grow quite large. Should you be concerned about a core switch peering with too many distribution switches?

No, because the actual number of directly connected peers is quite small, regardless of the campus network size. Access-layer VLANs terminate at the distribution-layer switches. The only peering routers at that boundary are pairs of distribution switches, each providing routing redundancy for each of the access-layer VLAN subnets. At the distribution and core boundary, each distribution switch connects to only two core switches over Layer 3 switch interfaces. Therefore, only pairs of router peers are formed.

When multilayer switches are used in the distribution and core layers, the routing protocols running in both layers regard each pair of redundant links between layers as equal-cost paths. Traffic is routed across both links in a load-sharing fashion, utilizing the bandwidth of both.

One final core-layer design point is to scale the core switches to match the incoming load. At a minimum, each core switch must handle switching each of its incoming distribution links at 100 percent capacity.

Other Building Blocks

Other resources in the campus network can be identified and pulled into the building block model. For example, a server farm can be made up of servers running applications that users from all across the enterprise access. Most likely, those servers need to be scalable for future expansion, need to be highly accessible, and need to benefit from traffic and security policy control.

To meet these needs, you can group the resources into building blocks that are structured and placed just like regular switch block modules. These blocks should have a distribution layer of switches and redundant uplinks directly into the core layer, and should contain enterprise resources.

A list of the most common examples follows. Refer back to Figure 2-1 to see how each of these is grouped and connected into the campus network. Most of these building blocks are present in medium and large campus networks. Be familiar with the concept of pulling an enterprise function into its own switch block, as well as the structure of that block.

Server Farm Block

Any server or application accessed by most of the enterprise users usually already belongs to a server farm. The entire server farm can be identified as its own switch block and given a layer of access switches uplinked to dual distribution switches (multilayer). Connect these distribution switches into the core layer with redundant high-speed links.

Individual servers can have single network connections to one of the distribution switches. However, this presents a single point of failure. If a redundant server is used, it should connect to the alternate distribution switch. Another more resilient approach is to give each server dual network connections, one going to each distribution switch. This is known as dual-homing the servers.
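As a sketch (the interface and VLAN numbers are hypothetical), dual-homing amounts to giving the server one access port on each distribution switch, with both ports in the same VLAN:

```
! Distribution switch 1 -- first server NIC
interface GigabitEthernet 2/1
 switchport mode access
 switchport access vlan 20

! Distribution switch 2 -- second server NIC, same VLAN
interface GigabitEthernet 2/1
 switchport mode access
 switchport access vlan 20
```

If either distribution switch fails, the server remains reachable through its connection to the other switch.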

Examples of enterprise servers include corporate e-mail, intranet services, Enterprise Resource Planning (ERP) applications, and mainframe systems. Notice that each of these is an internal resource that normally would be located inside a firewall or secured perimeter.

Network Management Block

Often campus networks must be monitored through the use of network-management tools so that performance and fault conditions can be measured and detected. You can group the entire suite of network-management applications into a single network management switch block. This is the reverse of a server farm block because the network-management tools are not enterprise resources accessed by most of the users. Instead, these tools go out to access other network devices, application servers, and user activity in all other areas of the campus network.

The network management switch block usually has a distribution layer that connects into the core switches. Because these tools are used to detect equipment and connectivity failures, availability is important. Redundant links and redundant switches should be used.

Examples of network-management resources in this switch block include the following:

  • Network-monitoring applications

  • System logging (syslog) servers

  • Authentication, authorization, and accounting (AAA) servers

  • Policy-management applications

  • System administration and remote-control services

  • Intrusion-detection management applications

Note

You can easily gather network-management resources into a single switch block to centralize these functions. Each switch and router in the network must have an IP address assigned for management purposes. In the past, it was easy to "centralize" all these management addresses and traffic into a single "management" VLAN, which extended from one end of the campus to the other.

The end-to-end VLAN concept is now considered a poor practice. VLANs should be isolated, as described in Chapter 1. Therefore, assigning management addresses to as many VLANs or subnets as is practical and appropriate for a campus network is now acceptable.
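For example (the addresses and VLAN number are hypothetical), a Layer 2 switch in one block might take its management address from a VLAN local to that block rather than from a campus-wide management VLAN:

```
! Hypothetical management interface on a Layer 2 access switch
interface Vlan99
 ip address 192.168.99.10 255.255.255.0
 no shutdown
!
! Gateway for management traffic leaving the local subnet
ip default-gateway 192.168.99.1
```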


Enterprise Edge Block

At some point, most campus networks must connect to service providers for access to external resources. This is usually known as the edge of the enterprise or campus network. These resources are available to the entire campus and should be centrally accessible as an independent switch block connected to the network core.

Edge services usually are divided into these categories:

  • Internet access Supports outbound traffic to the Internet, as well as inbound traffic to public services, such as e-mail and extranet web servers. This connectivity is provided by one or more Internet service providers (ISP). Network security devices generally are placed here.

  • Remote access and VPN Supports inbound dialup access for external or roaming users through the Public Switched Telephone Network (PSTN). If voice traffic is supported over the campus network, Voice over IP (VoIP) gateways connect to the PSTN here. In addition, virtual private network (VPN) devices connected to the Internet support secure tunneled connections to remote locations.

  • E-commerce Supports all related web, application, and database servers and applications, as well as firewalls and security devices. This switch block connects to one or more ISPs.

  • WAN access Supports all traditional WAN connections to remote sites. This can include Frame Relay, ATM, leased line, ISDN, and so on.

Service Provider Edge Block

Each service provider that connects to an enterprise network must also have a hierarchical network design of its own. A service provider network meets an enterprise at the service provider edge, connecting to the enterprise edge block.

Studying a service provider network's structure isn't necessary because it should follow the same design principles presented here. In other words, a service provider is just another enterprise or campus network itself. Just be familiar with the fact that a campus network has an edge block, where it connects to the edge of each service provider's network.

Can I Use Layer 2 Distribution Switches?

This chapter covers the best practice design that places Layer 3 switches at both the core and distribution layers. What would happen if you could not afford Layer 3 switches at the distribution layer?

Figure 2-5 shows the dual-core campus network with Layer 2 distribution switches. Notice how each access VLAN extends not only throughout the switch block but also into the core. This is because the VLAN terminates at a Layer 3 boundary present only in the core. As an example, VLAN A's propagation is shaded in the figure.

Figure 2-5. Design Using Layer 2 Distribution Switches


Here are some implications with this design:

  • Redundant Layer 3 gateways still can be used in the core.

  • Each VLAN propagates across the redundant trunk links from the access to the core layers. Because of this, Layer 2 bridging loops form.

  • The STP must run in all layers to prevent Layer 2 loops. This causes traffic on some links to be blocked. As a result, only one of every two access-layer switch uplinks can be used at any time.

  • When Layer 2 uplinks go down, the STP can take several seconds to unblock redundant links, causing downtime.

  • Access VLANs can propagate from one end of the campus to the other, if necessary.

  • Broadcast traffic on any access-layer VLAN also reaches into the core layer. Bandwidth on uplinks and within the core can be wasted unnecessarily.

Evaluating an Existing Network

If you are building an enterprise network from scratch, you might find that it is fairly straightforward to build it in a hierarchical fashion. After all, you can begin with switches in the core layer and fan out into lower layers to meet the users, server farms, and service providers.

In the real world, you might be more likely to find existing networks that need an overhaul to match the hierarchical model. Hopefully, if you are redesigning your own network, you already know its topology and traffic patterns. If you are working on someone else's network, you might not know about its structure.

This section provides some basic information on two tasks:

  • Discovering the existing topology

  • Planning a migration to a better campus model

Discovering the Network Topology

Whether or not a diagram of the network is available, you should consider tracing out the topology for yourself. For one thing, network documentation tends to become out of date, or it might not show the type of information you need.

Some network administrators draw up a diagram that shows only the physical cabling between network devices. That might benefit someone who is working with the cabling, but it might not show any of the logical aspects of the network. After all, switched networks can be cabled together and then configured into many logical topologies.

As you discover or trace out a network, you might end up building several diagrams. One diagram might show all the network devices and only the physical cabling between them. Further diagrams might show Layer 2 virtual LANs (VLANs) and how they extend through the network.

To discover an existing network, you can connect a computer to any switch as a starting point and begin to "walk" the topology. Cisco devices periodically send information about themselves to any neighboring devices. This is done with the Cisco Discovery Protocol (CDP), which is covered in more detail in the "Inter-Switch Communication: Cisco Discovery Protocol" section in Chapter 4, "Switch Configuration."

CDP is enabled by default on all Cisco switches and routers, so, chances are, you will be able to make use of it right away. With CDP, a switch becomes aware of only the devices that are directly connected to it. Therefore, you walk the topology one "hop" at a time: connect to one switch, find its neighbors, and then connect to them one at a time.

Figure 2-6 shows this process being used to discover an example network. (The arrows in the sequence illustrated in Figure 2-6 point out where you are positioned as the topology is discovered.) A laptop PC has been connected to the console connection of an arbitrary switch, Switch-A. Here, Switch-A is a Catalyst 3550, determined either by inspection or from the show version command.

Figure 2-6. Network Discovery with CDP


At the top of the figure, you don't know whether Switch-A is in the core, distribution, or access layer. Actually, you don't even know whether this network has been built in layers.

When you are connected and in the privileged EXEC or enable mode, you can begin looking for CDP information by using the show cdp neighbors command. At Switch-A, suppose the command had the output in Example 2-1.

Example 2-1. show cdp neighbors Command Output Reveals CDP Information
Switch-A# show cdp neighbors
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone

Device ID        Local Intrfce     Holdtme    Capability   Platform   Port ID
Switch-B         Gig 0/1           152          R S I      WS-C4506   Gig 1/1
Switch-A#

Based on the neighbors listed, you should be able to draw the connections to the neighboring switches and detail the names and model of those switches. Notice that the CDP neighbor information shows the local switch interface as well as the neighbor's interface for each connection. This is helpful when you move to a neighbor and need to match the connections from its viewpoint.

From the output in Example 2-1, it's apparent that Switch-A has a neighbor called Switch-B on interface GigabitEthernet 0/1. Switch-B is a Catalyst 4506.

Now you can use a variation of the command to see more detail about each neighbor. The show cdp neighbors [interface mod/num] detail command also shows the neighbor's software release, interface settings, and its IP address, as demonstrated in Example 2-2.

Example 2-2. show cdp neighbors detail Command Output Reveals Detailed Information About Neighboring Switches
Switch-A# show cdp neighbors detail
-------------------------
Device ID: Switch-B
Entry address(es):
  IP address: 192.168.254.17
Platform: cisco WS-C4506,  Capabilities: Router Switch IGMP
Interface: GigabitEthernet0/1,  Port ID (outgoing port): GigabitEthernet1/1
Holdtime : 134 sec

Version :
Cisco Internetwork Operating System Software
IOS (tm) Catalyst 4000 L3 Switch Software (cat4000-I9S-M), Version 12.2(18)EW,
EARLY DEPLOYMENT RELEASE SOFTWARE (fc1)
TAC Support: http://www.cisco.com/tac
Copyright (c) 1986-2004 by cisco Systems, Inc.
Compiled Fri 30-Jan-04 02:04 by hqluong

advertisement version: 2
VTP Management Domain: ''
Duplex: full
Management address(es):

Switch-A#

When you know the IP address of a neighboring device, you can open a Telnet session from the current switch to the neighboring switch. (This assumes that the neighboring switch has been configured with an IP address and a Telnet password on its vty lines.) Choose a neighbor and use the telnet ip-address command to move to the neighbor and continue your discovery. At Switch-B (the middle of Figure 2-6), you might see the CDP neighbor output in Example 2-3.
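For example, hopping from Switch-A to Switch-B (using the address learned in Example 2-2) might look something like this:

```
Switch-A# telnet 192.168.254.17
Trying 192.168.254.17 ... Open

User Access Verification

Password:
Switch-B>
```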

Example 2-3. show cdp neighbors Command Output Display for Switch-B
Switch-B# show cdp neighbors
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone

Device ID        Local Intrfce     Holdtme    Capability   Platform     Port ID
Switch-A         Gig 1/1           105          S I        WS-C3550-4   Gig 0/1
Switch-C         Gig 2/1           139          S I        WS-C3550-4   Gig 0/1
Router           Gig 3/1           120          R          Cisco 2610   Fas 0/0
Switch-B#

Next, the show cdp neighbors detail command reveals that Switch-C has the IP address 192.168.254.199, so you can open a Telnet session there. Switch-C might show only one neighbor (Switch-B), so you have reached the end of the switched network topology. At the bottom portion of Figure 2-6, the physical network has been discovered and drawn.

Tip

You should assess the utilization or bandwidth used over various connections in the network. This is especially true of switch-to-switch links; if they are heavily used, you might want to plan for expansion. You also might want to get an idea of the total traffic being passed to and from individual server or user connections.

You can do this by using a network or protocol analyzer that is set up to monitor specific switch interfaces. However, you can get a quick snapshot of average traffic volumes with the show interfaces command. A switch maintains a running 5-minute average of traffic rates into and out of each interface. The output from show interfaces displays this information along with a host of other interface statistics.

To see only the interfaces that are in use and only the input and output data rates, you can add a filter to that command:

show interfaces | include (is up | rate)

This produces output similar to the following:

Switch# show interfaces | include (is up | rate)
GigabitEthernet2/1 is up, line protocol is up (connected)
  5 minute input rate 63000 bits/sec, 34 packets/sec
  5 minute output rate 901000 bits/sec, 168 packets/sec
GigabitEthernet2/2 is up, line protocol is up (connected)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 194000 bits/sec, 80 packets/sec
GigabitEthernet2/3 is up, line protocol is up (connected)
  5 minute input rate 219000 bits/sec, 103 packets/sec
  5 minute output rate 1606000 bits/sec, 265 packets/sec


You can discover many more detailed aspects of a network. For example, you might want to know the extent of various VLANs across the switches, which interfaces are acting as trunks, the spanning-tree topology for various VLANs, and so on.

These are all important things to consider in a network design and in troubleshooting a network, but they are beyond the scope of this chapter. These topics and the appropriate commands are presented in later chapters of this book.

Migrating to a Hierarchical Design

After you have discovered the topology of a network, you might find that it doesn't resemble the overall design goals that were presented earlier in this chapter. Perhaps it doesn't have a hierarchical layout with distinct layers. Or maybe you aren't able to see a modular layout with distinct switch blocks.

To move toward the campus hierarchical model, you also need to gather information about the traffic patterns crossing the network. For example, you should try to find answers to these questions:

  • Where are the enterprise resources (corporate e-mail, web, and intranet application servers) located?

  • Where are the end user communities located?

  • Where are the service provider connections to the Internet, remote sites, and VPN users located?

Following the example of Figure 2-6, these have been identified by interviewing system administrators and network staff. Figure 2-7 shows the locations of user groups and server resources. Notice that these seem to be scattered across the entire network and that there is no clear picture of a modular network.

Figure 2-7. Identifying User and Enterprise Resources


Now, you should add some structure to the design. Try to identify pieces of the network as specific modules. For example, the end user communities eventually will become switch block modules, containing both distribution- and access-layer switches. Redraw the network with the users and their switches toward the bottom.

Any resources related to connections to service providers, remote sites, or the Internet should be grouped and moved to become a service provider module or switch block. Enterprise servers, such as those in a data center, should be grouped and moved to become server farm switch blocks.

As you do this, a modular structure should begin to appear. Each module will connect into a central core layer, completing the hierarchical design. To see how the example of Figures 2-6 and 2-7 can be transformed, look at Figure 2-8. The existing switches have merely been moved so that they resemble the enterprise composite model. Without adding switches, the existing network has been migrated into the modular structure. Each module shown ultimately will become a switch block.
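The grouping step described above can be expressed as a simple mapping from device roles to composite-model blocks. The following is an illustrative sketch only; the device names and role assignments are hypothetical stand-ins for what you would gather from administrator interviews.

```python
# Hypothetical inventory of discovered resources and their roles,
# as identified during the interviews described above.
devices = {
    "users-east": "user",
    "users-west": "user",
    "mail-server": "enterprise",
    "web-server": "enterprise",
    "internet-router": "service-provider",
}

# Map each role to its block in the enterprise composite model.
ROLE_TO_BLOCK = {
    "user": "switch block",
    "enterprise": "server farm block",
    "service-provider": "service provider edge block",
}

modules = {}
for device, role in devices.items():
    modules.setdefault(ROLE_TO_BLOCK[role], []).append(device)

for block, members in modules.items():
    print(f"{block}: {', '.join(members)}")
```

Each resulting group becomes a candidate module that will ultimately connect into the core through its own distribution switches.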

Figure 2-8. Migrating an Existing Network into a Modular Structure


Now, each module should be addressed so that it can be migrated into a proper switch block. Remember that switch blocks always contain the switches necessary to connect a resource (users, servers, and so on) into the core layer. If this is done for the network in Figure 2-8, the network shown in Figure 2-9 might result.

Figure 2-9. Migrating Network Modules into Switch Blocks


Notice that some additional switches have been added so that there is a distinct distribution layer of switches connecting into the core layer. Here, only single switches and single connections between switches have been shown. At this point, the design doesn't strictly follow the hierarchical model because there is little or no redundancy between layers.

Finally, you should add the redundant components to complete the design. The core should have dual switches. Each switch block should have dual distribution switches and dual links to both the access and core layers. These can be added now, resulting in the network shown in Figure 2-10. This might not be a practical design for a small example network, but a full-fledged hierarchical design stages the example network for growth and stability in the future.

Figure 2-10. Completing the Hierarchical Campus Design




CCNP Self-Study(c) CCNP BCMSN Exam Certification Guide