Switching Services


Layer 2 switching is hardware based, which means it uses the MAC address from the host's NIC to filter the network. Unlike bridges, which use software to create and manage a filter table, switches use application-specific integrated circuits (ASICs) to build and maintain their filter tables. But it's still okay to think of a Layer 2 switch as a multiport bridge because their basic reason for being is the same: to break up collision domains.

Layer 2 switches and bridges are faster than routers because they don't spend time examining Network layer header information. Instead, they look at the frame's hardware addresses before deciding to either forward the frame or drop it.

Layer 2 switching provides the following:

  • Hardware-based bridging (using media access control, or MAC, addresses)

  • Wire speed

  • Low latency

  • Low cost

What makes Layer 2 switching so efficient is that no modification to the data packet takes place. The device only reads the frame encapsulating the packet, which makes the switching process considerably faster and less error-prone than routing processes are.

And if you use Layer 2 switching for both workgroup connectivity and network segmentation (breaking up collision domains), you can create a flatter network design with more network segments than you can with traditional 10BaseT shared networks.

Plus, Layer 2 switching increases bandwidth for each user because, again, each connection (interface) into the switch is its own collision domain. This feature makes it possible for you to connect multiple devices to each interface.

The Limitations of Layer 2 Switching

Since users commonly stick Layer 2 switching into the same category as bridged networks, you might tend to think it has the same hang-ups and issues that bridged networks do. Keep in mind that bridges are good and helpful things if you design the network correctly, keeping their features and limitations in mind. And to design well with bridges, the two most important considerations are

  • You absolutely must break up the collision domains correctly.

  • The right way to create a functional bridged network is to make sure that its users spend 80 percent of their time on the local segment.

Bridged networks break up collision domains, but remember, the network is still one large broadcast domain. Neither Layer 2 switches nor bridges break up broadcast domains, something that not only limits your network's size and growth potential but also reduces its overall performance. Broadcasts and multicasts, along with the slow convergence time of the Spanning-Tree Protocol, can give you some major grief as your network grows. These are the major reasons why Layer 2 switches and bridges cannot completely replace routers (Layer 3 devices) in the internetwork.

Bridging versus LAN Switching

It’s true—Layer 2 switches really are pretty much just bridges that give you many more ports, but there are some important differences you should always keep in mind:

  • Bridges are software based, while switches are hardware based because they use ASICs to help make filtering decisions.

  • Bridges can only have one spanning-tree instance per bridge, while switches can have many. (I'm going to tell you all about the Spanning-Tree Protocol in a bit.)

  • Bridges can only have up to 16 ports—max! A switch can have hundreds.

Three Switch Functions at Layer 2

There are three distinct functions of Layer 2 switching; be sure to remember these—you’ll need them for the exam:

  • Address learning

  • Forward/filter decisions

  • Loop avoidance

Address Learning

With address learning, Layer 2 switches and bridges remember the source hardware address of each frame received on an interface, and they enter this information into a MAC database called a forward/filter table. When a switch is first powered on, the MAC forward/filter table is empty, as you can see in Figure 2.4.

Figure 2.4: Empty forward/filter table on a switch

When a device transmits and an interface receives a frame, the switch places the frame’s source address in the MAC forward/filter table, allowing it to remember which interface the sending device is located on. The switch then has no choice but to flood the network with this frame because it has no idea where the destination device is actually located.

If a device answers this flooded frame and sends a frame back, then the switch takes the source address from that frame and places that MAC address in its database as well, associating this address with the interface that received the frame. Since the switch now has both of the relevant MAC addresses in its filtering table, the two devices can now make a point-to-point connection. The switch doesn't need to flood the frame like it did the first time because now the frames can and will be forwarded only between the two devices. This is exactly the thing that makes Layer 2 switches better than hubs. In a hub network, all frames are forwarded out all ports every time, no matter what! Figure 2.5 shows the processes involved in building a MAC database.

Figure 2.5: How switches learn hosts’ locations

In Figure 2.5, you can see four hosts attached to a switch. When the switch is powered on, it has nothing in its MAC address filter/forward table, just like in Figure 2.4. But when the hosts start communicating, the switch places the source hardware address of each frame in the table along with the port to which the frame’s address corresponds.

Let me give you an example of how a forward/filter table is populated:

  1. Host A sends a frame to Host B. Host A’s MAC address is 0000.8c01.000A; Host B’s MAC address is 0000.8c01.000B.

  2. The switch receives the frame on the E0/0 interface and places the source address in the MAC address table.

    Note

    Switch interface addressing is covered in Appendix B, “Solutions to Case Studies.”

  3. Since the destination address is not in the MAC database, the frame is flooded out all interfaces except the one on which it was received.

  4. Host B receives the frame and responds to Host A. The switch receives this frame on interface E0/1 and places the source hardware address in the MAC database.

  5. Host A and Host B can now make a point-to-point connection, and only those two devices receive the frames. Hosts C and D do not see the frames, nor are their MAC addresses found in the database because they haven’t yet sent a frame to the switch.

If Host A and Host B don’t communicate to the switch again within a certain amount of time, the switch flushes their entries from the database to keep it as current as possible.
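If you want to see the learning process in miniature, here's a Python sketch of the steps above: learn the source address on each incoming frame and flush entries that go stale. It's purely an illustration, not how a real switch or Cisco IOS is implemented, and the 300-second aging time and port names are assumptions for the example:

  import time

  AGING_SECONDS = 300   # illustrative aging time, not a Cisco default
  mac_table = {}        # MAC address -> (port, last-seen timestamp)

  def learn(src_mac, in_port):
      """Record which port a source MAC address was last seen on."""
      mac_table[src_mac] = (in_port, time.time())

  def age_out():
      """Flush entries not refreshed within the aging time."""
      now = time.time()
      stale = [mac for mac, (_, seen) in mac_table.items()
               if now - seen > AGING_SECONDS]
      for mac in stale:
          del mac_table[mac]

  learn("0000.8c01.000A", "E0/0")   # step 2: Host A's frame arrives on E0/0
  learn("0000.8c01.000B", "E0/1")   # step 4: Host B's reply arrives on E0/1
  age_out()
  print(mac_table)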

Forward/Filter Decisions

When a frame is received on an interface, the switch looks at the destination hardware address and finds the exit interface in the MAC database. The frame is only forwarded out the specified destination port.

When a frame arrives at a switch interface, the destination hardware address is compared to the forward/filter MAC database. If the destination hardware address is known and listed in the database, the frame is only sent out the correct exit interface. The switch doesn’t transmit the frame out any interface except for the destination interface. This preserves bandwidth on the other network segments and is called frame filtering.

If the destination hardware address is not listed in the MAC database, then the frame is flooded out all active interfaces except the interface on which the frame was received. If a device answers the flooded frame, the MAC database is updated with the device's location (interface).

If a host or server sends a broadcast on the LAN, the switch by default floods the frame out all active ports except the one it arrived on. Remember that the switch only creates smaller collision domains; it's still one large broadcast domain by default.
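Here's the forward/filter decision as a companion Python sketch. The table entries, port list, and addresses are hypothetical, and a real switch does this in ASIC hardware at wire speed:

  # Hypothetical table of learned addresses (see the previous sketch).
  mac_table = {"0000.8c01.000A": "E0/0", "0000.8c01.000B": "E0/1"}
  ALL_PORTS = ["E0/0", "E0/1", "E0/2", "E0/3"]
  BROADCAST = "FFFF.FFFF.FFFF"

  def decide(dst_mac, in_port):
      """Return the list of ports a frame should exit on."""
      if dst_mac == BROADCAST or dst_mac not in mac_table:
          # Broadcast or unknown unicast: flood out every active
          # port except the one the frame arrived on.
          return [p for p in ALL_PORTS if p != in_port]
      out_port = mac_table[dst_mac]
      if out_port == in_port:
          return []          # destination is on the source segment: filter
      return [out_port]      # known unicast: forward out one port only

  print(decide("0000.8c01.000B", "E0/0"))   # ['E0/1'] -- forward
  print(decide("0000.8c01.000C", "E0/0"))   # unknown: flood E0/1, E0/2, E0/3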

Loop Avoidance

If multiple connections between switches are created for redundancy purposes, network loops can occur. The Spanning-Tree Protocol (STP) is used to stop network loops while still permitting redundancy. Redundant links between switches are a good idea because they help prevent complete network failures in the event that one link stops working.

It sounds great, but even though redundant links can be extremely helpful, they often cause more problems than they solve. This is because frames can be broadcast down all redundant links simultaneously, creating network loops as well as other evils. Here’s a list of some of the ugliest problems:

  • If no loop avoidance schemes are put in place, the switches flood broadcasts endlessly throughout the internetwork. This is sometimes referred to as a broadcast storm. (But most of the time it’s referred to in ways we’re not permitted to repeat in print!) Figure 2.6 illustrates how a broadcast can be propagated throughout the network. Observe how a frame is continually being broadcast through the internetwork’s physical network media.

    Figure 2.6: Broadcast storm

  • A device can receive multiple copies of the same frame since that frame can arrive from different segments at the same time. Figure 2.7 demonstrates how multiple copies of a frame can arrive from multiple segments simultaneously. The server in this figure sends a unicast frame to Router C. Since neither switch has learned where Router C is yet, both Switch A and Switch B flood the frame. This is bad because it means that Router C receives that unicast frame twice, causing additional overhead on the network.

    Figure 2.7: Multiple frame copies

  • You may have thought of this one: The MAC address filter table is totally confused about the device’s location because the switch can receive the frame from more than one link. And what’s more, the bewildered switch could get so caught up in constantly updating the MAC filter table with source hardware address locations that it fails to forward a frame! This is called thrashing the MAC table.

  • One of the nastiest things that can happen is multiple loops propagating throughout an internetwork. This means that loops can occur within other loops, and if a broadcast storm also occurs, the network won't be able to perform switching—period!

All of these problems spell “hosed” or “pretty much hosed” and are decidedly evil situations that must be avoided, or at least fixed somehow. That’s where the Spanning-Tree Protocol comes into the game. It was developed to solve each and every one of the problems I just told you about.
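Just to give you a feel for how STP begins to sort this out, here's a tiny Python sketch of the root-bridge election alone: each switch has a bridge ID made of a priority plus its MAC address, and the lowest ID becomes the root. The real protocol also exchanges BPDUs and blocks redundant ports; the IDs below are made-up values:

  # Bridge IDs as (priority, MAC address) tuples; all values made up.
  bridges = [
      (32768, "0000.8c01.1111"),
      (32768, "0000.8c01.0001"),
      (4096,  "0000.8c01.2222"),
  ]

  # Lowest priority wins; the lowest MAC address breaks ties.
  # Python's tuple comparison gives exactly that ordering.
  root = min(bridges)
  print("Root bridge:", root)   # (4096, '0000.8c01.2222')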

80/20 Rule

The traditional campus network placed users and groups in the same physical location. If a new salesperson was hired, they had to sit in the same physical location as the other sales personnel and be connected to the same physical network segment in order to share network resources. Any deviation from this caused major headaches for the network administrators.

The rule that needed to be followed in this type of network was called the 80/20 rule because 80 percent of the users’ traffic was supposed to remain on the local network segment and only 20 percent or less was supposed to cross the routers or bridges to the other network segments. If more than 20 percent of the traffic crossed the network segmentation devices, performance issues arose. Figure 2.8 illustrates a traditional 80/20 network.

Figure 2.8: A traditional 80/20 network
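You can sanity-check a segment against the 80/20 rule with some quick arithmetic. Here's a back-of-the-envelope Python sketch using made-up traffic counts:

  # Made-up traffic counts for one segment.
  local_bytes = 750_000_000    # traffic that stays on the local segment
  remote_bytes = 250_000_000   # traffic that crosses a router or bridge

  remote_share = remote_bytes / (local_bytes + remote_bytes)
  print(f"{remote_share:.0%} of traffic leaves the segment")
  if remote_share > 0.20:
      print("More than 20 percent crosses the segmentation devices; "
            "expect performance issues.")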

Because network administrators are responsible for network design and implementation, they improved network performance in the 80/20 network by making sure all the network resources for the users were contained within their own network segment. These resources included network servers, printers, shared directories, software programs, and applications.

The 80/20 rule is a foundation for traditional network design, and the campus-wide virtual LAN (VLAN) model relies on it heavily. When 80 percent of the traffic is within a workgroup (VLAN), then 80 percent of the packets flowing from the client to the server are switched locally. The logical workgroup is dispersed in a campus-wide VLAN, but is still organized so that 80 percent of traffic is contained within it. The 20 percent that’s left leaves the network or subnet through a router.

The New 20/80 Rule

Many of you have probably seen distributed data storage and retrieval cropping up in new and existing applications. In these instances, traffic patterns are moving in an opposite direction, toward a principle known as the 20/80 rule. In this scenario, only 20 percent of traffic is local to the workgroup LAN, and 80 percent of the traffic leaves the workgroup.

In a traditional network design (where the 80/20 rule is in force), only a small amount of traffic passes through L3 devices, so performance is rarely an issue and those devices are typically ordinary routers. However, newer enterprise networks place servers in the enterprise edge or in server farms. With increased traffic from clients to these distant servers, performance now becomes an issue. Devices with high-speed L3 processing are necessary to handle the higher requirements in the building distribution and campus backbone.

With new web-based applications and computing, any PC can be a subscriber or publisher at any time. Also, because businesses are pulling servers from remote locations and creating server farms (sounds like a mainframe, doesn’t it?) to centralize network services for security, reduced cost, and administration, the old 80/20 rule is obsolete and could not possibly work in this environment. All traffic must now traverse the campus backbone, which means you now have the 20/80 rule in effect. Figure 2.9 shows the new 20/80 network.

Figure 2.9: A 20/80 network

The problem with the 20/80 rule is not the network wiring and topology as much as it is the routers themselves. They must be able to handle an enormous number of packets quickly and efficiently at wire speed. This is probably where we should be talking about how great Cisco routers are and how your networks would be nothing without them. We'll get to that later in this chapter, trust us.

Layer 2 (L2) Multicasting

The concept of forming a multicast group is the basis for IP multicast. This means that any group or collection of receivers can indicate interest in receiving a particular stream of data. The group itself isn’t limited by physical or departmental boundaries, and the hosts can be located anywhere on the network as well. Hosts join the group by using the Internet Group Management Protocol (IGMP) in order to receive data going to that group.

Multicast routing protocols such as Protocol Independent Multicast (PIM) guide the delivery of traffic through multicast-enabled routers. The router then forwards the incoming multicast stream to the switch port.

One problem is that the default for an L2 switch is to forward all multicast traffic to every port that belongs to the same VLAN on the switch. Talk about a monkey wrench in the works. This activity defeats the very purpose of the switch, which is to send traffic only to the ports that need to receive the data. Bummer!

The good news is that there are several ways Cisco switches can circumvent this little problem. Ones commonly used include:

Cisco Group Management Protocol (CGMP) A Cisco proprietary solution found on all Cisco LAN switches. The multicast receiver registration (using IGMP) is accepted by the router and communicated by CGMP to the switch; the switch then updates its forwarding table with that information.

IGMP snooping The switch intercepts multicast receiver registrations and updates the forwarding table with that information. IGMP snooping means that the switch must be aware of L3 information because IGMP is a Network layer protocol. Typically, IGMP packet recognition is hardware assisted.
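Conceptually, IGMP snooping comes down to keeping a table that maps each multicast group to the ports with registered receivers. Here's a rough Python sketch of that idea; it doesn't parse real IGMP packets, and the group address and port names are hypothetical:

  group_ports = {}   # multicast group address -> set of receiver ports

  def snoop_report(group, port):
      """A host on `port` sent an IGMP membership report for `group`."""
      group_ports.setdefault(group, set()).add(port)

  def snoop_leave(group, port):
      """A host on `port` sent an IGMP leave for `group`."""
      group_ports.get(group, set()).discard(port)

  def ports_for(group):
      """Only these ports receive the group's traffic (default: none)."""
      return group_ports.get(group, set())

  snoop_report("224.1.1.1", "E0/2")
  snoop_report("224.1.1.1", "E0/3")
  print(ports_for("224.1.1.1"))   # {'E0/2', 'E0/3'}, not the whole VLAN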

Quality of Service (QoS)

Have you had any problems with a utility, with an airline, or with a particular retailer lately? Lots of corporations promise quality of service; darn few deliver on what their brand pledges they will provide you. In networking, QoS has to do with the proper handling of traffic. Fortunately, Cisco has a track record of reliability in this area.

Because they have no knowledge of L3 or higher information, access switches provide QoS based only on L2 information or the input port. This allows you to define traffic from a particular host as high-priority traffic on the uplink port. The scheduling feature on the output port of an access switch ensures that traffic from such ports is served first. If input traffic is properly marked, the expected service as traffic passes the distribution and core layer switches is then assured.

Better yet, because distribution and core layer switches are typically L3-aware, they can provide QoS on a port basis and also by using higher layer parameters, such as port numbers, IP addresses, or QoS bits in the IP packet. These devices are more finely tuned to differences in traffic based on the application and so are able to make QoS more selective. Though QoS in distribution and core switches must be provided in both directions of flow, policing is usually enabled on the distribution layer devices.

Your goal in QoS for Voice over IP (VoIP) is to provide for packet loss and delay within parameters that do not affect voice quality. One obvious solution is to provide enough bandwidth at all points in the network, but that’ll cost you. Or you can apply a QoS mechanism at the stressed-out points in the network. (Counseling usually doesn’t work in these situations. Just kidding!)

An end-to-end network delay of up to 150 milliseconds is not noticeable to the parties speaking, making it a pretty good design target for QoS in VoIP. If you want guaranteed low delay for voice at campus speeds, you can set up a separate outbound queue for real-time traffic. Since burst-prone data traffic, such as file transfers, uses a different queue, the dedicated voice queue can guarantee low delay, and voice packet loss is not an issue.
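Here's a minimal Python sketch of that separate-queue idea using strict-priority scheduling, where the voice queue is always drained before the data queue. It illustrates the queueing concept only, not an actual Cisco QoS feature, and the packet names are made up:

  from collections import deque

  voice_q, data_q = deque(), deque()

  def enqueue(packet, is_voice):
      """Classified packets go into separate outbound queues."""
      (voice_q if is_voice else data_q).append(packet)

  def dequeue():
      """Strict priority: serve voice first, data only when voice is empty."""
      if voice_q:
          return voice_q.popleft()
      if data_q:
          return data_q.popleft()
      return None

  enqueue("file-transfer-chunk", is_voice=False)
  enqueue("rtp-voice-sample", is_voice=True)
  print(dequeue())   # rtp-voice-sample is served first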

QoS is the princess's foot in the glass slipper where multilayer campus design is concerned. The entrance to the network is the wiring-closet switch (access switch), and packet classification is a multilayer service applied at this switch. VoIP traffic flows are recognized by a characteristic port number, and VoIP packets are marked with an IP type of service (ToS) value indicating low-delay voice. If VoIP packets encounter congestion in the network, the local switch or router applies appropriate congestion management based on the ToS value. And that's a wrap for QoS.



