11.2 Switching Operations

The development of intelligent switching hubs has its foundation, like many other areas of modern communications, in telephone technology. Shortly after the telephone was invented, the switchboard was developed to enable multiple simultaneous conversations to occur without requiring telephone wires to be installed in a complex matrix between subscribers. Later, telephone office switches were developed to route calls based upon the telephone number dialed, followed in a similar manner by the development of bridges in a LAN environment. Bridges can be considered an elementary type of switch due to their limited number of ports and simple switching operation. That switching operation is based upon whether or not the destination address in a frame "read" on one port is known to reside on that port.

11.2.1 Bridge Switching

Figure 11.5 illustrates the basic operation of a bridge. If you compare the operations performed by a bridge with respect to each port, you will note they are nearly identical. The only difference concerns the port they forward frames to when the destination address of a frame is compared to a table of source addresses and no match occurs. When this situation occurs, the frame's destination is unknown. Thus, the bridge transmits copies of the frame onto all ports other than the port it was received on, a process referred to as flooding. If n networks are connected in series via the use of n - 1 bridges, and a frame is transmitted on the network at one end of the interconnected group of networks to the network at the opposite end, each bridge would perform a similar forwarding operation until the frame traversed all n - 1 bridges and was placed onto the last network in the interconnected series. The simplicity associated with the operation of bridges makes them a popular networking device. However, most bridges are limited to forwarding or "switching" frames on a serial basis, from one port to another. This restricts the forwarding rate to the lowest network operating rate. For example, the connection of a 10-Mbps Ethernet network to a 16-Mbps Token Ring network via the use of a local bridge would reduce inter-network communications to a maximum operating rate of 10 Mbps, creating another network bottleneck.

Figure 11.5: Bridge Switching Operation
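The learn-then-forward behavior just described can be sketched in a few lines of Python. This is a minimal illustration, not a real bridge implementation; the class and method names are our own:

```python
# Hypothetical sketch of transparent-bridge forwarding (names are illustrative).
class Bridge:
    def __init__(self, ports):
        self.ports = ports                  # list of port identifiers
        self.source_table = {}              # MAC address -> port it was learned on

    def receive(self, frame_src, frame_dst, in_port):
        """Return the list of ports the frame is forwarded to."""
        # Learn: record which port the source address was seen on.
        self.source_table[frame_src] = in_port
        out_port = self.source_table.get(frame_dst)
        if out_port is None:
            # Unknown destination: flood to every port except the arrival port.
            return [p for p in self.ports if p != in_port]
        if out_port == in_port:
            # Destination resides on the arrival port's segment: filter (drop).
            return []
        return [out_port]

bridge = Bridge(ports=[1, 2, 3])
assert bridge.receive("AA", "BB", in_port=1) == [2, 3]   # unknown -> flood
assert bridge.receive("BB", "AA", in_port=2) == [1]      # learned -> forward
assert bridge.receive("CC", "AA", in_port=1) == []       # same segment -> filter
```

Note how the flooding case in the sketch mirrors the n-bridge propagation described above: each bridge that does not know the destination copies the frame onward.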

11.2.2 The Layer 2 LAN Switch

Recognizing the limitations associated with the operation of bridges, vendors incorporated parallel switching technology into a device known as an intelligent switching hub. This device was based on matrix switches, which for decades have been successfully employed in telecommunications operations. By adding buffer memory to stored address tables, frames flowing on LANs connected to different ports could be simultaneously read and forwarded via the switch fabric to ports connected to other networks. Because the first series of devices operate based on layer 2 MAC addresses, they are commonly referred to as layer 2 switches.

11.2.2.1 Basic Components

Figure 11.6 illustrates the basic components of a four-port intelligent switch. Similar to a bridge that reads frames flowing on a network to construct a table of source addresses, the tables in an intelligent switch are normally learned by examining traffic flow. This allows the destination address to be compared to a table of destination addresses and associated port numbers. When a match occurs between the destination address of a frame flowing on a network connected to a port and the address in the port's address table, the frame is copied into the switch. Then, the frame is routed through the switch fabric to the destination port, where it is placed onto the network connected to that port. If the destination port is in use due to a previously established cross-connection between ports, the frame is held in a buffer until it can be switched to its destination.

Figure 11.6: Basic Components of an Intelligent Layer 2 Switch
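The lookup-then-buffer behavior described above can be sketched as follows. This is an illustrative model only; the class names, and the simplification that a released cross-connection simply hands back the next buffered frame, are assumptions:

```python
from collections import deque

# Illustrative sketch of layer 2 switching with buffering: if the destination
# port already has an active cross-connection, the frame waits in that port's
# buffer; otherwise it is switched immediately.
class Layer2Switch:
    def __init__(self, num_ports):
        self.mac_table = {}                          # MAC address -> output port
        self.busy = set()                            # ports with an active cross-connect
        self.buffers = {p: deque() for p in range(num_ports)}

    def switch_frame(self, frame, dst_mac, in_port):
        out_port = self.mac_table[dst_mac]
        if out_port in self.busy:
            self.buffers[out_port].append(frame)     # hold until the port frees up
            return None
        self.busy.add(out_port)                      # establish the cross-connect
        return out_port

    def release(self, port):
        self.busy.discard(port)                      # tear down the cross-connect
        return self.buffers[port].popleft() if self.buffers[port] else None

sw = Layer2Switch(4)
sw.mac_table = {"S1": 0}
assert sw.switch_frame("f1", "S1", in_port=1) == 0      # switched immediately
assert sw.switch_frame("f2", "S1", in_port=2) is None   # port 0 busy: buffered
assert sw.release(0) == "f2"                            # buffered frame dequeued
```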

11.2.2.2 Switch Architecture

There are three common designs used to develop different types of LAN switches, including layer 2 devices. These designs, which represent the architecture or switching method of the switch, are shared bus, shared memory, and crossbar.

11.2.2.2.1 Shared Bus

The shared bus switch architecture represents a simple and cost-effective LAN switch design method. In this design, illustrated in Figure 11.7, frames from each port are buffered while the destination address is read. Frames are then output onto the bus with a special header indicating the destination port. Each frame then flows into an output buffer associated with the destination port and then out through the port. The processor can represent a Reduced Instruction Set Computer (RISC) CPU or may be implemented via hardware. Some switches use the processor to collect and maintain switch statistics as well as process frames. Other switches simply use the processor to examine and forward frames.

Figure 11.7: Shared Bus Switch

While a shared bus architecture is relatively simple to implement, it is not easily scalable. That is, at some operating rate, the hub reaches its limit. Most shared bus switches are limited in the number of ports they can support or port modules that can be added to the switch. While this switch architecture is suitable for switching a limited number of 10- and 100-Mbps ports, it is normally not used for Gigabit Ethernet and is commonly relegated to low-bandwidth applications. This is due to the fact that an input can be blocked from sending data if any other transaction is occurring on the bus.

11.2.2.2.2 Shared Memory

A second common switch design uses predefined areas of memory as buffer pools to place incoming frames and extract frames for output to specific ports. This type of switch design is referred to as a shared memory switch.

Figure 11.8 illustrates an example of a shared memory switch architecture. As frames flow into the switch, the destination address is read to determine the appropriate output port. After that information is obtained, the frame is placed into a buffer in a common memory area with the exact location corresponding to the destination port. A flag is set that informs an extraction process to extract the frame and send it to its destination.

Figure 11.8: Shared Memory Switch Architecture

Similar to a shared bus architecture, a shared memory design has limited scalability. That is, for the switch to run at full utilization, memory I/O capacity must be at least twice the sum of all individual port capacities, because every frame is written into memory once and read out once. Because memory has significantly declined in price, many switch vendors implemented a shared buffer memory design.

A variation of a shared memory design architecture is a multi-port shared memory design. Under this architecture, simultaneous memory access becomes possible for each or for several outputs, resulting in multiple paths between the processor and shared memory shown in Figure 11.8. However, because all data must pass through shared RAM, the maximum bandwidth becomes a function of the shared RAM data bus width and access time, making it both expensive and difficult to implement for switching beyond 20 Gbps.
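The "twice the sum of all port capacities" rule can be made concrete with a trivial calculation (the function name is our own):

```python
# Required memory I/O bandwidth for a non-blocking shared memory switch:
# every frame is written into RAM once and read out once, so memory
# bandwidth must be at least twice the sum of all port rates.
def required_memory_bandwidth_mbps(port_rates_mbps):
    return 2 * sum(port_rates_mbps)

# For example, 24 ports at 100 Mbps each need 4.8 Gbps of memory bandwidth:
assert required_memory_bandwidth_mbps([100] * 24) == 4800
```

This is why, as noted above, shared-RAM designs become expensive as aggregate port rates climb toward tens of gigabits per second.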

11.2.2.2.3 Crossbar

A third switch design provides a route from each input port to each output port via the use of a matrix of switching elements. This type of switch design is referred to as a crossbar switch and resembles the matrix shown in Figure 11.6. As the destination address is read, the switch process sets up a cross-connection between input and output ports through the crossbar. Although a crossbar switch is more complex to design, its capacity can be scaled upward through the use of additional switching elements if the basic design permits expandability. Another benefit of a crossbar switch is the fact that it can support an extremely high rate of data transfer, making the design suitable for supporting Gigabit Ethernet beyond a limited number of ports.

11.2.2.3 Delay Times

Switching occurs on a frame-by-frame basis, with the cross-connection torn down after being established for routing one frame. Thus, frames can be interleaved from two or more ports to a common destination port with a minimum of delay. For example, consider a maximum-length Ethernet frame of 1526 bytes, including a 1500-byte data field and 26 overhead bytes. At a 10-Mbps operating rate, each bit time is 1/10^7 seconds, or 100 ns. For a 1526-byte frame, the minimum delay time if one frame precedes it in attempting to be routed to a common destination becomes:

1526 bytes * 8 bits/byte * 100 ns/bit = 1.2208 * 10^-3 seconds

This computed delay time represents blocking resulting from frames on two service ports having a common destination and should not be confused with another delay time referred to as latency. Latency represents the delay associated with the physical transfer of a frame from one port via the switch to another port, and is fixed based upon the architecture of the switch. In comparison, blocking delay depends on the number of frames from different ports attempting to access a common destination port and the method by which the switch is designed to respond to blocking. Some switches simply have large buffers for each port and service ports in a round-robin fashion when frames on two or more ports attempt to access a common destination port. This method of service, unlike politics, shows no favoritism; however, it also does not consider the fact that some attached networks may operate at rates different from other attached networks. Other switch designs recognize that port buffers are filled based upon both the number of frames having a destination address on a different network and the operating rate of the network. Such switch designs use a priority service scheme based on the occupancy of the port buffers in the switch.

11.2.2.4 Key Advantages of Use

A key advantage associated with the use of intelligent switching hubs results from their ability to support parallel switching, permitting multiple cross-connections between source and destination to occur simultaneously. Although shared bus and shared memory architectures only permit one frame at a time to be switched, the high operating rate of the switch in comparison to the 10BASE-T or 100BASE-T operating rates of connected LAN devices can make it appear that simultaneous cross-connections are occurring, even though such cross-connections occur one at a time. For example, a 100-MHz bus operates ten times faster than the sustained frame rate on a 10BASE-T network. Thus, a shared bus or shared memory switch can first read frames into buffers from two or more ports at 10 Mbps and operate on those frames internally at a much higher rate. Because a crossbar switch can support multiple simultaneous cross-connections between source and destination ports, it can support true parallel switching.

For example, if four 10BASE-T networks are connected to the four-port switch shown in Figure 11.6, two simultaneous cross-connections (each at 10 Mbps) could occur, resulting in an increase in bandwidth to 20 Mbps. Here, each cross-connection represents a dedicated 10-Mbps bandwidth for the duration of a frame. Thus, from a theoretical perspective, an N-port switching hub supporting a 10-Mbps operating rate on each port provides a throughput up to N/2 * 10 Mbps. For example, a 128-port switching hub would support a throughput up to (128/2) * 10 Mbps, or 640 Mbps. In comparison, a network constructed using a series of conventional hubs connected to one another would be limited to an operating rate of 10 Mbps, with each workstation on that network having an average bandwidth of 10 Mbps/128, or 78 Kbps.

11.2.2.5 Considering Connected Devices

One area many network managers and LAN administrators fail to consider when evaluating the bandwidth capacity of switches is the devices they will interconnect. For example, consider a 24-port switch in which each port operates at 100 Mbps. Let us assume that the switch will connect 20 workstations, three servers, and a router, with the latter connected to the Internet. What is the minimum backplane capacity of the switch required to prevent blocking?

If each port could communicate with any other port, you could expect a maximum of 12 simultaneous cross-connections, each occurring at 100 Mbps. Thus, you might be tempted to require a backplane speed of 1.2 Gbps. However, in a client/server operational environment, your switch has 20 workstations communicating with three servers and a router, resulting in a maximum of four simultaneous cross-connections. Thus, by considering the devices to be connected to the switch, the minimum backplane speed required to prevent blocking is 400 Mbps, not 1.2 Gbps!
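The two sizing rules in this example can be expressed as a pair of one-line helpers (the function names are our own):

```python
# Backplane sizing sketch: the worst case assumes every port pairs off with
# another port, while a client/server traffic pattern is limited by the
# number of server and router ports the clients can talk to.
def worst_case_backplane_mbps(num_ports, port_rate_mbps):
    return (num_ports // 2) * port_rate_mbps

def client_server_backplane_mbps(num_servers_and_routers, port_rate_mbps):
    # At most one active cross-connection per server/router port.
    return num_servers_and_routers * port_rate_mbps

# The 24-port, 100-Mbps example from the text:
assert worst_case_backplane_mbps(24, 100) == 1200        # 1.2 Gbps worst case
assert client_server_backplane_mbps(3 + 1, 100) == 400   # 400 Mbps in practice
```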

Through the use of intelligent switches you can overcome the operating rate limitation of a LAN. In an Ethernet environment, the cross-connection through a switch represents a dedicated connection so there will never be a collision. This fact has enabled many switch vendors to use the collision wire-pair from conventional Ethernet to support simultaneous transmission in both directions between a connected node and switch port, resulting in a full-duplex transmission capability that will be discussed in more detail later in this chapter. In fact, a similar development permits a Token Ring switch to provide full-duplex transmission because, if there is only one station on a port, there is no need to pass tokens and repeat frames, thus raising the maximum bi-directional throughput between a Token Ring device and a switch port to 32 Mbps. Thus, the ability to support parallel switching as well as initiate dedicated cross-connections on a frame-by-frame basis can be considered the key advantages associated with the use of switches. Both parallel switching and dedicated cross-connections permit higher-bandwidth operations.

Now that we have an appreciation for the general operation of LAN switches, let us focus our attention on the different switching techniques that can be incorporated into this category of communications equipment.

11.2.3 Switching Techniques

Three switching techniques are used by layer 2 LAN switches: (1) cross-point, also referred to as cut-through or "on-the-fly" switching; (2) store-and-forward; and (3) a hybrid method that alternates between the first two based upon the frame error rate. As we will soon note, each technique has one or more advantages and disadvantages associated with its operation.

11.2.3.1 Cross-Point Switching

The operation of a cross-point switch is based on an examination of the destination of frames as they enter a port on the device. The switch uses the destination address as a decision criterion to obtain a port destination from a lookup table. Once a port destination is obtained, a cross-connection through the switch is initiated, resulting in the frame being routed to a destination port where it is placed onto the network on which the frame's destination address resides. In actuality, there are usually two lookup tables in a switch. The first table, which is usually constructed dynamically, consists of source addresses of frames flowing on the network connected to the port. This enables the switch to construct a table of known devices. The first comparison of a frame's destination address is then made against this table of known source addresses. If the destination address matches an address in the table of known source addresses, this indicates that the frame's destination is on the current network and no switching operation is required. If the frame's destination address does not match an address in the table of known source addresses, this indicates that the frame is to be routed through the switch onto a different network. The switch will then search a destination lookup table to obtain a port destination and initiate a cross-connection through the switch, routing the frame to a destination port where it is placed onto the network on which a node with the indicated destination address resides.

Some switches use a single lookup table, with the destination address of each frame compared to the addresses in that table to determine whether or not switching is required; other switches use two tables as previously described. Another variation between switch designs concerns the number and location of lookup tables. Some switch designs result in each port having its own lookup table or set of tables, with a fixed amount of memory subdivided into a buffer area and lookup table similar to the buffer and address tables illustrated in Figure 11.6. Another switch design uses a common memory area that is logically subdivided for use by each port. Although this design makes more economical use of memory, the use of shared memory introduces delays that are avoided when memory is dedicated to individual ports. However, from an upgrade perspective, it is easier to upgrade one memory area than a series of memory areas. Thus, you may wish to weigh differences in upgradability against very slight differences in latency.

The remainder of this section focuses attention on the operation of layer 2 LAN switches by assuming only one lookup table is used as it provides an easier mechanism to describe the basic operation of different switching methods. In addition, we will not differentiate performance based on the type and location of lookup tables because the overall switch design, to include the operation of custom designed integrated circuits, has a more pronounced effect on switch performance than the type and location of lookup tables.

Figure 11.9 illustrates the basic operation of cross-point or cut-through switching. In this technique, the destination address in a frame is read prior to the frame being stored (1). That address is forwarded to a lookup table (2) to determine the port destination address that is used by the switching fabric to initiate a cross-connection to the destination port (3). Because this switching method only requires the storage of a small portion of a frame until it is able to read the destination address and perform its table lookup operation to initiate switching to an appropriate output port, latency through the switch is minimized.

Figure 11.9: Cross-Point/Cut-Through Switching
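The two-table decision sequence described above can be sketched as a single function. The names and the use of a "flood" sentinel for unknown destinations are illustrative assumptions:

```python
# Sketch of the two-table cut-through decision: the destination address is
# read from the first bytes of the frame, checked against the arrival port's
# known local source addresses, and only then looked up for a cross-connect.
def cut_through_decision(dst_mac, local_sources, dest_table, in_port):
    if dst_mac in local_sources[in_port]:
        return None                      # destination is local: no switching
    out_port = dest_table.get(dst_mac)
    if out_port is None:
        return "flood"                   # unknown: copy to all other ports
    return out_port                      # initiate cross-connect to this port

local = {1: {"AA", "BB"}, 2: {"CC"}}     # per-port known source addresses
dests = {"CC": 2}                        # destination address -> output port
assert cut_through_decision("AA", local, dests, in_port=1) is None
assert cut_through_decision("CC", local, dests, in_port=1) == 2
assert cut_through_decision("DD", local, dests, in_port=1) == "flood"
```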

Latency functions as a brake on two-way frame exchanges. For example, in a client/server environment, the transmission of a frame by a workstation results in a server response. Thus, the minimum wait time is twice the latency for each client/server exchange, lowering the effective throughput of the switch. Because a cross-point switching technique results in a minimal amount of latency, the effect on throughput of the delay attributable to a switching hub using this technique is minimal.

We can compute the minimum amount of latency associated with a cross-point switch as follows. As a minimum, the switch must read 14 bytes (8 bytes for the preamble and 6 bytes for the destination address) prior to being able to initiate a search of its port-destination address table. At 10 Mbps, we obtain:

9.6 µs + 14 bytes * 8 bits/byte * 100 ns/bit = 9.6 µs + 11.2 µs = 20.8 µs

Here, 9.6 µs represents the Ethernet interframe gap at an operating rate of 10 Mbps, while 100 ns/bit represents the bit duration of a 10-Mbps Ethernet LAN. Thus, the minimum one-way latency of a cut-through switch, not counting switch overhead, is 20.8 * 10^-6 seconds, while the round-trip minimum latency would be twice that duration.

11.2.3.2 Store-and-Forward Switching

In comparison to a cut-through operating LAN switch, a store-and-forward switch first stores an entire frame in memory prior to operating on the data fields within the frame. Once the frame is stored, the switch checks the frame's integrity by performing a cyclic redundancy check (CRC) on the contents of the frame, comparing its computed CRC against the CRC contained in the frame's frame check sequence (FCS) field. If the two match, the frame is considered to be error-free and additional processing and switching will occur. Otherwise, the frame is considered to have one or more bits in error and will be discarded.
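The FCS comparison at the heart of store-and-forward operation can be illustrated with Python's standard `zlib.crc32`. Note that this sketch ignores the wire formatting of the Ethernet FCS field (bit ordering, complementing) and simply shows the compare-and-discard logic:

```python
import zlib

# Store-and-forward integrity check sketch: compute a CRC-32 over the stored
# frame contents and compare it to the value carried in the frame's FCS field.
def frame_is_valid(frame_bytes: bytes, received_fcs: int) -> bool:
    return zlib.crc32(frame_bytes) == received_fcs

payload = b"example frame contents"
good_fcs = zlib.crc32(payload)
assert frame_is_valid(payload, good_fcs)             # CRCs match: forward
assert not frame_is_valid(payload + b"x", good_fcs)  # corrupted: discard
```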

In addition to CRC checking, the storage of a frame permits filtering against various frame fields to occur. Although a few manufacturers of store-and-forward LAN switches support different types of filtering, the primary advantage advertised by such manufacturers is data integrity. Whether or not this is actually an advantage depends on how you view the additional latency introduced by the storage of a full frame in memory as well as the necessity for error checking. Concerning the latter, switches should operate error-free, so a store-and-forward switch only removes network errors that should be negligible to start with.

When a switch removes an errored frame, the originator will retransmit the frame after a period of time. Because an errored frame arriving at its destination network address is also discarded, many people question the necessity of error checking by a store-and-forward LAN switch. However, filtering capability, if offered, may be far more useful because you could use it, for example, to route protocols carried in frames to destination ports far more easily than by frame destination address. This is especially true if you have hundreds or thousands of devices connected to a large LAN switch. You might set up two or three filters instead of entering a large number of destination addresses into the switch.

Figure 11.10 illustrates the operation of a store-and-forward layer 2 LAN switch. Note that a common switch design is to use shared buffer memory to store entire frames, which increases the latency associated with this type of switch. Because the minimum length of an Ethernet frame is 72 bytes, the minimum one-way delay or latency, not counting the switch overhead associated with the lookup table and switching fabric operation, becomes:

9.6 µs + 72 bytes * 8 bits/byte * 100 ns/bit = 9.6 µs + 57.6 µs = 67.2 µs

Figure 11.10: Store-and-Forward Switching

Again, 9.6 µs represents the Ethernet interframe gap, while 100 ns/bit is the bit duration of a 10-Mbps Ethernet LAN. Thus, the minimum one-way latency of a store-and-forward Ethernet switch is 0.0000672 seconds, while the round-trip minimum latency is twice that duration. For a maximum-length Ethernet frame with a data field of 1500 bytes, the frame length becomes 1526 bytes. Thus, the one-way maximum latency becomes:

9.6 µs + 1526 bytes * 8 bits/byte * 100 ns/bit = 9.6 µs + 1220.8 µs = 1230.4 µs

When considering the use of a Token Ring store-and-forward switch, latency computations are more difficult because the time gap between frames, as noted in Chapter 9, depends on the number of stations on the ring connected to a switch port, the cable length of the ring, including twice the sum of its lobe cable runs, and the LAN operating rate. If only one station is connected to a port, determining latency is simplified, as a ring is formed between the station and the port. Because the port acts as a participant on the ring, it can respond by passing the frame back to the originator, with the delay essentially reduced to twice the latency through the switch. For example, a 2000-byte information field in a Token Ring frame requires a total of 2021 bytes, including frame overhead. When received from a 16-Mbps Token Ring network, the frame would have a one-way latency of:

2021 bytes * 8 bits/byte * 62.5 ns/bit = 1.0105 * 10^-3 seconds
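The Ethernet latency figures in this section can be checked numerically with a small helper. The 9.6 µs interframe gap and 100 ns bit time follow the text; the function name is our own:

```python
# Latency figures from the text, reproduced numerically. The bit time is
# 1/rate, and Ethernet adds a 9.6 us interframe gap at 10 Mbps.
def ethernet_latency_s(frame_bytes, rate_bps=10_000_000, gap_s=9.6e-6):
    return gap_s + frame_bytes * 8 / rate_bps

# Cut-through: only 14 bytes (preamble + destination address) are read first.
assert abs(ethernet_latency_s(14) - 20.8e-6) < 1e-9
# Store-and-forward: the whole frame is read first.
assert abs(ethernet_latency_s(72) - 67.2e-6) < 1e-9
assert abs(ethernet_latency_s(1526) - 1.2304e-3) < 1e-9
# Token Ring: 2021 bytes at 16 Mbps, with no comparable gap term.
assert abs(2021 * 8 / 16_000_000 - 1.0105e-3) < 1e-9
```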

11.2.3.3 Hybrid Switching

A hybrid switch supports both cut-through and store-and-forward switching, selecting the switching method based upon monitoring the error rate encountered by reading the CRC at the end of each frame and comparing its value to a computed CRC performed "on-the-fly" on the fields protected by the CRC. Initially, the switch might set each port to a cut-through mode of operation. If too many bad frames are noted as occurring on the port, the switch will automatically set the frame processing mode to store-and-forward, permitting the CRC comparison to be performed prior to the frame being forwarded. This permits frames in error to be discarded without having them pass through the switch. Because the "switch," no pun intended, between cut-through and store-and-forward modes of operation occurs adaptively, another term used to reference the operation of this type of switch is "adaptive."
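The adaptive mode selection described above can be sketched as follows. The 5 percent threshold and 100-frame measurement window are illustrative values, not figures from any particular product:

```python
# Adaptive (hybrid) mode selection sketch: track the recent frame error rate
# on a port and fall back to store-and-forward when it crosses a threshold.
class AdaptivePort:
    def __init__(self, threshold=0.05, window=100):
        self.mode = "cut-through"            # start in the low-latency mode
        self.errors = 0
        self.frames = 0
        self.threshold, self.window = threshold, window

    def observe(self, crc_ok: bool):
        self.frames += 1
        self.errors += 0 if crc_ok else 1
        if self.frames >= self.window:       # end of a measurement window
            rate = self.errors / self.frames
            self.mode = "store-and-forward" if rate > self.threshold else "cut-through"
            self.errors = self.frames = 0    # start a new window

port = AdaptivePort(threshold=0.05, window=10)
for ok in [True] * 9 + [False]:              # one bad frame in ten = 10% errors
    port.observe(ok)
assert port.mode == "store-and-forward"      # 10% exceeds the 5% threshold
```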

The major advantages of a hybrid switch are that it provides minimal latency when error rates are low and discards frames by adapting to a store-and-forward switching method so it can discard errored frames when the frame error rate rises. From an economic perspective, the hybrid switch can logically be expected to cost more than a cut-through or store- and-forward switch because its software development effort is more comprehensive. However, due to the competitive market for communications products, upon occasion its price may be reduced below competitive switch technologies.

In addition to being categorized by their switching technique, layer 2 LAN switches can be classified by their support of single or multiple addresses per port. The former method is referred to as port-based switching, while the latter switching method is referred to as segment-based switching.

11.2.3.4 Port-Based Switching

A layer 2 LAN switch that performs port-based switching only supports a single address per port. This restricts switching to one device per port; however, it results in a minimum amount of memory in the switch as well as provides for a relatively fast table lookup when the switch uses a destination address in a frame to obtain the port for initiating a cross-connect.

Figure 11.11 illustrates an example of the use of a port-based switching hub. In this example, M user workstations use the switch to contend for the resources of N servers. If M > N, then a switch connected to Ethernet 10-Mbps LANs can support a maximum throughput of N/2 * 10 Mbps, because up to N/2 simultaneous client/server frame flows can occur through the switch.

Figure 11.11: Port-Based Switching

It is important to compare the maximum potential throughput through a switch to its rated backplane speed. If the maximum potential throughput is less than the rated backplane speed, the switch will not cause delays based upon the traffic being routed through the device. For example, consider a 64-port switch that has a backplane speed of 400 Mbps. If the maximum port rate is 10 Mbps, then the maximum throughput, assuming 32 active cross-connections were simultaneously established, becomes 320 Mbps. In this example, the switch has a backplane transfer capability sufficient to handle the worst-case data transfer scenario. Now let us assume that the maximum backplane data transfer capability is 200 Mbps. This would reduce the maximum number of simultaneous cross-connections capable of being serviced to 20 instead of 32 and adversely affect switch performance under certain operational conditions. However, as previously noted, it is also important to consider the types of devices to be connected to a switch. Simply dividing the number of ports by 2 and multiplying by the port data rate provides a worst-case backplane speed that does not reflect actual traffic patterns. That is, if your organization operates network devices in a client/server environment and you are connecting 20 workstations and four servers on a 24-port switch, then you can expect a maximum of four cross-connections, not 12. This makes a considerable difference when selecting a backplane speed to prevent blockage.

Because a port-based switching hub only has to store one address per port, search times are minimized. When combined with a cross-point or cut-through switching technique, this type of switch results in minimal latency, including the overhead of the switch in determining the destination port of a frame.

11.2.3.5 Segment-Based Switching

A segment-based switching technique requires a layer 2 LAN switch to support multiple addresses per port. Through the use of this type of switch, you achieve additional networking flexibility because you can connect other hubs to a single segment-based switch port.

Figure 11.12 illustrates an example of the use of a segment-based switching hub in an Ethernet environment. Two segments in the form of conventional hubs with multiple devices connected to each hub are shown in the lower portion of Figure 11.12. However, note that a segment can consist of a single device, resulting in the connection of one device to a port on a segment switch being similar to a connection on a port-based switch. However, unlike a port-based switch that is limited to supporting one address per port, the segment-based switch can, if necessary, support multiple devices connected to a port. Thus, the two servers connected to the switch at the top of Figure 11.12 could, if desired, be placed on a conventional hub or a high-speed hub, such as a 100BASE-T hub, which in turn would be connected to a single port on a segment-based switch.

Figure 11.12: Segment-Based Switching

In Figure 11.12, each conventional hub acts as a repeater, and forwards every frame transmitted on that hub to the switch, regardless of whether or not the frame requires the resources of the switch. The segment switch examines the destination address of each frame against addresses in its lookup table, only forwarding those frames that warrant being forwarded. Otherwise, frames are discarded as they are local to the conventional hub. Through the use of a segment-based switch, you can maintain the use of local servers with respect to existing LAN segments as well as install servers whose access is common to all network segments. The latter is illustrated in Figure 11.12 by the connection of two common servers shown at the top of the switch. If you obtain a store-and-forward segment switch that supports filtering, you could control access to common servers from individual workstations or by workstations on a particular segment. In addition, you can also use the filtering capability of a store-and-forward segment-based switch to control access from workstations located on one segment to workstations or servers located on another segment.
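The forwarding decision of a segment-based switch differs from the port-based case in that each port may have many addresses behind it. A minimal sketch, with illustrative names:

```python
# Segment-based forwarding sketch: each port is associated with the set of
# addresses on its attached segment. Frames whose destination lies on the
# arrival segment are discarded as purely local traffic.
def segment_forward(dst_mac, port_addresses, in_port):
    if dst_mac in port_addresses[in_port]:
        return None                          # local to the segment: discard
    for port, addrs in port_addresses.items():
        if dst_mac in addrs:
            return port                      # forward across the switch
    return "flood"                           # unknown destination

segments = {1: {"A1", "A2", "A3"},           # hub segment on port 1
            2: {"B1", "B2"},                 # hub segment on port 2
            3: {"SRV1"}}                     # single server on port 3
assert segment_forward("A2", segments, in_port=1) is None   # stays on the hub
assert segment_forward("SRV1", segments, in_port=1) == 3    # to the server port
```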

Now that we have an appreciation for the general operation and utilization of layer 2 LAN switches, let us examine how they are used in a networking environment.

11.2.4 Switch Operations

Although the features incorporated into an Ethernet switch differ considerably between vendors as well as within vendor product lines, we can categorize this communications device by the operating rate of the ports it supports. Doing so results in five basic types of Ethernet switches, which are listed in Table 11.1. Switches that are restricted to operating at a relatively low data rate are commonly used for departmental operations, while switches that support mixed data rates are commonly used at a higher layer in a tiered network structure than switches that operate at a low uniform data rate. Concerning the latter, when used in a tiered network structure, the lower uniform operating rate switch is commonly used at the lower level in the tier.

Table 11.1: Types of Ethernet Switches Based on Port Operating Rates
  • All ports operate at 10 Mbps

  • Mixed 10/100 Mbps port operation

  • All ports operate at 100 Mbps

  • Mixed 10/100/1000-Mbps port operation

  • All ports operate at 1000 Mbps

11.2.4.1 Stand-Alone Usage

The basic use of a stand-alone switch is to support a workgroup that requires additional bandwidth beyond that available on a shared bandwidth LAN. Figure 11.13 illustrates the use of a switch to support a workgroup or small organizational department. As a workgroup expands, or as several workgroups are grouped together to form a department, most organizations will want to consider the use of a two-tiered switching network. The first or lower-level tier would represent switches dedicated to supporting a specific workgroup, including local servers. The upper tier would include one or more switches used to interconnect workgroup switches as well as provide workgroup users with access to departmental servers whose access crosses workgroup boundaries. Because the upper-tier switch or switches are used to interconnect workgroup switches, they are commonly referred to as backbone switches.

Figure 11.13: Support a Small Department or Workgroup

11.2.4.2 Multi-Tier Network Construction

Figure 11.14 illustrates the generic use of a two-tiered Ethernet switch-based network. The switch at the higher tier functions as a backbone connectivity mechanism, which enables access to shared servers, commonly known as global servers, by users across departmental boundaries. Switches in the lower tier facilitate access to servers shared within a specific department. This hierarchical networking structure is commonly used with a higher-speed Ethernet switch such as a Fast Ethernet or Gigabit Ethernet switch, or with other types of backbone switches, such as FDDI and ATM, as well as with other types of lower-tier switches.

Figure 11.14: Generic Construction of a Two-Tiered Ethernet Switch-Based Network

One common variation associated with the use of a tiered switch-based network is the placement of both departmental and global servers on an upper-tier switch. This placement allows all servers to be co-located in a common area for ease of access and control, and is commonly referred to as a server farm. However, if an upper-tier switch should fail, access to all servers could be affected, representing a significant disadvantage of this design. A second major disadvantage is that all traffic must be routed through at least two switches when a server farm is constructed. In comparison, when servers used primarily by departmental employees are connected to the switch serving departmental users, most traffic remains local to the switch at the bottom of the tier.

With the introduction of Gigabit Ethernet switches, it becomes possible to use this type of switch either in a multi-tier architecture as previously shown in Figure 11.14 or as a star-based backbone. Concerning the latter, Figure 11.15 illustrates the potential use of a Gigabit Ethernet switch that supports a mixture of 100-Mbps and 1-Gbps ports. In this example, the Gigabit Ethernet switch is shown supporting two fat pipes or trunk groups, one consisting of four 100-Mbps ports and the other consisting of two 100-Mbps ports. Here, the term "fat pipe" refers to a group of ports that operate as a single entity to provide a higher level of throughput. When we review layer 2 LAN switch features later in this chapter, we also discuss fat pipes in more detail.

Figure 11.15: Using a Gigabit Ethernet Switch as a Star-Based Backbone Switch

In examining Figure 11.15, note that enterprise servers are connected to the Gigabit switch, while department servers are connected to 100-Mbps Fast Ethernet hubs. By connecting 10BASE-T switching hubs to Fast Ethernet hubs, you could extend the star into a star-tiered network structure.
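The capacity gain from a fat pipe follows directly from its definition: the member ports act as one logical link whose throughput is the sum of the individual port rates. The following sketch (the function name and group labels are illustrative, not from the text) computes the aggregate capacity of the two trunk groups described for Figure 11.15:

```python
# Illustrative sketch: aggregate capacity of "fat pipe" trunk groups.
# Port speeds are in Mbps; the group names are hypothetical.

def trunk_capacity_mbps(port_speeds):
    """A trunk group's member ports operate as one entity,
    so its capacity is the sum of the member port rates."""
    return sum(port_speeds)

# The two trunk groups described in the text for Figure 11.15:
group_a = [100] * 4   # four 100-Mbps ports
group_b = [100] * 2   # two 100-Mbps ports

print(trunk_capacity_mbps(group_a))  # 400 Mbps
print(trunk_capacity_mbps(group_b))  # 200 Mbps
```

Note that this is an upper bound: actual throughput also depends on how traffic is distributed across the member links.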

11.2.5 Switch Components

Regardless of the operating rate of each port on an Ethernet switch, most devices are designed in a similar manner. That is, most switches consist of a chassis into which a variety of cards are inserted, similar in many respects to the installation of cards into the system expansion slots of personal computers. Modular Ethernet switches that are scalable commonly support CPU, logic, matrix, and port cards.

11.2.5.1 CPU Card

The CPU card commonly manages the switch, identifies the types of LANs attached to switch ports, and performs self-tests and directed switch tests.

11.2.5.2 Logic Module

The logic module is commonly responsible for comparing the destination address of frames read on a port against a table of addresses it is responsible for maintaining. It is also responsible for instructing the matrix module to initiate a cross-connection once the comparison of addresses results in the selection of a destination port.
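The logic module's decision can be sketched in a few lines. This is a minimal, hypothetical illustration (the function and table names are not from the text): a match against the learned address table yields a single destination port for the matrix module to connect; no match causes the frame to be flooded to all ports other than the one it arrived on, as described earlier for bridges.

```python
# Hypothetical sketch of a logic module's forwarding decision.
# mac_table maps a learned MAC address to the port it resides on.

def forward_decision(mac_table, dest_mac, ingress_port, all_ports):
    egress = mac_table.get(dest_mac)
    if egress == ingress_port:
        # Destination resides on the ingress port: filter (do not forward).
        return []
    if egress is not None:
        # Known destination: request one cross-connection from the matrix.
        return [egress]
    # Unknown destination: flood to every port except the ingress port.
    return [p for p in all_ports if p != ingress_port]

table = {"00:aa": 1, "00:bb": 2}
print(forward_decision(table, "00:bb", 1, [1, 2, 3, 4]))  # [2]
print(forward_decision(table, "00:cc", 1, [1, 2, 3, 4]))  # [2, 3, 4]
```

In a real switch this lookup occurs in hardware at wire speed, but the decision logic is the same.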

11.2.5.3 Matrix Module

The matrix module of a switch can be considered to represent a crossbar of wires from each port to each port as illustrated in Figure 11.16. Upon receipt of an instruction from a logic module, the matrix module initiates a cross-connection between the source and destination port for the duration of the frame.

Figure 11.16: The Key to the Operation of a Switch Is a Matrix Module that Enables Each Port to be Cross-Connected to Other Ports
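The matrix module's behavior can be modeled as a set of port-to-port cross-connections that exist only while a frame transits the switch. The class below is a simplified, hypothetical sketch of that idea (real matrix modules are hardware fabrics, not software objects); note that independent port pairs can be connected simultaneously, which is what allows a switch to carry multiple frames at once.

```python
# Hypothetical sketch of a matrix module as a crossbar of cross-connections.

class CrossbarMatrix:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.connections = {}          # ingress port -> egress port

    def connect(self, src, dst):
        """Establish the cross-connection requested by the logic module."""
        self.connections[src] = dst

    def release(self, src):
        """Tear down the connection once the frame has passed through."""
        self.connections.pop(src, None)

matrix = CrossbarMatrix(num_ports=8)
matrix.connect(1, 5)          # a frame flows from port 1 to port 5
matrix.connect(2, 7)          # a second frame flows at the same time
print(matrix.connections)     # {1: 5, 2: 7}
matrix.release(1)             # the first frame has been forwarded
print(matrix.connections)     # {2: 7}
```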

11.2.5.4 Port Module

The port module can be considered to represent a cluster of physical interfaces to which either individual stations or network segments are connected, based upon whether the switch supports single or multiple MAC addresses per port. Some port modules permit a mixture of port cards to be inserted, allowing, for example, 10- and 100-Mbps as well as full-duplex connections to be supported. In comparison, other port modules are only capable of supporting one type of LAN connection. In addition, there are significant differences between vendor port modules concerning the number of ports supported. Some modules are limited to two or four ports, while others may support six, eight, or ten ports. It should be noted that many switches support other types of LAN port modules, such as Token Ring, FDDI, and even ATM.

11.2.5.5 Redundancy

In addition to the previously mentioned modules, most switches also support single and redundant power supply modules and may also support redundant matrix and logic modules. Figure 11.17 illustrates a typical Ethernet modular switch chassis showing the installation of 11 modules, including five 8-port modules that form a 40-port switch.

Figure 11.17: A Typical Ethernet Modular Switch Chassis Containing a Mixture of Port, CPU, Logic, Matrix, and Power Cards

Now that we have an appreciation of the general operation and utilization of layer 2 LAN switches, let us examine those features that define their ability to provide different levels of operational capability. Once this has been accomplished, we turn our attention to Ethernet and Token Ring networking techniques using different types of switches.

Enhancing LAN Performance
ISBN: 0849319420
Year: 2003
Pages: 111
Authors: Gilbert Held