Enter the Switch

Today, Layer 1 repeaters and hubs and Layer 3 routers are used extensively in networking, but we do not hear much about Layer 2 bridges. The reason is that, to a large extent, bridges have been replaced. Remember that a bridge reads the entire frame, checks it for accuracy, and then forwards or drops it depending on its address tables. Also remember that this takes time, and in highly populated networks a store and forward device like the bridge could actually become a bottleneck. So a couple of engineering types reasoned that to perform a bridging function it was only necessary to read the source and destination addresses from the beginning of the frame.

Based on this, a decision to drop or forward could be made and the address tables could be populated. Well, the cost of high-speed memory and processors had dropped so much that, rather than apply this approach to existing bridges and gain an incremental speed advantage, it was decided the time was right to come out with a whole new class of device.
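
To make that concrete, here is a rough sketch, in Python rather than anything you would find in a Cisco box, of the learn-and-forward decision just described. The port numbers and the table layout are purely illustrative assumptions.

# Illustrative sketch (not Cisco code): the learn-and-forward decision a
# bridge or switch makes using only the source and destination MAC addresses.

class LearningBridge:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn: the source address tells us which port that station lives on.
        self.mac_table[src_mac] = in_port

        out_port = self.mac_table.get(dst_mac)
        if out_port is None:
            return "flood"          # unknown destination: send out all other ports
        if out_port == in_port:
            return "drop"           # destination is on the same segment: filter it
        return f"forward to port {out_port}"

bridge = LearningBridge()
print(bridge.handle_frame(1, "AA:AA:AA:AA:AA:AA", "BB:BB:BB:BB:BB:BB"))  # flood
print(bridge.handle_frame(2, "BB:BB:BB:BB:BB:BB", "AA:AA:AA:AA:AA:AA"))  # forward to port 1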

The new device read the source and destination addresses of the frame, populated its bridging tables, and either passed or dropped the frame based on this information. Sounds just like a bridge, doesn't it? Well, here is where the plot thickens. Instead of connecting just two segments, the new device connected six or more. Forwarding a frame entailed building a virtual circuit between two segments; once the frame was passed, the circuit could be used to connect two other segments. Now, if you had six segments, you would only need to run three virtual circuits at a time to connect all of the segments. Or, if you had only one virtual circuit, you could switch it between segments at three times LAN speed and accomplish the same thing, namely a non-blocking switch. (See why the high-speed processors and memory were needed?)

The term non-blocking means a device has the capacity to forward frames between all possible segments at the same time. In short, if there is a bottleneck in the network, it won't be the switch. Each port connecting to a segment maintained its own bridging table, and when a frame had to be forwarded to another segment, it was moved to what is often called the switching matrix. The switching matrix established a virtual circuit between the two segments for just enough time for the frame to transit the matrix to the destination segment. The device was called a Layer 2 switch, and its meteoric rise in popularity was exceeded only by the bridge's plummet.
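
Here is the back-of-the-envelope arithmetic behind that claim, sketched in Python. The 10Mbps figure is just an assumption standing in for classic Ethernet; any LAN speed works the same way.

# Back-of-the-envelope sketch of the "non-blocking" arithmetic described above.
# The 10 Mbps LAN speed is an assumption (classic Ethernet) made for illustration.

def nonblocking_requirements(num_segments, lan_speed_mbps=10):
    # Worst case: every segment is talking to exactly one other segment,
    # so the matrix must carry num_segments // 2 conversations at once.
    simultaneous_circuits = num_segments // 2
    # Equivalently, a single time-shared circuit would have to run this fast
    # to move the same traffic without becoming a bottleneck.
    aggregate_mbps = simultaneous_circuits * lan_speed_mbps
    return simultaneous_circuits, aggregate_mbps

circuits, aggregate = nonblocking_requirements(6)
print(circuits, aggregate)   # 3 circuits at once, or one circuit at 30 Mbps (3x LAN speed)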

But was there a downside to the capacity of the Layer 2 switch? At first the answer appeared to be no, but then it was noticed that when a switched network began to reach capacity, its operation became strangely erratic. It turned out the erratic behavior was linked to the very thing that gave the Layer 2 switch its awesome capacity and speed.

The bridge read the entire frame and checked the CRC to ensure the frame was complete and accurate before passing it on. The switch, however, only read the source and destination addresses from the beginning of the frame. So only the first few bytes of the frame had to survive a collision for it to be passed on to another segment. This did not become a problem until a segment approached saturation. As the number of collisions rose, the switch would begin passing a significant number of damaged frames. In essence, the switch was creating a different collision domain every time it set up a virtual circuit, and those collision domains were changing so fast that the problem was almost impossible to diagnose.
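
The following sketch, again illustrative Python rather than real switch code, shows why a frame damaged after its header sails right through a cut-through device but is caught by a store and forward one. The simplified frame layout and the fcs_ok flag are assumptions made for the example.

# Sketch of why cut-through switching can propagate damaged frames. The frame
# here is simplified to a dict with addresses, a payload, and a flag saying
# whether the trailing FCS/CRC would still check out after a collision.

def cut_through_forward(frame):
    # Decision is made as soon as the destination and source addresses arrive;
    # the rest of the frame, including the FCS, is never examined.
    return frame["dst"] is not None          # forward based on the header alone

def store_and_forward(frame):
    # The whole frame is buffered and the FCS is verified before forwarding.
    return frame["fcs_ok"]

collision_damaged = {"dst": "BB:BB:BB:BB:BB:BB", "src": "AA:AA:AA:AA:AA:AA",
                     "payload": b"...garbled by a collision...", "fcs_ok": False}

print(cut_through_forward(collision_damaged))   # True  -> damaged frame is passed on
print(store_and_forward(collision_damaged))     # False -> damaged frame is dropped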

Most switches were put in environments that would never get close to saturation, and in these situations they performed beautifully. Unfortunately, the erratic performance usually occurred in large, complex networks, and it became enough of a problem to prompt a redesign of the switch.

The first generation of switches became known as cut-through switches. The second generation of switches implemented a full store and forward operation identical to the original bridges and accordingly was called a store and forward switch. However, the store and forward technology came at the expense of speed and price. So you could have a fast, cheap switch that would work fine in some situations or a slow, expensive switch that would work in all situations. Seems like a compromise was needed, doesn't it? The compromise, which was developed by Cisco, is called a fragment-free switch. A fragment-free switch reads the first 64 bytes of the frame. Sixty-four bytes is the minimum legal Ethernet frame size, so a frame truncated by a normal collision (a runt) is shorter than that and never gets forwarded. Statistically, there is over a 90% chance the remainder of the frame is intact if the first 64 bytes are. Cisco takes the best of all three approaches by offering switches that can be configured to operate in any of the three modes.
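
One way to keep the three modes straight is by how much of the frame each one waits for before making its forwarding decision. The sketch below is illustrative Python, not a Cisco configuration; the bytes_needed helper is hypothetical, and the byte counts come from the standard Ethernet frame format (12 bytes of addresses, 64-byte minimum, 1518-byte maximum).

# Sketch comparing the three switching modes by how much of the frame each one
# waits for before deciding to forward.

ETHERNET_MIN_FRAME = 64   # collision fragments ("runts") are shorter than this

def bytes_needed(mode, frame_length):
    if mode == "cut-through":
        return 12                                      # destination + source MAC addresses only
    if mode == "fragment-free":
        return min(ETHERNET_MIN_FRAME, frame_length)   # enough to rule out runts
    if mode == "store-and-forward":
        return frame_length                            # whole frame must arrive (this mode also verifies the FCS)
    raise ValueError(mode)

for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(f"{mode:>18}: waits for {bytes_needed(mode, 1518)} bytes of a 1518-byte frame")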

Note

Everybody knows a switch is a Layer 2 device. A switch works with MAC addresses, so the only thing it could be is a Layer 2 device. However, Cisco has a very popular line of products that apply the fast switching matrix of Layer 2 to the routing of packets at Layer 3. Do you know what Cisco calls those devices? You guessed it, "switches"!

Terms are always evolving at Cisco. The term "switch," in this case, is moving from the generic name of a device to the description of a technology. So be careful of the assumptions you bring with you to the test. Your assumptions are probably not wrong, but they may be very different from the way Cisco sees the world. Enough said?



