Glossary

802.1D
See [Spanning Tree Protocol]
802.1Q

802.1Q is an IEEE trunking mechanism that is an open standard. Both ISL and 802.1Q add VLAN information to Ethernet frames explicitly. However, the way in which they perform this process is different. With ISL, a 26-byte header and a 4-byte trailer are added to the frame: the original frame is not modified. This process is referred to as encapsulation. With 802.1Q, the actual frame is modified, or tagged. To denote VLAN information, a 4-byte tag, consisting of a 2-byte Tag Protocol Identifier (TPID) and a 2-byte Tag Control Information (TCI) field, is inserted between existing fields in the Ethernet frame.
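The tag insertion described above can be sketched in Python (an illustrative model, not switch-resident code; the 4-byte tag layout of TPID 0x8100 plus TCI follows the standard 802.1Q format):

```python
# Illustrative sketch: inserting an 802.1Q tag into an untagged Ethernet
# frame. The tag is 4 bytes -- a 2-byte TPID (0x8100) plus a 2-byte TCI
# (3-bit priority, 1-bit CFI, 12-bit VLAN ID) -- placed between the
# source MAC and the EtherType field.
import struct

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # CFI bit left at 0
    tag = struct.pack("!HH", 0x8100, tci)
    # Bytes 0-5: destination MAC, 6-11: source MAC; tag goes at offset 12.
    return frame[:12] + tag + frame[12:]

untagged = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + b"payload"
tagged = tag_frame(untagged, vlan_id=100)
assert tagged[12:14] == b"\x81\x00"               # TPID
assert struct.unpack("!H", tagged[14:16])[0] & 0x0FFF == 100
```

The 802.1p priority occupies the top 3 bits of the TCI; the VLAN ID occupies the bottom 12 bits.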



802.1Q Tunneling

Q-in-Q tunneling, proprietary to Cisco, is commonly referred to as tag stacking. When you send tagged VLAN traffic into a service provider's network, the service provider's switches add their own VLAN tag to isolate your traffic from other customers' traffic. This is accomplished by inserting another 802.1Q tag (the service provider's) into your 802.1Q-tagged frame. In fact, all of your traffic can be tagged, including BPDUs and CDP frames, making the service provider's network appear completely transparent.



802.1W
See [Rapid STP]
802.1X

IEEE's 802.1X standard defines how to authenticate and control port access. A switch's port state (with 802.1X enabled) is initially an unauthorized state. The switch allows only Extensible Authentication Protocol over LAN (EAPOL) traffic through the port until the user has been authenticated; 802.1X uses EAPOL to perform the authentication. When the user is authenticated, all of the user's traffic is permitted. If the user doesn't support the 802.1X protocol, the port remains in an unauthorized state.



Access Layer

The access layer is one of three layers of Cisco's hierarchical design model. The access layer provides the user entry point into the switched network. It allows for the connection of different users and their servers. At this layer, you can provide shared or switched access.



Access Link

An access link is a connection that belongs to a single VLAN and is completely transparent to the users. They have no knowledge of the existence of the VLAN. However, to maintain VLAN membership across the switch fabric, the switch associates VLAN information with each frame it receives on the access link so that the switch fabric can forward the frame appropriately.



Active RP

In HSRP, the role of the active and standby RPs is based on the priority of the RPs in the HSRP group. The RP with the highest priority is elected as the active RP and the one with the second highest is elected as standby RP. If the priorities are the same, the IP address of the RP is used as a tiebreaker. In this situation, the RP with the higher IP address is elected for the role. The active RP is responsible for forwarding all traffic destined to the virtual RP's MAC address. A second RP is elected as a standby RP. The standby RP keeps tabs on the active RP by looking for HSRP multicast messages, called HSRP hellos.
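The election rule above (highest priority wins; the higher IP address breaks ties) can be modeled in a few lines of Python. This is an illustrative sketch; `elect` and its tuple inputs are hypothetical, not an IOS interface:

```python
# Sketch of the HSRP active/standby election: highest priority wins;
# on a priority tie, the RP with the higher IP address wins.
import ipaddress

def elect(routers):
    """routers: list of (priority, ip_string). Returns (active_ip, standby_ip)."""
    ranked = sorted(routers,
                    key=lambda r: (r[0], int(ipaddress.IPv4Address(r[1]))),
                    reverse=True)
    return ranked[0][1], ranked[1][1]

active, standby = elect([(100, "10.1.1.2"), (100, "10.1.1.3"), (90, "10.1.1.4")])
assert (active, standby) == ("10.1.1.3", "10.1.1.2")   # IP breaks the tie
```

Note that IP addresses are compared numerically, not as strings, which is why `ipaddress` is used for the tiebreak.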



Alternate Port

This RSTP port serves as a secondary root port in case the primary root port fails. It is in a discarding port state unless a failure of the root port or its connection occurs, in which case it is moved to a forwarding state.



Application Specific Integrated Circuit (ASIC)

ASICs are specialized processors that perform only one or a few functions very fast. One limitation of ASICs is that they aren't plug-and-play: you can't use just any ASIC for a certain task. However, because ASICs perform only a small number of tasks, their cost is much lower than that of a general-purpose processor and their speed is much faster. As an example, if you were to use a processor to switch frames between interfaces, you would get forwarding rates in the high thousands or low millions of packets per second (pps). But with a specially designed ASIC, you could get forwarding rates in the tens or hundreds of millions of pps.



Architecture for Voice, Video, and Integrated Data (AVVID)

AVVID is a process that Cisco developed to help design complex networks with multiple coexisting technologies. Cisco created this architecture to simplify the planning, designing, and implementing of networks for companies. AVVID has three main components: network infrastructure, intelligent network services, and network solutions.



Authentication, Authorization, and Accounting (AAA)

AAA centralizes authentication, authorization, and accounting functions. Authentication provides a means for identifying an individual and validating her access to a device. Authorization verifies what specific tasks a user can perform on a device. Accounting keeps a record of what a user did on a device.



BackboneFast

BackboneFast is a Cisco-proprietary enhancement to STP that provides scalability to STP on your backbone switches (core and distribution layer). BackboneFast and UplinkFast are complementary STP enhancements. One major difference between UplinkFast and BackboneFast is that UplinkFast works only for directly connected links that fail, whereas BackboneFast has the capability to detect indirect link failures; that is, failures on links not physically connected to the switch.



Backup Port

This RSTP port serves as a secondary designated port in case the primary designated port fails. It is in a discarding port state unless a failure of the designated port occurs, in which case it is moved to a forwarding state.



Blocking Port

In STP, a blocking port listens only for BPDUs from other switches; it does not forward any user frames. A port leaves this state when it doesn't detect a BPDU within the maximum age timer interval.



Bridge Identifier

Each bridge has a unique identifier that it uses when it multicasts its BPDUs. The identifier is made up of a bridge (switch) priority and one of the switch's MAC addresses.
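A sketch of how the two fields combine into a comparable identifier (Python; illustrative only, ignoring the exact on-the-wire encoding):

```python
# Sketch of a bridge identifier: priority plus one of the switch's MAC
# addresses. In STP elections, the numerically lower bridge ID wins,
# with priority compared first and the MAC acting as the tiebreaker.
def bridge_id(priority: int, mac: str) -> tuple:
    return (priority, mac.lower())

a = bridge_id(32768, "00:0A:0B:0C:0D:01")   # default priority
b = bridge_id(4096, "00:0A:0B:0C:0D:FF")    # lowered priority
assert min(a, b) == b   # lower priority wins regardless of MAC
```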



Bridge Protocol Data Unit (BPDU)

Switches periodically send out a special multicast packet, called a BPDU, that helps them to advertise themselves, their configurations, and any changes that have occurred. BPDUs help switches discover the topology of the network, including loops. If the cost of a link changes, a new switch or segment is added to the network, or an existing switch or segment fails, this information is propagated via BPDUs and will cause the switches to run the STP algorithm. This is done to remove any existing loops that those changes might have created or to ensure that there is still one active path between any two destinations.



BPDU Guard

BPDU Guard is a Cisco feature that will shut down a PortFast port if a BPDU is received on it. After the port is shut down, the status of the interface is error disabled. BPDU Guard is disabled by default.



BPDU Skewing

BPDU skewing refers to the time difference between when a switch expects to receive BPDUs and when they are actually received. BPDU skewing can occur in any of the following situations: STP topology changes occur, one of STP's timers expires, or a BPDU is not received within an expected time interval. When any of these three situations occurs, switches flood the network with BPDUs to ensure that the most up-to-date information is contained in the STP topology table.



Broadcast

When a broadcast packet is generated, everyone in the broadcast domain sees this packet and processes it. However, there's no guarantee that any or all destinations will receive the broadcast.



Centralized Switching

In a centralized switching architecture, all switching decisions are handled by a central, single forwarding table. A centralized switching device can contain both Layer 2 and Layer 3 functionality. In other words, this table can contain both Layer 2 and Layer 3 addressing and protocol information as well as access control list (ACL) and quality of service (QoS) information.



Cisco Express Forwarding (CEF)
See [Topology-Based Switching]
Cisco Group Management Protocol (CGMP)

CGMP, a Cisco-proprietary multicasting protocol, is a dynamic process that updates the switch's address table with multicast addresses, as with IGMP snooping, but without snooping's performance penalty. CGMP allows Cisco's switches to learn from Cisco's IGMP-enabled RPs about the list of end stations participating in the different multicast groups. Switches take this address information and appropriately update their CAM tables. This solution has very little overhead: only a minimal amount of management traffic is relayed between the RP and the switch.



Class-Based Weighted Fair Queuing (CB-WFQ)

CB-WFQ is an extension of WFQ. With WFQ, the IOS automatically determines what goes into the higher and lower queue structures; you have no control over the process. With CB-WFQ, you can configure up to 64 classes and control which traffic is placed in which class. Each class can be restricted to a certain amount of bandwidth on the egress interface. CB-WFQ gives you much more control over queuing prioritization on the egress interface, but requires configuration on your part. The one nice feature of WFQ is that it doesn't require any configuration on E1 or slower WAN link connections because it is already enabled and the IOS automatically performs the prioritization for you.



Coarse Wave Division Multiplexing (CWDM)

CWDM is a last-mile MAN technology and supports up to eight wavelength frequencies. It is used for short distances, such as customers located in the same building.



Common Spanning Tree (CST)

With CST, only one instance of STP runs for all the VLANs. STP runs in the default management VLAN, which is typically VLAN 1. Because only one instance of STP exists, one root switch is elected and all loops are removed.



Content Addressable Memory (CAM)

CAM is a special type of high-speed memory used in transparent bridges to store source MAC address and port identifier information. The term is still used today even though switches use some form of dynamic RAM.



Core Layer

The core layer is one of three layers of Cisco's hierarchical design model. The function of the core layer is to offer an extremely high-speed Layer 2 switching backbone between different distribution layers to provide packet switching that is as fast as possible.



Custom Queuing (CQ)

CQ has 16 queues. The same classification techniques used in priority queuing (PQ) are used to place packets into one of the 16 queues. The main difference between PQ and CQ is that priority queuing guarantees only that the high queue will be processed, whereas CQ guarantees that every queue will be processed. Queues are processed in a round-robin fashion. To give preference to one queue over another, you specify the amount of traffic that is allowed to be processed from a queue.
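The round-robin service with per-queue preference can be sketched as follows (Python; the byte-count budget is a simplifying assumption used for illustration, mirroring CQ's configurable byte counts):

```python
# Sketch of CQ's round-robin dispatch: each pass, a queue may send
# packets until its byte budget for that pass is spent, then the next
# queue is serviced. Larger budgets give a queue preference.
from collections import deque

def custom_queuing(queues, budgets):
    """queues: list of deques of packet sizes; budgets: bytes per pass."""
    order = []
    while any(queues):
        for q, budget in zip(queues, budgets):
            sent = 0
            while q and sent < budget:
                pkt = q.popleft()
                sent += pkt
                order.append(pkt)
    return order

q1 = deque([1500, 1500]); q2 = deque([500])
assert custom_queuing([q1, q2], budgets=[1500, 1500]) == [1500, 500, 1500]
```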



Dense Mode (DM)

DM multicast routing protocols assume that there are many multicast end stations spread across most of the segments in your campus network and that your network infrastructure has a lot of available bandwidth. This means that most, if not all, of your RPs must be forwarding multicast traffic from the multicast servers to the multicast end stations. DM protocols initially flood the network with multicast traffic and then, based on the discovery of participating end stations, prune back the distribution tree to include only those segments with participating end stations. The RPs use IGMP to discover the end stations.



Dense Wave Division Multiplexing (DWDM)

DWDM supports multiple wavelength frequencies on a single strand of fiber (up to 200). It supports very high data rates (Gbps). One advantage it has over SONET is that SONET uses TDM, which wastes bandwidth.



Designated Port

After the root ports for each bridge have been determined by running the STP algorithm, designated bridges and designated ports are resolved. Each LAN segment has a designated switch, which has the lowest accumulated path cost to the root switch. All frames that are forwarded to that particular segment go through the designated switch via its designated port, and no other ports. If two or more switches have the same path cost to the root switch for a given segment, the bridge with the lower switch identifier will be chosen as the designated switch. Through the process of elimination, eventually only one switch will remain that has a designated port for each LAN segment.
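The designated-switch tie-breaking described above reduces to an ordered comparison, sketched here in Python (illustrative; real BPDU comparison also considers sender port IDs):

```python
# Sketch of designated-switch selection for one LAN segment: the lowest
# accumulated path cost to the root wins; the lower bridge ID breaks ties.
def designated(candidates):
    """candidates: list of (path_cost, bridge_id, name); tuple order
    makes min() apply cost first, then bridge ID."""
    return min(candidates)[2]

segment = [(19, 32768, "SwitchA"), (19, 4096, "SwitchB"), (38, 1, "SwitchC")]
assert designated(segment) == "SwitchB"   # tie on cost 19; lower ID wins
```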



Designated Switch
See [Designated Port]
DiffServ

DiffServ uses a multiple-service model to implement QoS. With DiffServ, applications do not signal their QoS requirements before sending their data. Instead, DiffServ is implemented within your network infrastructure and groups related traffic types together, marking them with classification information. This provides an advantage over IntServ because you don't need to modify any end stations.



Directed VLAN Services (DVS)

With DVS, edge switches connect to the MAN carrier via a trunk link. From the edge switches' perspective, they know that they are connecting to a service provider switch and are setting up a trunk connection to the carrier's switch, typically with 802.1Q. Connections by the carrier can be set up as either point-to-point or multipoint.



Distributed Switching

In a distributed switching architecture, switching decisions are decentralized. As a simple example, a 6500 switch has each port (or module) make its own switching decision for inbound frames while a main processor or ASIC handles routing functions and ensures that each port has the most up-to-date switching table. One advantage of the distributed implementation approach is that by having each port or module make its own switching decision, you're placing less of a burden on your main CPU or forwarding ASIC because you're distributing the processing across multiple ASICs. In this case, a separate forwarding engine (ASIC) is used for each port and each port has its own small switching table. With this approach, you can achieve much greater speeds than a switch that uses centralized forwarding: switching rates of more than 100Mpps.



Distribution Layer

The distribution layer is one of three layers of Cisco's hierarchical design model. The distribution layer is the demarcation point between the core and the access layers of a campus network. The distribution layer switches should perform all Layer 3 and policy functions. These include the following tasks: connecting to access switches to provide workgroup and department access; implementing VLANs to handle broadcast issues; routing between VLANs; designing addressing and address summarization; enforcing security policies; translating between different media types such as FDDI, Ethernet, and token ring.



Distribution Tree

To forward multicast traffic intelligently, RPs must be able to build a distribution tree. A distribution tree is somewhat similar to the spanning tree used by switches to remove Layer 2 loops. Using a distribution tree, RPs can ensure that a multicast frame traverses a segment only once in the network. This minimizes the bandwidth impact, which is accomplished by making sure that there's one and only one path from the source of the multicast traffic to each of the end stations that wants to see it.



Dynamic Trunking Protocol (DTP)

DTP is a Cisco-proprietary protocol that automatically negotiates whether trunking can be performed on a connection. DTP supports automatic negotiation of both ISL and 802.1Q on trunk-capable links.



Dynamic VLANs

Dynamic VLANs require you to assign a user to a VLAN once, and switches dynamically use this information to configure the port on the switch automatically. Dynamic VLANs can be based on the following items: the MAC addresses of workstations, the Layer 3 addresses (such as IP addresses), the protocol type (such as IP or IPX), or directory information stored in Novell's NDS or Microsoft's Active Directory.



Enterprise Campus

The Enterprise Campus provides the three-layer hierarchical campus model, but doesn't include remote or Internet connections (these are in the Enterprise Edge). Within the Enterprise Campus model, you'll find the following sub-modules: Campus Infrastructure, Network Management, Server Farms, and Edge Distribution.



Enterprise Edge

The Enterprise Edge controls traffic between the Service Provider Edge and the Enterprise Campus. The Enterprise Edge contains four sub-modules: E-commerce, Internet Connectivity, Remote Access and VPNs, and WAN Access.



Enterprise Model

One of the limitations of the three-layer hierarchical model is that it covers only a single campus design. Cisco has expanded on this and created the Enterprise Composite Network Model (ECNM), which breaks up a network into three functional areas: Enterprise Campus, Enterprise Edge, and Service Provider Edge. The main purpose of the ECNM is to define clear boundaries, or demarcation points, between different modules, or areas, of your network.



EtherChannel

EtherChannel is a technology that allows you to bundle up to eight Fast Ethernet or Gigabit Ethernet connections, providing up to 1,600Mbps or 16Gbps of bandwidth in full-duplex mode. The channel is treated as one logical connection between two switches. Even if one of the connections in the EtherChannel fails, the other connection(s) still operate properly.



Ethernet over MPLS (EoMPLS)

EoMPLS extends MPLS by tunneling Layer 2 Ethernet frames across a service provider's Layer 3 core. EoMPLS scales better than Q-in-Q because the core is Layer 3, and Layer 2 information, including STP, can still be tunneled through the service provider.



First-In First-Out (FIFO) Queuing

FIFO queuing doesn't provide any type of QoS: the first packet or frame received is the first one queued. Traffic is not associated with any class; instead, priority is defined by when the packet comes into an interface. The default queuing method on Cisco Catalyst switches is FIFO queuing, which performs queuing in hardware.



Forwarding Port

After finally completing the learning state in STP, a port is placed into a forwarding state, in which the bridge performs its normal functions: it learns source MAC addresses and updates the switch's CAM table, as well as forwards user frames through the switch itself.



Gateway Load Balancing Protocol (GLBP)

GLBP is a Cisco-proprietary protocol, like HSRP. One of the limitations of HSRP and VRRP is that only one router in the group is active and can forward traffic for the group; the rest of the routers sit idle. GLBP allows the dynamic assignment of a group of virtual addresses to end stations. With GLBP, up to four RPs in the group can participate in the forwarding of traffic. In addition, if a GLBP RP fails, fault detection occurs automatically and another GLBP RP will pick up the forwarding of packets for the failed RP.
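The load-sharing idea can be sketched as follows (Python; a simplified model of round-robin ARP-reply behavior, not actual GLBP code, which also supports other balancing methods):

```python
# Sketch of GLBP-style load balancing: the group answers ARP requests
# for its one virtual IP with the virtual MAC addresses of up to four
# forwarders, handed out round-robin, so hosts spread across the RPs.
from itertools import cycle

def glbp_arp_responder(forwarder_macs):
    rr = cycle(forwarder_macs)
    return lambda: next(rr)   # each ARP request gets the next MAC

reply = glbp_arp_responder(["0007.b400.0101", "0007.b400.0102"])
assert [reply(), reply(), reply()] == [
    "0007.b400.0101", "0007.b400.0102", "0007.b400.0101"]
```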



Hot Standby Routing Protocol (HSRP)

HSRP is a Cisco-proprietary protocol that provides Layer 3 redundancy to overcome the issues of IRDP, Proxy ARP, end station routing protocols, and a single definition of a default gateway on the end station. Unlike the four previous solutions, HSRP is completely transparent to the end stations: you do not have to perform any additional configuration on the end stations themselves. By establishing HSRP groups, HSRP allows Cisco RPs to monitor each other's status, providing a very quick failover when a primary default gateway fails.



ICMP Router Discovery Protocol (IRDP)

IRDP extends ICMP, allowing an end station to dynamically learn the default gateways that exist in the VLAN. RPs announce themselves every 5 to 10 minutes and end stations hold this information for up to 30 minutes. The main problem with IRDP is that if the primary RP fails, it might take up to 30 minutes before an end station starts using a different RP.



Internal STP (IST)

IST is an internal STP process running on an MST switch. IST is used to handle interaction between MST and CST switches. Because 802.1Q is an IEEE standard, MST must be backward compatible with switches that support only CST. IST is used to implement this functionality and interact with CST switches. IST essentially treats the entire MST region as a virtual bridge when interacting with CST switches.



Internet Group Management Protocol (IGMP)

IGMP provides a standardized and dynamic client registration process in which clients advertise the multicast applications they want to participate in to their connected RPs. You find two basic components in all three versions of IGMP: multicast hosts and multicast queriers. Those two components share two different types of messages: Query messages are used by the RP to discover the end stations on a segment that are participating in a multicast group. Report messages are used by end stations in response to the RP's query message to notify the RP of their participation in a multicast group. The relationship between multicast querier and host is a loose one. Hosts come and go as they please, based on the user starting or stopping a multicast application.



IGMP Snooping

In IGMP snooping, the switch dynamically keeps track of the joining and leaving by members of a multicast group. The switch does this by snooping the IGMP queries that RPs generate and the reports that multicast end stations reply with. The problem with this approach is that the switch must examine every multicast frame, which is very process-intensive and introduces a lot of latency in the switching of everyone's frames, including the multicast traffic.



InterSwitch Link (ISL)

ISL is a Cisco-proprietary technology for trunking VLANs at Layer 2. Unlike normal Ethernet NICs, ISL cards cost more because specialized ASICs and processors are included to support the framing encapsulation at gigabit speeds. ISL adds a 26-byte header and a 4-byte trailer (a CRC computed over the original Ethernet frame) for a total of 30 bytes.



IntServ

IntServ is defined in RFC 1633 and provides a guarantee for QoS for an application connection. This is different from DiffServ, which does this based on traffic classifications, not specific connections. IntServ is implemented using RSVP on all devices handling the connection, including the source and destination. RSVP uses signaling to set up the connection and to maintain QoS. When a new connection is being established, RSVP needs to determine what paths and devices are used to support the connection. The Common Open Policy Service (COPS) is used to centralize the setup and maintenance of the connection.



Layer 3 Switch

A Layer 3 switch is an enhanced router. One problem of traditional routers is that a generic processor performs most of the switching decisions. Using a generic processor allows the router to perform all tasks, but it doesn't perform all of them well. To overcome this inefficiency, Layer 3 switches use inexpensive ASICs to perform forwarding of frames. This allows Layer 3 switches to achieve very high forwarding rates and, in tandem with a generic processor, still allows the Layer 3 switch to offer many of the other features of a traditional router.



Learning Port

Upon the completion of the listening state in STP, a port moves into a learning state. In this state, a port examines user frames for source MAC addresses and places them in the switch's CAM table. Still, no user frames are forwarded through the switch.



Link Aggregation Control Protocol (LACP)

LACP is IEEE's protocol for dynamically forming EtherChannels. LACP is defined in 802.3ad and is similar to Cisco's PAgP. Like PAgP, LACP is used to interact with a remote switch to determine whether they have multiple connections between them that can be bound together into a single EtherChannel.



Listening Port

Passing from a blocking state in STP, a port enters into a listening state. In this state, a port listens for frames to detect available paths to the root switch, but does not take any source MAC addresses of end stations and place them in the CAM table. Likewise, the switch does not forward any user frames.



Loop Guard

The Loop Guard feature is similar to UDLD. Loop Guard is used to detect the loops typically caused by unidirectional connections. Loop Guard performs an additional check compared to UDLD: if BPDUs are no longer being received on a nondesignated port, instead of moving the port through the listening, learning, and forwarding states, Loop Guard places the port in a blocking state, marking it as inconsistent. One nice feature of Loop Guard, as compared to UDLD, is that when the problem is fixed, Loop Guard has the ports transition back to the correct states.



Low Latency Queuing (LLQ)

LLQ uses two forms of queuing: PQ and CB-WFQ. LLQ first checks whether the egress traffic is classified as high priority. You can reserve either a percentage of bandwidth or a block of bandwidth for the high-priority queue. If the traffic is high priority, it is processed first; otherwise, CB-WFQ is used to process the traffic.



Modular QoS CLI (MQC)

MQC is the term that Cisco uses to define the implementation of QoS on an IOS device. MQC is used to create your QoS traffic policies and then to associate these policies to the device's interface(s). Each traffic policy you create has two components: a traffic class that classifies (or groups) traffic and a traffic policy that defines how the traffic should be processed.



Multicast

When a multicast frame is generated, everyone in the broadcast domain sees the packet, but only a group of machines (those running that multicast application) process it. Multicasting is the transmission of a packet to a host group, which can contain from zero to many end stations. Like a broadcast, a multicast is sent with best-effort reliability: there's no guarantee that all the machines will see it.



Multilayer Switch

Multilayer switching combines Layer 2, Layer 3, and Layer 4 switching, all in one chassis. These switches can examine information in the transport layer segment (TCP and UDP) to help make intelligent switching decisions. To do this, a multilayer switch routes the first packet in a packet stream but switches the rest, sometimes referred to as route once, switch many.



Multiple STP (MST)

MST is an enhancement to IEEE's RSTP. MST is similar to Cisco's PVST. The main purpose of MST is to allow multiple instances of STP, but to reduce the amount of overhead associated with Cisco's PVST. Instead of having a separate instance of STP for each VLAN, MST uses a concept of an MST instance, in which multiple VLANs can be associated with an instance.



Native VLAN

802.1Q trunks support a native VLAN. The native VLAN is the VLAN whose frames are not tagged on the trunk. This is different from ISL, in which frames for all VLANs that traverse the trunk carry VLAN information. One advantage that the native VLAN provides is that you can have both 802.1Q and non-802.1Q devices on the same trunk connection.



NetFlow Switching

NetFlow switching is a Cisco-proprietary form of route caching. With NetFlow switching, the RP and ASICs work hand-in-hand. The first packet is handled by the main processor or ASIC. If the destination MAC address matches the RP's address (the Layer 3 address doesn't have to match), the processor will program its interface ASICs to process further traffic for this connection at wire speeds. The main processor will update the interface's cache with the appropriate connection information: the source and destination MAC addresses, IP addresses, and IP protocol information. This is done for each direction of a connection; in other words, the table is unidirectional. The interface ASIC will use this information to forward traffic without having to interrupt the CPU.
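A minimal model of this first-packet-slow, rest-fast caching follows (Python; the flow key and cache contents are simplified assumptions for illustration):

```python
# Sketch of NetFlow-style route caching: the first packet of a flow is
# handled by the main processor, which programs a cache entry; later
# packets in the same flow hit the cache without interrupting the CPU.
# Entries are unidirectional, so each direction gets its own entry.
cache = {}

def forward(src_ip, dst_ip, proto):
    key = (src_ip, dst_ip, proto)          # unidirectional flow key
    if key not in cache:
        cache[key] = "rewrite-info"        # slow path: CPU programs the ASIC
        return "process-switched"
    return "cache-switched"                # fast path: ASIC handles it

assert forward("10.1.1.1", "10.2.2.2", "tcp") == "process-switched"
assert forward("10.1.1.1", "10.2.2.2", "tcp") == "cache-switched"
assert forward("10.2.2.2", "10.1.1.1", "tcp") == "process-switched"  # reverse flow
```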



Network Analysis Module (NAM)

Instead of using an external network analyzer or RMON probe to analyze or gather your traffic, the Catalyst 6000 Series switches support a Network Analysis Module (NAM). A NAM is similar to an RMON probe. You can use it to gather RMON (RFC 1757) and RMON2 (RFC 2021) information. The NAM itself cannot perform analysis on the captured data. However, you can use either Cisco's TrafficDirector product or any IETF-based RMON-gathering product.



Path Costs

Each port has an associated cost, which is usually the inverse of the actual bandwidth of the port. When you're choosing ports to place into forwarding mode in STP, lower accumulated port costs of the paths to the root switch are preferred.
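Root-port selection by accumulated cost can be sketched as follows (Python; the link costs shown are the classic 802.1D values, e.g. 19 for 100Mbps and 4 for 1Gbps):

```python
# Sketch of root-port selection: accumulate the per-link costs along
# each candidate path to the root switch and prefer the lowest total.
def root_port(paths):
    """paths: dict of port -> list of link costs to the root."""
    return min(paths, key=lambda p: sum(paths[p]))

paths = {"Fa0/1": [19, 19],      # two 100Mbps hops: cost 38
         "Gi0/1": [4, 19, 4]}    # Gig-100M-Gig hops: cost 27
assert root_port(paths) == "Gi0/1"   # 27 < 38, despite the extra hop
```

This illustrates why cost is the inverse of bandwidth: a longer path over faster links can still win.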



Per-VLAN STP (PVST)

To solve the scalability and convergence problems of CST, Cisco's PVST uses a separate instance of STP per VLAN. This means that for each VLAN, you have a root, port costs, path costs, and priorities and all these can be different per VLAN. To ensure unique bridge IDs for each VLAN, Cisco switches have a pool of MAC addresses to choose from.



PVST+

PVST+ is a Cisco extension to its PVST protocol. PVST+ allows the incorporation of both IEEE's 802.1Q CST and Cisco's PVST in a switched network. One nice feature of PVST+ is that you do not have to configure anything on your switches to use it; it works automatically. It detects CST and PVST and makes the appropriate changes or adjustments.



Priority Queuing (PQ)

PQ has four queues, where each queue has a distinct priority: high, medium, normal, and low. Strict priority is enforced in this scheme. First, the high queue is emptied. After the high queue has been emptied, the IOS checks whether new packets have been added to it; if so, the high queue is processed again. Only when the IOS checks the high queue and finds it empty is the medium queue processed. Both the high and medium queues must be empty for the normal queue to be processed, and the high, medium, and normal queues must be emptied before the low queue is processed.
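The strict-priority dispatch loop described above can be sketched in Python (illustrative; real PQ also classifies arriving packets into the queues first):

```python
# Sketch of PQ's strict-priority dispatch: after every packet sent, the
# scheduler restarts at the high queue, so a lower queue is serviced
# only when every queue above it is empty.
from collections import deque

def pq_dispatch(queues):
    """queues: dict with 'high', 'medium', 'normal', 'low' deques."""
    order = []
    names = ["high", "medium", "normal", "low"]
    while any(queues[n] for n in names):
        for n in names:
            if queues[n]:
                order.append(queues[n].popleft())
                break                      # restart at the high queue
    return order

q = {"high": deque(["h1"]), "medium": deque(["m1", "m2"]),
     "normal": deque(["n1"]), "low": deque()}
assert pq_dispatch(q) == ["h1", "m1", "m2", "n1"]
```

The `break` after every dispatch is what makes the scheme strict: lower queues can starve if the high queue never empties.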



Private VLANs (PVLANs)

PVLANs provide Layer 2 isolation between devices within the same private VLAN.



Protocol Independent Multicast (PIM)

PIM is a multicast routing protocol that's currently being defined by a draft RFC. The Internet Engineering Task Force (IETF) is discussing PIM's ongoing development. PIM is unique in that it supports both dense and sparse modes, making it much more flexible than other multicast routing protocols. PIM uses IGMP to transport its routing information.



Port Aggregation Protocol (PAgP)

PAgP, a Cisco-proprietary protocol, allows the dynamic creation of EtherChannels between switches without your intervention. Using this Cisco protocol, switches send special frames out of ports capable of forming EtherChannels to discover whether neighboring switches support this feature. If so, a channel is formed between the ports if the necessary configuration conditions have been met.



PortFast

PortFast, a Cisco-proprietary STP enhancement, allows a port connected to an end device to bypass the listening and learning states and move directly into a forwarding state, thereby minimizing downtime when changes occur in a switched network. PortFast should be used only to connect to nonbridge and nonswitch devices, such as a PC, router, or file server; otherwise, you might inadvertently create Layer 2 loops.



Proxy ARP

Proxy ARP is used when an end station ARPs for a destination device's MAC address that is on a different subnet. A Cisco RP can respond to the end station with its own MAC address, making it appear that the destination is on the same segment. Proxy ARP is enabled, by default, on Cisco RPs. The main disadvantage is that if the RP fails, the end station won't discover this unless it reboots or re-ARPs.



Q-in-Q Tunneling
See [802.1Q Tunneling]
Random Early Detection (RED)

RED is a mechanism that handles congestion slightly better than tail dropping. With RED, a threshold is assigned to the queue. When this threshold is reached, traffic being placed into the queue is randomly dropped: some traffic is allowed to enter the queue, but other traffic is dropped. RED, therefore, tries to deal with congestion before the queue fills up and everything has to be dropped. However, RED has one main problem: it doesn't look at the class of traffic (CoS or IP Precedence) when dropping traffic; it just randomly drops certain packets.
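The drop decision can be sketched as follows (Python; using a single fixed threshold and drop probability is a simplifying assumption, since IOS RED/WRED actually ramps the drop probability between minimum and maximum thresholds):

```python
# Sketch of random early detection: below the threshold everything is
# queued; above it, arrivals are dropped with some probability, without
# regard to traffic class (the weakness noted above). A full queue
# still tail-drops.
import random

def red_enqueue(queue, pkt, threshold, max_len, drop_prob, rng):
    if len(queue) >= max_len:
        return False                       # tail drop: queue is full
    if len(queue) >= threshold and rng.random() < drop_prob:
        return False                       # random early drop
    queue.append(pkt)
    return True

rng = random.Random(0)                     # seeded for repeatability
q = []
results = [red_enqueue(q, i, threshold=5, max_len=10, drop_prob=0.5, rng=rng)
           for i in range(12)]
assert all(results[:5])                    # below threshold: never dropped
assert len(q) <= 10                        # never exceeds the queue limit
```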



Rapid STP (RSTP)

Because of convergence issues in the 802.1D STP algorithm, IEEE developed 802.1W. 802.1W, also called RSTP, includes enhancements that speed up STP convergence. One of the main problems with Cisco's STP enhancements (PortFast, UplinkFast, and BackboneFast) is that they are proprietary and function only on Cisco switches. In most instances, you can use RSTP instead of Cisco's proprietary STP enhancements and get the same or better performance from your STP process.



Real-Time Transport Protocol Priority Queuing (RTP-PQ)

RTP, an IP protocol, is used to provide transport services for voice and video information. Cisco supports a queuing method called RTP-PQ, which provides a strict prioritization scheme for delay-sensitive traffic: such traffic is given higher prioritization and is processed before other queues. This queuing scheme is normally used for WAN connections. With RTP-PQ, there are four queues, just as in PQ. The highest-priority queue, voice, is always processed first; the IOS looks at UDP port numbers to determine whether traffic should be placed in this queue. Data is typically placed in the other three queues, which use either the CB-WFQ or WFQ method to process and dispatch packets.



Remote SPAN (RSPAN)

RSPAN is an extension of local SPAN. With local SPAN, all the source and destination ports are on the same switch. With RSPAN, these ports can be on different switches. This is very handy if you have only a limited number of network analyzers or RMON probes but still want to see certain traffic across all your switches in an area. RSPAN enables you to capture traffic on one switch and redirect it to a port on another switch.



Root Guard

Root Guard is a Cisco feature that you can use to force a particular port to be a designated port, ensuring that switches connected to it do not become the root switch. Root Guard enables you to create an STP topology in which you explicitly control which switch becomes and stays the root switch (barring any failures).



Root Port

After the root switch is elected, each switch determines which port, called the root port, it uses to reach the root switch. The root port is a port on a switch that has the lowest accumulated cost to the root switch.
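The selection rule reduces to a minimum over the candidate ports. In this sketch the port names and path costs are hypothetical, and real STP also breaks cost ties on the sender's bridge ID and port ID, which is omitted here.

```python
def choose_root_port(port_costs):
    """Pick the root port: the local port with the lowest accumulated
    path cost to the root switch. port_costs maps port name -> cost."""
    return min(port_costs, key=port_costs.get)

# Example: Gi0/1 reaches the root over a path costing 19, Gi0/2 over 38.
ports = {"Gi0/1": 19, "Gi0/2": 38}
```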



Route Caching

In route caching, the first time the router sees a destination, the CPU processes the packet and forwards it to the destination. During that process, the router places the routing information for this destination in a high-speed cache. For subsequent packets to that destination, the router consults its high-speed cache first instead of having the CPU process the packet again.
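A minimal sketch of the mechanism, assuming a flat destination-to-next-hop table (real caches key on more than the bare destination and also age entries out):

```python
route_cache = {}          # destination -> next hop, filled on first lookup

def forward(dest, routing_table):
    """First packet to a destination is process-switched (full routing
    table lookup by the CPU); the result is cached so that later packets
    skip the expensive lookup entirely."""
    if dest in route_cache:               # fast path: cache hit
        return route_cache[dest]
    next_hop = routing_table[dest]        # slow path: process switching
    route_cache[dest] = next_hop          # populate the high-speed cache
    return next_hop
```

Note that the second call below succeeds even with an empty routing table, which is exactly the cache doing the work.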



Route Processor (RP)

An RP is a Layer 3 device that can switch information either between logical subnets (VLANs) or physical subnets (as in the traditional router). If the RP is performing a traditional routing role, it could be switching packets between different LAN media types, such as Fiber Distributed Data Interface (FDDI), Ethernet, and Token Ring. For WAN connections, it provides access to ISDN, Frame Relay, ATM, and dedicated circuit networks.



Route Processor Redundancy (RPR)

Starting with IOS 12.1(13)E, the Catalyst 6500 supports Supervisor Engine (SE) redundancy with both Route Processor Redundancy and Route Processor Redundancy Plus (RPR+). These two features allow hardware redundancy for the Multilayer Switch Feature Card (MSFC) and Policy Feature Card (PFC or PFC2), basically providing Layer 3 redundancy for the Catalyst 6500. RPR provides SE redundancy for route processing (routing): one SE is primary and the other is secondary.



Router-on-a-Stick

A router-on-a-stick is a trunk connection between an external router and a switch. The trunk is terminated on the router on a trunk-capable interface and the router uses this single interface to route between VLANs.



Server Load Balancing (SLB)

SLB provides a simple form of load balancing for critical services in your network. In SLB, you have two types of servers: virtual and real. The virtual server is the server that end stations send their TCP/IP requests to. The IOS SLB software then redirects that request to a real server in your network. Because most clients use DNS to resolve DNS names to IP addresses, make sure that your DNS server contains the virtual IP address used by SLB.



Service Provider Edge

The Service Provider Edge provides WAN and MAN connections to private and public networks for customers and is connected to a company's Enterprise Edge. There are three submodules in the Service Provider Edge: ISP, PSTN, and WAN technologies.



Shared Distribution Tree

With a shared tree, only one copy of each multicast frame is forwarded to those segments that have participating multicast end stations. A shared distribution tree contains a rendezvous point that's the central point of the tree for all multicast traffic. All traffic from every multicast application in your network is first forwarded to the rendezvous point. From there, the multicast traffic uses a single-tree structure for the dissemination of the traffic, creating less overhead on the RP. The downside is that for certain multicast streams, suboptimal paths can exist. This tree structure is very similar to common STP: For the entire switched network, there's only one tree structure, with the rendezvous point functioning as the root of the tree.



Single Router Mode (SRM) Redundancy

SRM provides an alternative type of redundancy in which dual MSFC cards are installed on dual SEs, but only one MSFC card is active and processing traffic; the standby MSFC synchronizes its configuration from the active one. One of the problems with running two active MSFC cards in the same chassis (when you're not using RPR or RPR+) is that you have to configure them separately; SRM avoids this. SRM is different from RPR and RPR+: SRM provides Layer 3 redundancy, while RPR and RPR+ provide card-level redundancy.



Sparse Mode (SM)

SM protocols use join messages to construct a distribution tree, ensuring that only those segments with participating end stations have traffic forwarded to them by their connected RPs. SM protocols therefore scale much better and are more suited for large, geographically dispersed environments. Unlike DM protocols, SM protocols do not waste bandwidth by flooding multicasts everywhere. Traffic is not forwarded to a segment until an end station joins a multicast group. SM assumes that only a handful of RPs are forwarding multicast traffic. It also assumes that the participating end stations are widely dispersed across your campus network (possibly located across your WAN), and that the amount of bandwidth in your network is limited. In this approach, the distribution tree is empty and, as end stations are discovered, branches are added to the tree.



Source-Based Distribution Tree

Source-based distribution trees guarantee that multicast traffic traverses a given segment only once. However, unlike shared trees, where there's a single tree for the whole network, source-based implementations build a separate distribution tree for each source of each multicast group (address), allowing optimal delivery of multicast streams at the cost of more overhead on the RP. This process is similar to Cisco's PVST, in which there's one instance of STP per VLAN: Here, there's one source-based tree per source and multicast group.



Spanning Tree Protocol (STP)

STP is a self-configuring Layer 2 algorithm that's responsible for removing loops in a switched network while still providing path redundancy. Because a switch automatically forwards broadcasts and multicasts, STP is necessary to make sure that this traffic is not continuously forwarded throughout a switched network. Another problem with loops is that with the switch's learning function, it might mistakenly update its address table with incorrect information concerning an end station as a frame traverses a loop. STP was developed by DEC and later incorporated into IEEE's standards as 802.1D. However, the two protocols are not compatible. In a bridged or switched network, all Layer 2 devices must run the same STP algorithm.



Standby RP
See [Active RP]
Static VLAN

Cisco's initial implementation of VLANs is based on the port that a user was assigned to. This is sometimes referred to as port-based membership. Using this initial implementation, you configure every port on a switch to reflect the appropriate VLAN for the users. This could easily be done either via a command-line interface or an SNMP-based product using a graphical interface.



Switch Fabric Module (SFM)

The Catalyst 6500 switches support a special card called a Switch Fabric Module (SFM), which comes in two versions: 1 and 2. In combination with the Supervisor Engine II, the backplane capacity of the 6500 is upgraded from 32 Gbps to 256 Gbps. The SFM delivers 30 Mpps of throughput using Cisco Express Forwarding (CEF) and 210 Mpps with Distributed Forwarding Cards (DFCs) installed.



Switched Port Analyzer (SPAN)

SPAN enables you to mirror traffic from one or more interfaces on a switch to a port that is connected to a network analyzer, packet sniffer, or remote monitoring (RMON) probe. This traffic can then be analyzed and processed for reporting.



Synchronous Optical Network (SONET)

SONET, which uses fiber-optic cabling, can carry multiple transports, including Ethernet, IP, ATM, and other services. It supports a dual-ring topology for redundancy. Its main disadvantage is that it uses bandwidth inefficiently.



Tail Dropping

Tail dropping is one of the most common forms of dealing with congestion during egress queuing. When queuing packets during a period of heavy congestion, the queue will at some point fill up, leaving no room for more packets. During this period, any newly arrived packets for the egress queue are dropped. With tail dropping, all traffic is treated equally. In other words, the IOS doesn't look at whether this is UDP or TCP traffic, or data or voice. This can be detrimental for TCP-based connections because dropping one packet from a connection can cause the retransmission of multiple packets. In a network that heavily utilizes TCP, using tail dropping could actually create more congestion than it reduces.



Topology-Based Switching

Topology-based switching uses a Forwarding Information Base (FIB) to assist in Layer 3 switching. This type of switching pre-populates the cache by using the information in the RP's routing table. If there is a topology change and the routing table is updated, the RP mirrors the change in the FIB. Basically, the FIB contains a list of routes with next-hop addresses to reach those routes. Cisco has developed a proprietary topology-based switching implementation called Cisco Express Forwarding (CEF). CEF also includes a second table, called an adjacency table, which contains a list of networking devices directly adjacent (within one hop) to the RP. CEF uses this table to prepend Layer 2 addressing information when rewriting Ethernet frames during MLS.
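A simplified illustration of a FIB lookup, with invented prefixes and next hops. Real CEF performs the longest-prefix match in an optimized trie (often in hardware), not the linear scan shown here.

```python
import ipaddress

def fib_lookup(fib, dest):
    """Longest-prefix-match lookup against a FIB pre-populated from the
    routing table: the most specific matching route wins."""
    addr = ipaddress.ip_address(dest)
    best = None
    for prefix, next_hop in fib.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, next_hop)      # more specific route
    return best[1] if best else None

# Example FIB: the /16 is more specific than the covering /8.
fib = {"10.0.0.0/8": "192.168.1.1", "10.1.0.0/16": "192.168.2.1"}
```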



Transparent Bridging

A transparent bridge is used to connect similar media types together to solve bandwidth and collision problems, but to still maintain the same broadcast domain. The term transparent bridge is used because the bridge is completely transparent to the end stations that it is interconnecting. Frames that pass through a transparent bridge are not modified: What comes in on an interface leaves exactly the same way on another interface. Transparent bridges perform three basic functions: They make forwarding and filtering decisions based on the destination MAC address in a frame, they learn where end stations reside in the network, and they remove loops.
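The first two functions (learning, plus forwarding and filtering) can be sketched as follows. Port names are hypothetical, and the third function, loop removal, is handled by STP and omitted here.

```python
def bridge_frame(mac_table, src_mac, dst_mac, in_port, all_ports):
    """Transparent-bridge sketch: learn the source MAC's port, then
    forward to the known port, filter if it's the arrival port, or
    flood out every other port when the destination is unknown."""
    mac_table[src_mac] = in_port                      # learning
    if dst_mac in mac_table:
        out = mac_table[dst_mac]
        return [] if out == in_port else [out]        # filter vs. forward
    return [p for p in all_ports if p != in_port]     # unknown: flood
```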



Transparent LAN Services (TLS)

With a TLS, the connection between switches on the MAN is done transparently by the service provider. In other words, the provider's equipment is hidden from your equipment's view. Your switches don't actually see the service provider's switch; instead, it appears that all of your switches are connected together via a hub. When implementing TLS, you should remember that your MAN connection is an access link. Therefore, for traffic to traverse the MAN, you must put all of your sites in the same VLAN.



Trunk Link

A trunk link is a connection between two trunk-capable devices. These could be two switches, a switch and a router, or even a switch and an end station. Trunking basically extends the backplane of the switch. Normally, only traffic from one VLAN can be associated with a port. The exception to this is a trunk port. A trunk port allows multiple VLANs to cross it to a neighboring device, unlike an access link. Trunking is performed by encapsulating or tagging frames in hardware by the ASICs on each port. Encapsulating or tagging adds information, such as the VLAN number (referred to as the VLAN's color) to help in the forwarding of the frame by other switches.
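The 802.1Q tagging operation itself can be illustrated in Python. This is a sketch that treats the frame as a byte string; in practice the tag is inserted in hardware by the port ASICs and the frame check sequence is recomputed, which is not shown here.

```python
import struct

def dot1q_tag(frame, vlan_id, priority=0):
    """Insert a 4-byte 802.1Q tag (2-byte TPID 0x8100 followed by a
    2-byte TCI) after the destination and source MAC addresses of an
    untagged Ethernet frame."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # PCP, DEI=0, VLAN ID
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]          # 12 bytes = dst + src MAC
```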



Unicast

With unicasts, a separate packet must be sent to each destination. In a shared environment, every network device on the segment sees the packet, but only the actual destination processes it. In a switched environment, only devices on the source and destination segments actually see the frame.



Unidirectional Link Detection (UDLD)

UDLD checks to see whether unidirectional links exist between two switches and, if so, disables the affected ports. UDLD checks the physical configuration of the connection between two switches. Unidirectional connections can occur on a full-duplex connection (fiber and copper) if either the transmit or receive wire or circuit is broken. By shutting down the unidirectional connection, UDLD prevents inadvertent loops and black holes (that is, one switch is accessible but another is not).



UplinkFast

STP guarantees a loop-free environment; however, one large disadvantage of STP is the 30- to 50-second convergence time before redundant links can be used when failures occur. This is problematic in environments where real-time or bandwidth-intensive applications are deployed. UplinkFast, a Cisco-proprietary STP enhancement, allows the almost-immediate use of a redundant bridged connection (a blocked port) without recalculating STP when the primary path fails. This reduces the transition period from 50 seconds to less than 4 seconds. The name of the feature describes its purpose: It's used on uplink ports that connect access layer switches to distribution layer switches.



Virtual LAN (VLAN)

A VLAN can be described as a grouping of ports on a switch or a grouping of ports on different switches. It can also be characterized as a group of related users in a data network or as a group of users at the same geographic location (which is the most common).



VLAN ACL (VACL)

A VACL is used to restrict traffic within a VLAN or VLANs.



VLAN Membership Policy Server (VMPS)

A VMPS associates MAC addresses to VLANs. When a user connects to a switch and the switch sees the user's MAC address, the switch sends the user's MAC address to the VMPS server. The server responds back with the user's VLAN and the switch associates this VLAN with the user's interface.
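The exchange reduces to a lookup on the server side. In this sketch the MAC-to-VLAN database entries are invented for illustration, and the fallback VLAN stands in for the server's configurable default behavior.

```python
# Hypothetical VMPS database: MAC address -> VLAN ID (example values).
VMPS_DB = {"00:11:22:33:44:55": 10, "66:77:88:99:aa:bb": 20}

def assign_vlan(mac, default_vlan=None):
    """Sketch of the VMPS exchange: the switch sends a newly seen MAC
    to the server, which answers with the VLAN to bind to that port
    (or a default when the MAC is unknown)."""
    return VMPS_DB.get(mac.lower(), default_vlan)
```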



Virtual RP

In HSRP, the role of the virtual RP is to provide a single RP that's always available to the end stations. It isn't a real RP because the IP and MAC addresses of the virtual RP are not physically assigned to any one interface on any of the RPs in the broadcast domain.



Virtual Router Redundancy Protocol (VRRP)

VRRP performs a function that's similar to Cisco's proprietary HSRP. VRRP is an open standard and is defined in IETF's RFC 2338. Like HSRP, VRRP has end stations use a virtual router for a default gateway. VRRP is supported for Ethernet media types as well as in VLANs and MPLS VPNs. VRRP can use either a virtual IP address or the interface address of the master router. If a virtual IP address is used, an election process takes place to choose a master router. The router with the highest priority is chosen as the master. All other routers are backup routers.



VLAN Trunk Protocol (VTP)

VTP is a Cisco-proprietary messaging protocol that occurs between devices on trunk ports. It allows VLAN information to be propagated across your switched network, providing a consistent VLAN configuration in your network. This process makes it easy to add, change, and delete VLANs as well as to add new devices to the network because your VLAN information is automatically propagated by switches that understand VTP on their trunk ports.



VTP Pruning

VTP pruning is a method of traffic control that allows a switch to make more intelligent decisions about forwarding broadcasts, multicasts, and unknown unicasts across trunk ports. This feature restricts traffic that is normally flooded out all trunks to only those trunk links where the connected switches (or other networking devices) also have ports in the associated VLAN.



Weighted Fair Queuing (WFQ)

WFQ examines traffic flows to determine how queuing occurs. A flow is basically a connection that Cisco calls a conversation. The IOS examines the Layer 3 protocol type (such as IP, ICMP, OSPF, and so on), the source and destination addresses, and the source and destination port numbers to determine how data should be classified. Based on this information, the traffic is classified as either high or low priority.



Weighted Random Early Detection (WRED)

WRED is an extension of RED that is used to avoid congestion. It does this by examining CoS information and beginning to drop packets when traffic for a specified CoS reaches its configured threshold. This reduces the likelihood that upcoming congestion will cause problems with important applications or data.
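A rough sketch of the per-class behavior, with assumed CoS values and thresholds (not IOS's actual weighted-average algorithm): lower-priority classes start dropping earlier than higher-priority ones as the queue fills.

```python
import random

# Illustrative per-CoS drop thresholds (assumed values).
COS_THRESHOLDS = {0: 20, 3: 35, 5: 45}

def wred_enqueue(queue, packet, cos, capacity=50, drop_prob=0.5):
    """Simplified WRED: each class of service gets its own threshold,
    so low-priority traffic is shed first as the queue fills up."""
    if len(queue) >= capacity:
        return False                                  # full: tail drop
    if len(queue) >= COS_THRESHOLDS[cos] and random.random() < drop_prob:
        return False                                  # early drop for CoS
    queue.append(packet)
    return True
```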



Weighted Round-Robin Queuing (WRRQ)

WRRQ is a queuing solution used on the egress ports of Layer 3 switches, such as the Catalyst 3550. Like RTP-PQ, WRRQ has four queues, and traffic is placed in the queues based on its IP precedence value. Each queue is assigned a weight value. Whenever congestion occurs in the egress direction of the port, the weight value is used to service the queues. Higher-priority queues (more weight) are given preference over lower-priority queues (less weight). However, no queue is ever starved. In other words, all queues get at least some bandwidth, but the higher-priority queues get more bandwidth than lower-priority queues. This is somewhat similar to CQ.
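The servicing loop can be sketched as follows, with illustrative queue names and weights: each non-empty queue is serviced up to its weight per round, so heavier queues get more bandwidth but no queue is ever starved.

```python
def wrr_schedule(queues, weights, rounds=1):
    """Weighted round-robin sketch: dequeue up to `weight` packets
    from each queue per round, in weight-table order."""
    sent = []
    for _ in range(rounds):
        for name, weight in weights.items():
            for _ in range(weight):
                if queues[name]:                  # skip exhausted queues
                    sent.append(queues[name].pop(0))
    return sent
```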





BCMSN Exam Cram 2 (Exam Cram 642-811)
ISBN: 0789729911
Year: 2003
Pages: 171
Authors: Richard Deal
