Ethernet Trunks

Most trunk implementations use Ethernet. You can construct Ethernet trunks using Fast Ethernet or Gigabit Ethernet, depending upon your bandwidth needs. EtherChannel (defined in greater detail in the sections that follow) creates additional bandwidth options by combining multiple Fast Ethernet or Gigabit Ethernet links. The combined links behave as a single interface, load distribute frames across each segment in the EtherChannel, and provide link resiliency.

Simply interconnecting Catalysts with Ethernet does not create trunks. By default, you create an access link when you establish an Ethernet interconnection. When the port belongs to a single VLAN, the connection is not a trunk in the true sense, because the connection never carries traffic from more than one VLAN.

To make a trunk, you must not only create a link, but also enable trunk processes. To trunk over Ethernet between Catalysts, Cisco developed a protocol to multiplex VLAN traffic. The multiplexing scheme encapsulates user data and identifies the source VLAN for each frame. The protocol, called Inter-Switch Link (ISL), enables multiple VLANs to share a common link such that the receiving Catalyst knows to which VLAN it should constrain each frame.

Tip

Trunks allow you to scale your network more easily than access links. However, be aware that a Layer 2 broadcast loop (normally eliminated by Spanning Tree) in any one VLAN carried on a trunk degrades all VLANs on the trunk. Be sure to enable Spanning Tree for all VLANs when using trunks.


The following sections describe EtherChannel and ISL. The physical layer aspects of EtherChannel are covered first followed by a discussion of ISL encapsulation.

EtherChannel

EtherChannel provides you with incremental trunk speeds between Fast Ethernet and Gigabit Ethernet, or even at speeds greater than Gigabit Ethernet. Without EtherChannel, your connectivity options are limited to the specific line rates of the interfaces. If you want more than the speed offered by a Fast Ethernet port, you need to add a Gigabit Ethernet module and immediately jump to this higher-speed technology. You do not have any intermediate speed options. Alternatively, you can create multiple parallel trunk links, but Spanning Tree normally treats these as a loop and shuts down all but one link to eliminate it. You can modify Spanning Tree to keep links open for some VLANs and not others, but this requires significant configuration on your part.

EtherChannel, on the other hand, allows you to build incremental speed links without having to incorporate another technology. It provides you with some link speed scaling options by effectively merging or bundling the Fast Ethernet or Gigabit Ethernet links and making the Catalyst or router use the merged ports as a single port. This simplifies Spanning Tree while still providing resiliency. EtherChannel resiliency is described later. Further, if you want to get speeds greater than 1 Gbps, you can create Gigabit EtherChannels by merging Gigabit Ethernet ports into an EtherChannel. With a Catalyst 6000 family device, this lets you create bundles up to 8 Gbps (16 Gbps full duplex).

Unlike the multiple Spanning Tree option just described, EtherChannel treats the bundle of links as a single Spanning Tree port and does not create loops. This reduces your configuration requirements and simplifies your job.

EtherChannel works as an access or trunk link. In either case, EtherChannel offers more bandwidth than any single segment in the EtherChannel. EtherChannel combines multiple Fast Ethernet or Gigabit Ethernet segments to offer more apparent bandwidth than any of the individual links. It also provides link resiliency. EtherChannel bundles segments in groups of two, four, or eight. Two links provide twice the aggregate bandwidth of a single link, and a bundle of four offers four times the aggregate bandwidth. For example, a bundle of two Fast Ethernet interfaces creates a 400-Mbps link (in full-duplex mode). This enables you to scale links at rates between Fast Ethernet and Gigabit Ethernet. Bundling Gigabit Ethernet interfaces exceeds the speed of a single Gigabit Ethernet interface. A bundle of four Gigabit Ethernet interfaces can offer up to 8 Gbps of bandwidth. Note that the actual line rate of each segment remains at its native speed. The clock rate does not change as a result of bundling segments. The two Fast Ethernet ports comprising the 400-Mbps EtherChannel each operate at 100 Mbps (in each direction). The combining of the two ports does not create a single 200-Mbps connection. This is a frequently misunderstood aspect of EtherChannel technology.
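To make the arithmetic concrete, the following minimal Python sketch (an illustration, not Catalyst software) totals the apparent bandwidth of a bundle while each segment keeps its native line rate; the figures match the Fast Ethernet and Gigabit Ethernet examples above.

   def aggregate_mbps(links: int, line_rate_mbps: int, full_duplex: bool = True) -> int:
       """Aggregate EtherChannel bandwidth: bundling sums the segment rates
       (full duplex counts both directions), but each segment still clocks
       at its native line rate."""
       return links * line_rate_mbps * (2 if full_duplex else 1)

   print(aggregate_mbps(2, 100))    # 400  -- two Fast Ethernet segments
   print(aggregate_mbps(4, 1000))   # 8000 -- four Gigabit Ethernet segments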

EtherChannel operates as either an access or trunk link, but regardless of the mode in which the link is configured, the basic EtherChannel operation remains the same. From a Spanning Tree point of view, an EtherChannel is treated as a single port rather than multiple ports. When Spanning Tree places an EtherChannel in either the Forwarding or Blocking state, it puts all of the segments in the EtherChannel in the same state.

Bundling Ports

When bundling ports for EtherChannel using early EtherChannel-capable line modules, you must follow several rules:

  • Bundle two or four ports.

  • Use contiguous ports for a bundle.

  • All ports must belong to the same VLAN. If the ports are used for trunks, all ports must be set as a trunk.

  • If you set the ports to trunk, make sure that all ports pass the same VLANs.

  • Ensure that all ports at both ends have the same speed and duplex settings.

  • You cannot arbitrarily select ports to bundle. See the following descriptions for guidelines.

These rules apply to many EtherChannel-capable modules; however, some exceptions exist with later Catalyst modules. For example, the Catalyst 6000 line cards do not constrain you to use even numbers of links. You can create bundles with three links if you so choose. Nor do the ports have to be contiguous, or even on the same line card, as is required with some Catalyst devices and line modules. These exceptions to the Catalyst 6000 EtherChannel rules come from newer chipsets on the line modules. These newer chips are not present on all hardware. Be sure to check your hardware features before attempting to create any of these other bundle types.

Early EtherChannel-capable modules incorporate a chip called the Ethernet Bundling Controller (EBC) which manages aggregated EtherChannel ports. For example, the EBC manages traffic distribution across each segment in the bundled link. The distribution mechanism is described later in this section.

When selecting ports to group for an EtherChannel, you must select ports that belong to the same EBC. On a 24-port EtherChannel-capable module, there are three groups of eight ports. On a 12-port EtherChannel-capable module, there are three groups of four ports.

Table 8-1 shows 24- and 12-port groupings.

Table 8-1. 24-Port and 12-Port Groupings for EtherChannel

24-port module: Group 1 = Ports 1-8; Group 2 = Ports 9-16; Group 3 = Ports 17-24

12-port module: Group 1 = Ports 1-4; Group 2 = Ports 5-8; Group 3 = Ports 9-12

Table 8-2. Valid and Invalid 12-Port EtherChannel Examples (for Original Catalyst 5000 Implementations)

Port              1   2   3   4   5   6   7   8   9   10  11  12

Example A (OK)    1   1   2   2   3   3   4   4   5   5   6   6
Example B (OK)    1   1   -   -   2   2   -   -   3   3   -   -
Example C (OK)    1   1   1   1   2   2   -   -   -   -   -   -
Example D (NOK)   -   -   1   1   -   -   -   -   -   -   -   -
Example E (NOK)   1   1   2   2   2   2   -   -   -   -   -   -
Example F (NOK)   1   -   1   -   -   -   -   -   -   -   -   -
Example G (NOK)   -   1   1   -   -   -   -   -   -   -   -   -

(A dash indicates a port that is not part of any EtherChannel; matching numbers identify the ports forming one EtherChannel.)

For example, in a 12-port module, you can create up to two dual-segment EtherChannels within each group, as illustrated in Example A of Table 8-2. Or, you can create one dual-segment EtherChannel within each group, as in Example B of Table 8-2. Example C illustrates a four-segment and a two-segment EtherChannel.

You must avoid some EtherChannel configurations on early Catalyst 5000 equipment. Example D of Table 8-2 illustrates an invalid two-segment EtherChannel using Ports 3 and 4 of a group. The EBC must start its bundling with the first ports of a group. This does not mean that you have to use the first group. In contrast, a valid dual segment EtherChannel can use Ports 5 and 6 with no EtherChannel on the first group.

Example E illustrates another invalid configuration. In this example, two EtherChannels are formed. One is a dual-segment EtherChannel, the other is a four-segment EtherChannel. The dual-segment EtherChannel is valid. The four-segment EtherChannel, however, violates the rule that all ports must belong to the same group. This EtherChannel uses two ports from the first group and two ports from the second group.

Example F shows an invalid configuration where an EtherChannel is formed with discontiguous segments. You must use adjacent ports to form an EtherChannel.

Finally, Example G shows an invalid EtherChannel because it does not use the first ports on the module to start the EtherChannel. You cannot start the EtherChannel with middle ports on the line module.

All of the examples in Table 8-2 apply to the 24-port modules too. The only difference between a 12- and 24-port module is the number of EtherChannels that can be formed within a group. The 12-port module allows only two EtherChannels in a group, whereas the 24-port module supports up to four EtherChannels per group.

One significant reason for constraining bundles within an EBC stems from the load distribution that the EBC performs. The EBC distributes frames across the segments of an EtherChannel based upon the source and destination MAC addresses of the frame. This is accomplished through an Exclusive OR (X-OR) operation. X-OR differs from a normal OR operation. OR yields a 1 when at least one of the two bits is a 1. X-OR yields a 1 only when exactly one of the two compared bits is a 1; if the bits are the same, the result is a 0. This is illustrated in Table 8-3.

Table 8-3. Exclusive-OR Truth Table

Bit-1   Bit-2   Result
0       0       0
0       1       1
1       0       1
1       1       0

The EBC uses X-OR to determine over what segment of an EtherChannel bundle to transmit a frame. If the EtherChannel is a two-segment bundle, the EBC performs an X-OR on the last bit of the source and destination MAC address to determine what link to use. If the X-OR generates a 0, segment 1 is used. If the X-OR generates a 1, segment 2 is used. Table 8-4 shows this operation.

Table 8-4. Two-Segment Link Selection

MAC             Binary of Last Octet   Segment Used

Example 1
MAC Address 1   xxxxxxx0
MAC Address 2   xxxxxxx0
X-OR            xxxxxxx0               1

Example 2
MAC Address 1   xxxxxxx0
MAC Address 2   xxxxxxx1
X-OR            xxxxxxx1               2

Example 3
MAC Address 1   xxxxxxx1
MAC Address 2   xxxxxxx1
X-OR            xxxxxxx0               1

The middle column denotes a binary representation of the last octet of the MAC address. An x indicates that the value of that bit does not matter. For a two-segment link, only the last bit matters. Note that the first column only states MAC Address 1 or MAC Address 2; it does not specify which is the source and which is the destination address. X-OR produces exactly the same result regardless of the order. Therefore, Example 2 really indicates two situations: one where the source address ends with a 0 and the destination address ends in a 1, and the inverse. Frames between a pair of devices use the same link in both directions.

A four-segment operation performs an X-OR on the last two bits of the source and destination MAC address. An X-OR of the last two bits yields four possible results. As with the two-segment example, the X-OR result specifies the segment that the frame travels. Table 8-5 illustrates the X-OR process for a four-segment EtherChannel.

Note

Newer Catalyst models such as the 6000 series have the ability to perform the load distribution on just the source address, the destination address, or both. Further, they have the ability to use the IP address or the MAC addresses for the X-OR operation.

Some other models such as the 2900 Series XL perform X-OR on either the source or the destination MAC address, but not on the address pair.


Table 8-5. Four-Segment Link Selection

MAC             Binary of Last Octet   Segment Used

Example 1
MAC Address 1   xxxxxx00
MAC Address 2   xxxxxx00
X-OR            xxxxxx00               1

Example 2
MAC Address 1   xxxxxx00
MAC Address 2   xxxxxx01
X-OR            xxxxxx01               2

Example 3
MAC Address 1   xxxxxx01
MAC Address 2   xxxxxx11
X-OR            xxxxxx10               3

Example 4
MAC Address 1   xxxxxx01
MAC Address 2   xxxxxx10
X-OR            xxxxxx11               4

Example 5
MAC Address 1   xxxxxx11
MAC Address 2   xxxxxx11
X-OR            xxxxxx00               1

The results of Examples 1 and 5 force the Catalyst to use Segment 1 in both cases because the X-OR process yields 00.

The end result of the X-OR process forces a source/destination address pair to use the same link for every frame they exchange. What prevents a single segment from becoming overwhelmed with traffic? Statistics. MAC address assignments are fairly random in a network, so a link is unlikely to experience a traffic imbalance based solely on source/destination MAC address values. Because a given source and destination use the same MAC addresses for every frame between each other, their frames always use the same EtherChannel segment. It is possible, however, for a single workstation pair to generate a high volume of traffic and create a load imbalance due to their application. The X-OR process does not remedy this situation because it is not application aware.
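If it helps to see the selection logic in one place, here is a short Python sketch (an illustration only, not Catalyst code) that applies the X-OR of the low-order MAC address bits as Tables 8-4 and 8-5 describe; the MAC addresses are arbitrary examples.

   def select_segment(mac1: str, mac2: str, num_segments: int) -> int:
       """Pick an EtherChannel segment (1-based) by X-ORing the low-order
       bits of the two MAC addresses: 1 bit for a 2-segment bundle,
       2 bits for 4 segments, 3 bits for 8 segments."""
       bits = num_segments.bit_length() - 1
       mask = (1 << bits) - 1
       a = int(mac1.replace("-", "").replace(":", ""), 16)
       b = int(mac2.replace("-", "").replace(":", ""), 16)
       return ((a ^ b) & mask) + 1

   # Example 2 of Table 8-4: last bits 0 and 1 X-OR to 1, so segment 2.
   print(select_segment("00-10-7B-00-00-00", "00-10-7B-00-00-01", 2))   # 2
   # Example 4 of Table 8-5: last two bits 01 and 10 X-OR to 11, so segment 4.
   print(select_segment("00-10-7B-00-00-01", "00-10-7B-00-00-02", 4))   # 4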

Tip

A link connecting RSMs through a Catalyst EtherChannel might not experience load distribution. This occurs because the RSM MAC addresses remain the same for every transmission, forcing the X-OR to select the same segment in the bundle for each frame. However, you can force the RSM to use multiple user-assigned MAC addresses, one for each VLAN, with the mac-address command. This causes the switch to perform the X-OR on a per-VLAN basis and enables a level of load distribution.


Configuring EtherChannel and PAgP

To simplify the configuration of EtherChannel, Cisco created the Port Aggregation Protocol (PAgP). This protocol helps to automatically form an EtherChannel between two Catalysts. PAgP can have any of four states: on, off, auto, desirable. You specify which PAgP state the Catalyst should enable when you configure EtherChannel. Example 8-1 shows the syntax to create an EtherChannel and determine the PAgP mode.

Example 8-1 EtherChannel Syntax Example
   Console> (enable) set port channel ?
   Usage: set port channel port_list {on|off|auto|desirable}
          (example of port_list: 2/1-4 or 2/1-2 or 2/5,2/6)

The set port channel command enables EtherChannel. It does not establish a trunk. With only this configuration statement, a single VLAN crosses the EtherChannel. To enable a trunk, you must also enter a set trunk command. The set trunk command is described in following sections.

The on and off options indicate that the Catalyst always (or never) bundles the ports as an EtherChannel. The desirable option tells the Catalyst to enable EtherChannel as long as the other end agrees to configure EtherChannel and as long as all EtherChannel rules are met. For example, all ports in the EtherChannel must belong to the same VLAN, or they must all be set to trunk. All ports must be set for the same duplex mode. If any of the parameters mismatch, PAgP refuses to enable EtherChannel. The auto option allows a Catalyst to enable EtherChannel if the other end is set as either on or desirable. Otherwise, the Catalyst isolates the segments as individual links.
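The following small Python sketch models, per the behavior just described, whether a working channel forms for a given pair of PAgP modes; it is a simplification (it assumes all other EtherChannel rules, such as matching VLAN, trunk, and duplex settings, are already satisfied), not a statement of the actual PAgP implementation.

   def channel_forms(local: str, remote: str) -> bool:
       """Return True when both ends bundle their ports, per the PAgP mode
       descriptions above. Note that an on/off pairing still leaves the two
       ends inconsistent even though no working channel forms."""
       agrees_with = {
           "on": {"on", "desirable", "auto"},
           "desirable": {"on", "desirable", "auto"},
           "auto": {"on", "desirable"},      # auto never initiates
           "off": set(),                     # off never bundles
       }
       return remote in agrees_with[local] and local in agrees_with[remote]

   print(channel_forms("auto", "auto"))        # False -- neither end initiates
   print(channel_forms("desirable", "auto"))   # True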

Figure 8-6 shows two Catalysts connected with two Fast Ethernet segments. Assume that you desire to enable EtherChannel by bundling the two segments.

Figure 8-6. A Catalyst 5000 and a Catalyst 5500 Connected with EtherChannel

graphics/08fig06.gif

Examples 8-2 and 8-3 show sample configurations for both Cat-A and Cat-B.

Example 8-2 A Two-Port EtherChannel Configuration for Cat-A
   Cat-A> (enable) set port channel 2/1-2 on
   Port(s) 2/1-2 channel mode set to on.
Example 8-3 A Two-Port EtherChannel Configuration for Cat-B
   Cat-B> (enable) set port channel 10/1-2 on
   Port(s) 10/1-2 channel mode set to on.

Tip

Note that when you enable PAgP on a link where Spanning Tree is active, Spanning Tree takes about 18 more seconds to converge. This is because PAgP takes about 18 seconds to negotiate a link, and the link negotiation must complete before Spanning Tree can start its convergence algorithm.


Tip

If you change an attribute on one of the EtherChannel segments, you must make the same change on all of the segments for the change to be effective. All ports must be configured identically.


EtherChannel and Routers

Enabling EtherChannel on a Cisco router requires you to define a virtual channel and then associate specific interfaces with the channel. Up to four EtherChannels can be created in a router. The router views the EtherChannel as a single interface; you assign logical addresses to the bundle, not to the individual segments in the EtherChannel. Example 8-4 shows a configuration for a Cisco 7200 series router.

Example 8-4 7200 Series EtherChannel Configuration Session Example
   Router# config terminal
   ! This creates the virtual channel
   Router(config)# interface port-channel 1
   ! Assign attributes to the channel just like to a real interface.
   Router(config-if)# ip address 10.0.0.1 255.0.0.0
   Router(config-if)# ip route-cache distributed
   Router(config-if)# exit
   ! Configure the physical interfaces that comprise the channel
   Router(config)# interface fasteth 0/0
   Router(config-if)# no ip address
   ! This statement assigns fasteth 0/0 to the EtherChannel
   Router(config-if)# channel-group 1
   %LINEPROTO-5-UPDOWN: Line protocol on Interface Port-Channel1, changed to UP
   Router(config-if)# exit
   ! You must have at least two interfaces to form an EtherChannel
   Router(config)# interface fasteth 0/1
   Router(config-if)# no ip address
   Router(config-if)# channel-group 1
   FastEthernet 0/1 added as member-2 to fechannel1

In the Catalyst, hardware forms an EtherChannel. In most of the routers, an EtherChannel is formed in software. Unlike the Catalyst, therefore, the router interfaces do not need to be contiguous. However, it might make it administratively easier for you if they are.

Load distribution in a router happens differently than in the Catalyst. Rather than distributing frames based upon MAC addresses, the router performs an X-OR on the last two bits of the source and destination IP addresses. Theoretically, this should maintain load balancing, but because IP addresses are locally administered (you assign them), you can unintentionally assign addresses with a scheme that favors one link or another in the EtherChannel. If you have an EtherChannel to a router, evaluate your IP address assignment policy to see if you are doing anything that might prevent load distribution. If you use protocols other than IP, all non-IP traffic uses a single link. Only the IP traffic experiences load distribution.
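Here is a small Python sketch, again an illustration rather than router code, of the last-two-bits X-OR on IPv4 addresses; it also shows how an unlucky assignment scheme can steer every flow onto the same segment. The addresses are made up.

   import ipaddress

   def ip_segment(src: str, dst: str) -> int:
       """X-OR the last two bits of the source and destination IPv4
       addresses to pick one of four segments (1-based)."""
       s = int(ipaddress.ip_address(src)) & 0b11
       d = int(ipaddress.ip_address(dst)) & 0b11
       return (s ^ d) + 1

   # If every server ends in .4 (binary ...00) and every client ends in .8
   # (also ...00), all flows hash to segment 1 and the other links sit idle.
   print(ip_segment("10.1.1.4", "10.2.2.8"))   # 1
   print(ip_segment("10.1.1.5", "10.2.2.8"))   # 2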

If you have a Layer 3 switch such as the Catalyst 8500 series switch/router, it can perform load balancing based upon the IP address and upon an IPX address. Because IPX incorporates the station's MAC address as part of the logical address, load distribution occurs just like it does for any other Catalyst, based upon the MAC address. As mentioned previously, this ensures a fairly high degree of randomness for load distribution, but cannot guarantee load balancing. A particular workstation/server pair can create a high bandwidth load. All of the frames for that pair always cross the same link even if another link in the EtherChannel remains unused. Load distribution is not based upon bandwidth utilization.

EtherChannel Resiliency

What happens when an EtherChannel segment fails? When a Catalyst detects a segment failure, it informs the Encoded Address Recognition Logic (EARL) ASIC on the Supervisor module. The EARL is a special application-specific integrated circuit that learns MAC addresses. In essence, the EARL is the learning and address storage device creating the bridge tables discussed in Chapter 3. The EARL ages any addresses that it learned on that segment so it can relearn address pairs on a new segment in the bundle. On what segment does it relearn the source? In a two-segment EtherChannel, frames must cross the one remaining segment. In a four- or eight-segment bundle, traffic migrates to the neighboring segment.

When you restore the failed segment, you do not immediately see the traffic return to the original segment. When the segment failed, the EARL relearned the addresses on another link, and until those addresses age out of the bridge table, the frames continue to cross the backup link. For traffic to move back, the stations must stop transmitting for the duration of the bridge aging timer so that the entries can age out. You can manually clear the bridge table, but that forces the Catalyst to relearn all of the addresses associated with that segment.

EtherChannel Development

EtherChannel defines a bundling technique for standards-based segments such as Fast Ethernet and Gigabit Ethernet. It does not cause the links to operate at clock rates different than they would without bundling, so each segment remains Fast Ethernet- or Gigabit Ethernet-compliant. EtherChannel enables devices to distribute a traffic load over more than one segment while providing a level of resiliency that does not involve Spanning Tree or other failover mechanisms. The IEEE is examining a standards-based approach to bundling in the 802.3ad committee.

ISL

When multiplexing frames from more than one VLAN over a Fast Ethernet or Fast EtherChannel, the transmitting Catalyst must identify the frame's VLAN membership. This allows the receiving Catalyst to constrain the frame to the same VLAN as the source, thereby maintaining VLAN integrity. Otherwise, the frame crosses VLAN boundaries and violates the intention of creating VLANs.

Cisco's proprietary Inter-Switch Link (ISL) encapsulation enables VLANs to share a common link between Catalysts while allowing the receiver to separate the frames into the correct VLANs.

When a Catalyst forwards or floods a frame out an ISL-enabled trunk interface, the Catalyst encapsulates the original frame, identifying the source VLAN. Generically, the encapsulation looks like Figure 8-7. When the frame leaves the trunk interface at the source Catalyst, the Catalyst prepends a 26-octet ISL header and appends a 4-octet CRC to the frame. This is called double-tagging or two-level tagging encapsulation.

Figure 8-7. ISL Double-Tagging Encapsulation

graphics/08fig07.gif

The ISL header looks like that described in Table 8-6.

Table 8-6. ISL Encapsulation Description

DA: A 40-bit multicast address with a value of 0x01-00-0C-00-00 that indicates to the receiving Catalyst that the frame is an ISL-encapsulated frame.

Type: A 4-bit value indicating the source frame type. Values include 0000 (Ethernet), 0001 (Token Ring), 0010 (FDDI), and 0011 (ATM).

User: A 4-bit value usually set to zero, but it can be used for special situations when transporting Token Ring.

SA: The 802.3 MAC address of the transmitting Catalyst. This is a 48-bit value.

Length: A 16-bit value indicating the length of the user data and ISL header, excluding the DA, Type, User, SA, and Length fields and the ISL CRC.

SNAP: A three-byte field with a fixed value of 0xAA-AA-03.

HSA: A three-byte value that duplicates the high-order bytes of the ISL SA field.

VLAN: A 15-bit value reflecting the numerical value of the source VLAN that the user frame belongs to. Note that only 10 bits are used.

BPDU: A single-bit value that, when set to 1, indicates that the receiving Catalyst should immediately examine the frame as an end station because the data contains either a Spanning Tree, ISL, VTP, or CDP message.

Index: The value indicates the port from which the frame exited the source Catalyst.

Reserved: Token Ring and FDDI frames have special values that need to be transported over the ISL link. These values, such as AC and FC, are carried in this field. The value of this field is zero for Ethernet frames.

User Frame: The original user data frame is inserted here, including the frame's FCS.

CRC: ISL calculates a 32-bit CRC over the header and user frame. This double-checks the integrity of the message as it crosses an ISL trunk. It does not replace the user frame's FCS.
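To make the layout concrete, here is a minimal Python sketch that packs a 26-octet ISL header from the field widths listed in Table 8-6. It illustrates only the field order and sizes; the sample values (source MAC, VLAN, index, length) are placeholders rather than output from a real Catalyst, and the exact on-the-wire bit ordering is not modeled.

   # Field widths in bits from Table 8-6:
   # 40+4+4+48+16+24+24+15+1+16+16 = 208 bits = 26 octets before the user frame.
   ISL_FIELDS = [
       ("da", 40), ("type", 4), ("user", 4), ("sa", 48), ("length", 16),
       ("snap", 24), ("hsa", 24), ("vlan", 15), ("bpdu", 1),
       ("index", 16), ("reserved", 16),
   ]

   def pack_isl_header(values: dict) -> bytes:
       """Concatenate the fields most-significant-bit first."""
       word = 0
       for name, width in ISL_FIELDS:
           word = (word << width) | (values[name] & ((1 << width) - 1))
       total_bits = sum(width for _, width in ISL_FIELDS)
       return word.to_bytes(total_bits // 8, "big")

   header = pack_isl_header({
       "da": 0x01000C0000, "type": 0b0000, "user": 0,
       "sa": 0x00107B112233,           # hypothetical transmitting Catalyst MAC
       "length": 0,                    # placeholder; a real header carries the length
       "snap": 0xAAAA03, "hsa": 0x00107B,
       "vlan": 10, "bpdu": 0, "index": 3, "reserved": 0,
   })
   print(len(header))   # 26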

ISL trunk links can carry traffic from LAN sources other than Ethernet. For example, Token Ring and FDDI segments can communicate across an ISL trunk. Figure 8-8 shows two Token Rings on different Catalysts that need to communicate with each other. Ethernet-based VLANs also exist in the network. The connection between the Catalysts is an Ethernet trunk.

Figure 8-8. Using Token Ring ISL (TRISL) to Transport Token Ring Over an Ethernet Trunk

graphics/08fig08.gif

Unfortunately, Token Ring attributes differ significantly from Ethernet. Differences between Token Ring and Ethernet include the following:

  • Frame sizes: Token Ring supports frames both smaller and much larger than Ethernet.

  • Routing Information Field: Token Ring frames can include an RIF, which is meaningless in an Ethernet system.

  • Explorer frames: Token Ring stations can transmit an explorer frame to discover the relative location of a destination device. This frame type includes a bit indicating that the encapsulated frame is an explorer.

These differences make transporting Token Ring frames over an Ethernet segment challenging at the least.

To effectively transport Token Ring frames over an Ethernet link, the Catalyst must deal with each of these issues.

When Cisco developed ISL, it included a provision for Token Ring and FDDI over Ethernet. The ISL header includes a space for carrying Token Ring- and FDDI-specific header information. These are carried in the Reserved field of the ISL header.

When specifically dealing with Token Ring over ISL, the encapsulation is called Token Ring ISL (TRISL). TRISL adds seven octets to the standard ISL encapsulation to carry Token Ring information. The trunk passes both ISL- and TRISL-encapsulated frames.

Dynamic ISL (DISL)

Two Catalysts interconnected with a Fast Ethernet, Gigabit Ethernet, Fast EtherChannel, or Gigabit EtherChannel link can operate in a non-trunk mode using access links. When so configured, the traffic from only one VLAN passes over the link. More often, however, you want to transport traffic from more than one VLAN over the link. Multiplexing the data from the different VLANs over the link requires ISL, as described in the previous section. Both ends must agree to enable ISL to successfully trunk over the Fast Ethernet or Fast EtherChannel link. If one end enables ISL and the other end disables ISL, the packet encapsulation mismatch prevents successful user data communication over the link. The trunking end generates encapsulated frames and expects to see encapsulated frames, whereas the other end expects the inverse. The non-trunking end transmits unencapsulated frames, and the receiving trunking end looks for encapsulation, does not see it, and rejects the frames.

In the earliest versions of Catalyst code, you had to manually enable ISL at both ends of the link. With release 2.1 of the Catalyst software, an automatic method of enabling ISL was introduced requiring you to only configure one end of a link. The Cisco proprietary Dynamic Inter-Switch Link (DISL) protocol enables a Catalyst to negotiate with the remote side of a point-to-point Fast Ethernet, Gigabit Ethernet, or EtherChannel to enable or disable ISL. DISL, a data link layer protocol, transmits ISL configuration information with a destination MAC multicast address of 01-00-0C-CC-CC-CC. Note that Cisco uses this multicast address for several proprietary protocols. Cisco uses a different SNAP value, though, to distinguish the packet's purpose. For example, CDP uses the multicast address and a SNAP value of 0x2000, whereas DISL uses the multicast with a SNAP value of 0x2004. When a Catalyst receives a frame with this destination address, it does not forward the frame out any interface. Rather, it processes the frame on the Supervisor module.

A Catalyst trunk (both Fast Ethernet and Gigabit Ethernet) interface can support one of five trunk modes: off, on, desirable, auto, or nonegotiate. When set to off, on, auto, or desirable, the Catalyst sends ISL configuration frames every 30 seconds to ensure that the other end synchronizes to the current configuration. The syntax to enable trunking is as follows:

 set trunk mod_num/port_num [on | desirable | auto | nonegotiate] 

Note that off is not listed because it disables trunking as described below.

When configured as off, the interface locally disables ISL and negotiates (informs) the remote end of the local state. If the remote end configuration allows dynamic trunk state changes (auto or desirable), it configures itself as a non-trunk. If the remote side cannot change state (such as when configured to on), the local unit still disables ISL. Additionally, if the local unit is configured as off and it receives a request from the remote Catalyst to enable ISL, the local Catalyst refuses the request. Setting the port to off forces the interface to remain off, regardless of the ISL state at the remote end. Use this mode whenever you don't want an interface to be a trunk, but want it to participate in ISL negotiations to inform the remote side of its local policy.

On the other hand, if the local interface configuration is on, the Catalyst locally enables ISL and negotiates (informs) the remote side of the local state. If the remote side configuration is auto or desirable, the link enables trunking and ISL encapsulation. If the remote end state is off, the link never negotiates to an enabled trunk mode. The local Catalyst enables trunking while the remote end remains disabled. This creates an encapsulation mismatch preventing successful data transfers. Use trunk mode on when the remote end supports DISL, and when you want the local end to remain in trunk mode regardless of the remote end's mode.

The desirable mode causes a Catalyst interface to inform the remote end of its intent to enable ISL, but does not actually enable ISL unless the remote end agrees to enable it. The remote end must be set in the on, auto, or desirable mode for the link to establish an ISL trunk. Do not use the desirable mode if the remote end does not support DISL.

Note

Not all Catalysts, such as the older Catalyst 3000 and the Catalyst 1900, support DISL. If you enable the Catalyst 5000 end as desirable and the other end does not support DISL, a trunk is never established. Only use the desirable mode when you are confident that the remote end supports DISL, and you want to simplify your configuration requirements.


Configuring a Catalyst in auto mode enables the Catalyst to receive a request to enable ISL trunking and to automatically enter that mode. The Catalyst configured in auto never initiates a request to create a trunk and never becomes a trunk unless the remote end is configured as on or desirable. The auto mode is the Catalyst default configuration. If when enabling a trunk you do not specify a mode, auto is assumed. A Catalyst never enables trunk mode when left to the default values at both ends. When one end is set as auto, you must set the other end to either on or desirable to activate a trunk.

The nonegotiate mode establishes a Catalyst configuration where the Catalyst enables trunking, but does not send any configuration requests to the remote device. This mode prevents the Catalyst from sending DISL frames to set up a trunk port. Use this mode when establishing a trunk between a Catalyst and a router to ensure that the router does not erroneously forward the DISL requests to another VLAN component. You should also use this whenever the remote end does not support DISL. Sending DISL announcements over the link is unproductive when the receiving device does not support it.

Table 8-7 shows the different combinations of trunk modes and the corresponding effect.

Table 8-7. Results of Mixed DISL Modes

Local Mode (rows) versus Remote Mode (columns); each cell shows the resulting trunk state as local/remote.

                off        on         auto       desirable   nonegotiate
off             off/off    off/on     off/off    off/off     off/on
on              on/off     on/on      on/on      on/on       on/on
auto            off/off    on/on      off/off    on/on       off/on
desirable       off/off    on/on      on/on      on/on       on/on
nonegotiate     on/off     on/on      on/off     on/on       on/on

With all of these combinations, the physical layer might appear to be operational. If you do a show port, the display indicates connected. However, that does not necessarily mean that the trunk is operational. If the remote and local sides of the link do not have the same indication (on or off), you cannot transmit any traffic due to the encapsulation mismatch. Use the show trunk command to examine the trunk status. For example, in Table 8-7, the combination on/auto results in both sides trunking. The combination auto/auto results in both sides remaining configured as access links; therefore, trunking is not enabled. Both of these are valid in that both ends agree to trunk or not to trunk. However, the combination on/off creates a situation where the two ends of the link disagree about the trunk condition. Both sides pass traffic, but neither side can decode the received traffic. This is because of the encapsulation mismatch that results from the disagreement. The end with trunking enabled looks for ISL-encapsulated frames, but actually receives nonencapsulated frames. Likewise, the end that is configured as an access link looks for nonencapsulated Ethernet frames, but sees encapsulation headers that are not part of the Ethernet standard and interprets these as errored frames. Therefore, traffic does not successfully transfer across the link.

Do not confuse DISL with PAgP. In the section on EtherChannel, PAgP was introduced. PAgP allows two Catalysts to negotiate how to form an EtherChannel between them. PAgP does not negotiate whether or not to enter trunk mode; that is the domain of DISL and the Dynamic Trunk Protocol (DTP). DTP is a second-generation version of DISL and allows the Catalysts to negotiate whether or not to use 802.1Q encapsulation. This is discussed further in a later section in this chapter. On the other hand, note that DISL and DTP do not negotiate anything about EtherChannel. Rather, they negotiate whether to enable trunking.

Tip

It is best to hard code the trunk configuration on critical links between Catalysts such as in your core network, or to critical servers that are trunk attached.


Tip

If you configure the Catalyst trunk links for dynamic operations (desirable, auto), ensure that both ends of the link belong to the same VTP management domain. If they belong to different domains, Catalysts do not form the trunk link.


802.1Q/802.1p

In an effort to provide multivendor support for VLANs, the IEEE 802.1Q committee defined a method for multiplexing VLANs in local and metropolitan area networks. The multiplexing method, similar to ISL, offers an alternative trunk protocol in a Catalyst network. Like ISL, 802.1Q explicitly tags frames to identify the frame's VLAN membership. The tagging scheme differs from ISL in that ISL uses an external tag, and 802.1Q uses an internal tag.

The IEEE also worked on a standard called 802.1p. 802.1p allows users to specify priorities for their traffic. The priority value is inserted into the priority field of the 802.1Q header. If a LAN switch supports 802.1p, the switch might forward traffic flagged as higher priority before it forwards other traffic.

ISL's external tag scheme adds octets to the beginning and to the end of the original data frame. Because information is added to both ends of a frame, this is sometimes called double-tagging. (Refer back to Table 8-6 for ISL details.) 802.1Q is called an internal tag scheme because it adds octets inside of the original data frame. In contrast to double-tagging, this is sometimes called a single-tag scheme. Figure 8-9 shows an 802.1Q tagged frame.

Figure 8-9. 802.1Q/802.1p Frame Tagging Compared to ISL

graphics/08fig09.gif

The following bullets describe each of the fields in the 802.1Q header illustrated in Figure 8-9:

  • TPID (Tag Protocol Identifier): This indicates to the receiver that an 802.1Q tag follows. The TPID is a hexadecimal value of 0x8100.

  • Priority: This is the 802.1p priority field. Eight priority levels are defined in 802.1p and are embedded in the 802.1Q header.

  • CFI (Canonical Format Indicator): This single bit indicates whether the MAC addresses in the MAC header are in canonical (0) or non-canonical (1) format.

  • VID (VLAN Identifier): This indicates the source VLAN membership for the frame. The 12-bit field allows for VLAN values between 0 and 4095. However, VLANs 0, 1, and 4095 are reserved.

An interesting situation arises from the 802.1Q tag scheme. If the tag is added to a maximum-sized Ethernet frame, the frame size exceeds that specified by IEEE 802.3. Carrying the tag in a maximum-sized Ethernet frame requires 1522 octets, four more than the 1518-octet maximum the specification allows. The 802.3 committee created a workgroup, 802.3ac, to extend Ethernet's maximum frame size to 1522 octets.

If you have equipment that does not support the larger frame size, it might complain if it receives these oversized frames. These frames are sometimes called baby giants.
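As a rough illustration, the following Python sketch builds the 4-octet 802.1Q tag from the fields just described and shows the frame-size arithmetic behind baby giants; the priority and VLAN values are arbitrary examples.

   import struct

   def dot1q_tag(vid: int, priority: int = 0, cfi: int = 0) -> bytes:
       """Build the 4-octet 802.1Q tag: a 16-bit TPID of 0x8100 followed by
       3 priority bits, 1 CFI bit, and a 12-bit VLAN ID."""
       tci = ((priority & 0x7) << 13) | ((cfi & 0x1) << 12) | (vid & 0xFFF)
       return struct.pack("!HH", 0x8100, tci)

   tag = dot1q_tag(vid=10, priority=5)
   print(tag.hex())         # 8100a00a
   # A maximum-sized 1518-octet Ethernet frame grows to 1522 octets once the
   # tag is inserted, which is why untagged-only equipment sees baby giants.
   print(1518 + len(tag))   # 1522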

802.1Q, ISL, and Spanning Tree

When Cisco introduced switched LAN solutions, it recognized the possibility of a complex Catalyst topology. Consequently, Cisco supports multiple instances of Spanning Tree. You can create a different Spanning Tree topology for every VLAN in your network where each VLAN can have a different Catalyst for a Root Bridge. This allows you to optimize the bridged network topology for each VLAN. The selection of a Root Bridge for VLAN 10 might not be the best choice for VLAN 11, or any VLAN other than VLAN 10. Cisco's capability to support multiple instances of Spanning Tree in the Catalyst is called Per-VLAN Spanning Tree (PVST).

802.1Q, however, defines a single instance of Spanning Tree for all VLANs. All VLANs have the same Root Bridge in an 802.1Q network. This is called a Mono Spanning Tree (MST) topology. 802.1Q does not exclude the use of more than one instance of Spanning Tree, it just does not address the issues of how to support it.

A complication could arise in a hybrid ISL and 802.1Q environment. Without any special provisions, you need to restrict your Spanning Tree topology to a common topology for all VLANs. Cisco developed PVST+ which allows you to retain multiple Spanning Tree topologies, even in an 802.1Q mixed vendor environment. PVST+ tunnels PVST frames through the 802.1Q MST Spanning Tree network as multicast frames. Cisco uses the multicast address 01-00-0C-CC-CC-CD for PVST+. Unlike 802.1Q, PVST+ enables you to reuse a MAC address in multiple VLANs. If you have devices that need to do this, you need to use ISL and PVST+. Chapter 7, "Advanced Spanning Tree," provides more details on PVST+.

Configuring 802.1Q

Configuration tasks to enable 802.1Q trunks include the following:

  1. Specify the correct encapsulation mode (ISL or 802.1Q) for the trunk.

  2. Enable the correct DTP trunking mode or manually ensure that both ends of the link support the same trunk mode.

  3. Select the correct native VLAN-id on both ends of the 802.1Q trunk.

The following syntax enables an 802.1Q trunk on a Catalyst:

 set trunk mod_num/port_num [on|desirable|auto|nonegotiate] dot1q 

dot1q specifies the trunk encapsulation type. Specifically, it enables the trunk using 802.1Q encapsulation. This is an optional field for ISL trunks, but mandatory if you want dot1q. Of course, if you want an ISL trunk, you do not use dot1q, but rather ISL. If you do not specify the encapsulation type, the Catalyst uses the default value (ISL). Not all modules support both ISL and 802.1Q modes. Check current Cisco documentation to determine which modes your hardware supports. Further, not all versions of the Catalyst software support 802.1Q. Only since version 4.1(1) does the Catalyst 5000 family support dot1q encapsulation. Automatic negotiation of the encapsulation type between the two ends of the trunk was not available until version 4.2(1) of the Catalyst 5000 software. 4.2(1) introduced DTP, which is described in the following section. Prior to 4.2(1), you must manually configure the trunk mode.

Example 8-5 shows a sample output for configuring Port 1/1 for dot1q encapsulation. This works whether the interface is Fast Ethernet or Gigabit Ethernet.

Example 8-5 Sample Catalyst Configuration for 802.1Q Trunk
   Console> (enable) set trunk 1/1 desirable dot1q
   Port(s) 1/1 trunk mode set to desirable.
   Port(s) 1/1 trunk type set to dot1q.
   Console> (enable) 11/11/1998,23:03:17:DTP-5:Port 1/1 has become dot1q trunk

Enabling 802.1Q trunks on a router is similar to enabling ISL. Like ISL, you must include an encapsulation statement in the interface configuration. Example 8-6 shows a sample router configuration.

Example 8-6 Sample Router Configuration for 802.1Q
   ! Specify the interface to configure
   interface fastether 2/0.1
   ! Set the encapsulation before assigning protocol addresses to the subinterface
     encapsulation dot1q 200
     ip address 172.16.10.1 255.255.255.0
     ipx network 100

The number at the end of the encapsulation statement specifies the VLAN number. The 802.1Q specification allows VLAN values between 0 and 4095 (with reserved VLAN values as discussed previously). However, a Catalyst supports VLAN values up to 1005. Generally, do not use values greater than 1005 when specifying the 802.1Q VLAN number to remain consistent with Catalyst VLAN numbers. Note that newer code releases allow you to map 802.1Q VLAN numbers into the valid ISL number range. This is useful in a hybrid 802.1Q/ISL environment by enabling you to use any valid 802.1Q value for 802.1Q trunks, while using valid ISL values on ISL trunks.

Dynamic Trunk Protocol (DTP)

802.1Q offers an alternative to Cisco's proprietary ISL encapsulation protocol. That means a Fast Ethernet/EtherChannel link now has even more possible combinations, because a trunk can use either ISL encapsulation or 802.1Q tags. Just like ISL, 802.1Q trunks can be set for on, off, desirable, or auto. Both ends of a link must, however, use the same encapsulation, either ISL or 802.1Q. With version 4.1, you need to manually configure the encapsulation mode at both ends to make them compatible. In release 4.2, Cisco introduced a new link negotiation protocol called Dynamic Trunk Protocol (DTP), which enhances DISL functionality. DTP negotiates the two ends of the link to a compatible mode, reducing the possibility of incompatibility when configuring a link. Note the DTP message in Example 8-7 indicating that the interface became a trunk. If you select an ISL trunk, DTP reports the action if you have software release 4.2 or later, as shown in the output in Example 8-7. Note that PAgP also reports messages. Although PAgP sets up EtherChannel, it reports port status even for non-EtherChannel segments.

Example 8-7 DTP Message When Establishing an ISL Trunk
   Port(s) 1/1 trunk mode set to on.
   Console> (enable) 11/12/1998,17:56:39:DTP-5:Port 1/1 has become isl trunk
   11/12/1998,17:56:40:PAGP-5:Port 1/1 left bridge port 1/1.
   11/12/1998,17:56:40:PAGP-5:Port 1/1 joined bridge port 1/1
Restricting VLANs on a Trunk

You can elect to restrict which VLANs can cross a trunk. By default, the Catalyst is authorized to transport all VLANs over a trunk. You might want, instead, to allow only VLANs 5 through 10 over a trunk. You can specify the VLANs to transport as part of the set trunk command, or you can remove authorized VLANs from a trunk with the clear trunk command. Example 8-8 shows an example of clearing VLANs from a trunk and then adding VLANs back.

Example 8-8 Modifying Authorized VLANs on a Trunk
   Console> (enable) clear trunk 1/1 10-20
   Removing Vlan(s) 10-20 from allowed list.
   Port 1/1 allowed vlans modified to 1-9,21-1005.
   Console> (enable) set trunk 1/1 15
   Adding vlans 15 to allowed list

At the end of Example 8-8, the complete list of allowed VLANs is 1-9, 15, 21-1005.
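To spell out that arithmetic, here is a small Python sketch (not Catalyst code) that mimics the clear trunk and set trunk steps from Example 8-8 on a set of VLAN numbers.

   def vlan_ranges(vlans: set) -> str:
       """Render a set of VLAN numbers as compact ranges, such as 1-9,15,21-1005."""
       nums, out = sorted(vlans), []
       start = prev = nums[0]
       for v in nums[1:]:
           if v != prev + 1:
               out.append(f"{start}-{prev}" if start != prev else f"{start}")
               start = v
           prev = v
       out.append(f"{start}-{prev}" if start != prev else f"{start}")
       return ",".join(out)

   allowed = set(range(1, 1006))     # default: VLANs 1-1005 allowed
   allowed -= set(range(10, 21))     # clear trunk 1/1 10-20
   allowed |= {15}                   # set trunk 1/1 15
   print(vlan_ranges(allowed))       # 1-9,15,21-1005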

You can use these commands on any trunk, regardless of its tagging mode. Note that if you enter these commands on an EtherChannel trunk, the Catalyst modifies all ports in the bundle to ensure consistency. Ensure that you configure the remote link to carry the same set of VLANs.


