Performance management of system interfaces boils down to monitoring bandwidth utilization: counting the octets that flow across a given interface and measuring those counts against the bandwidth of the interface. This way, you can determine whether you are overutilizing a particular link and when you need to increase its capacity. In Chapter 4, "Performance Measurement and Reporting," we discussed the method for calculating link utilization.
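That calculation can be sketched in code. The following is a minimal illustration, not the book's own script; the function and variable names are mine, and it assumes the usual formula of octets transferred over the polling interval against the interface speed:

```python
def link_utilization(in_octets_delta, out_octets_delta, poll_seconds, if_speed_bps):
    """Percent utilization of a link over one polling interval.

    in_octets_delta / out_octets_delta: change in ifInOctets / ifOutOctets
    between two polls; if_speed_bps: interface bandwidth (ifSpeed) in bits/sec.
    """
    bits_transferred = (in_octets_delta + out_octets_delta) * 8
    return bits_transferred * 100.0 / (poll_seconds * if_speed_bps)

# 3,750,000 octets total over 60 seconds on a 10 Mbps link
print(link_utilization(2_000_000, 1_750_000, 60, 10_000_000))  # 5.0 percent
```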
Other data that are generally of interest are the total amounts of broadcast and multicast traffic relative to overall traffic. The original ifTable in RFC 1213 had counters only for unicast and non-unicast packets: ifInUcastPkts, ifOutUcastPkts, ifInNUcastPkts, and ifOutNUcastPkts. The non-unicast counters lumped multicast and broadcast traffic together. As more and more applications use IP multicast, it is important to have counters with enough resolution to distinguish multicast from broadcast traffic. RFC 1573 (later superseded by RFC 2233) defines new counters in the interface extension table (ifXTable): ifInMulticastPkts, ifInBroadcastPkts, ifOutMulticastPkts, and ifOutBroadcastPkts. These counters are supported starting in Cisco Catalyst switch software version 3.1 and Cisco IOS Release 12.0.

Performance Measurements for Full-Duplex Interfaces

For full-duplex interfaces, you will want to monitor input and output traffic separately. You often hear that the effective bandwidth of a full-duplex link is twice the transmission speed. For example, a 100 Mbps full-duplex Ethernet link has an effective bandwidth of 200 Mbps because, if both directions were fully utilized, 200 megabits of data would flow on the link each second. But this bit of common wisdom needs more careful scrutiny. Consider a 100 Mbps full-duplex interface in the path to an FTP server, where most of the traffic is one-way. The traffic toward the FTP server is relatively light, consisting of requests for files. The traffic in the other direction, from the FTP server, consists of the files themselves and is much heavier. When is this interface fully utilized? You might be tempted to add the transmitted and received traffic rates and divide by 200 Mbps, but that result would be misleading.
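To see why, compare the two candidate calculations for a link on which one direction is saturated. This is a sketch with illustrative numbers and function names of my choosing, not code from the book:

```python
def utilization_avg(in_bps, out_bps, if_speed_bps):
    """Tempting but misleading: sum both directions over twice the speed."""
    return (in_bps + out_bps) * 100.0 / (2 * if_speed_bps)

def utilization_max(in_bps, out_bps, if_speed_bps):
    """For full duplex, the busier direction governs."""
    return max(in_bps, out_bps) * 100.0 / if_speed_bps

# 5 Mbps toward an FTP server, 100 Mbps back from it, 100 Mbps full duplex
print(utilization_avg(5e6, 100e6, 100e6))  # 52.5 -- looks half loaded
print(utilization_max(5e6, 100e6, 100e6))  # 100.0 -- the link is actually full
```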
For example, if the traffic to the FTP server averages 5 Mbps and the traffic from the server is pegged at 100 Mbps, you might conclude that the interface is at just over 50 percent utilization. But we know that the direction from the FTP server is fully loaded at 100 Mbps; there is no more capacity available on this link. It is 100 percent loaded, not 50 percent loaded. You must take the maximum of the input and output traffic rates and divide it by the transmission speed of the interface. See "Measuring Utilization" in Chapter 4 for more information on calculating utilization.

Performance Measurements for Sub-interfaces

A common question is how to measure utilization of sub-interfaces. Because utilization is measured against bandwidth, the question becomes "What is the bandwidth of a sub-interface?" That is a hard question to answer. It does not make sense to measure multiple sub-interfaces against the bandwidth of the same physical interface: the utilization of each sub-interface may appear quite low, which leads you to believe that all is okay, yet combined they might total almost 100 percent of the physical interface. It is still of interest, however, to know the amount of traffic flowing through a sub-interface. The easiest way to measure sub-interface traffic is through the sub-interface's entry in the interfaces table. Depending on the speed of the interface, you can use ifInOctets and ifOutOctets, or, for high-speed interfaces (greater than 100 Mbps), ifHCInOctets and ifHCOutOctets. An alternative is to use the media-specific MIB; for example, the frCircuitTable from the Frame Relay MIB is a way to measure PVC traffic. However, it is often easier to use the same method for every interface because you can reuse the same scripts to poll the data. Only the indexing changes from interface to interface.
In the next few sections, we go over methods to retrieve performance data, either from the command line or via SNMP from the MIB. For reporting and alarm purposes, the data gathered via SNMP requires less processing and is easier to collect. The show commands are better suited to troubleshooting a problem on a particular router or switch; they are very useful for drilling down to the cause of a particular problem.

MIB Variables for Interface Traffic

There are several MIB variables to watch for interface traffic. From RFC 2233, the relevant MIB objects are as follows:

ifInOctets and ifOutOctets, the octets received and transmitted on the interface
ifHCInOctets and ifHCOutOctets, the 64-bit (high-capacity) octet counters for high-speed interfaces
ifInMulticastPkts, ifInBroadcastPkts, ifOutMulticastPkts, and ifOutBroadcastPkts, the multicast and broadcast packet counters from the ifXTable
From RFC 1213, the relevant MIB object is sysUpTime, which returns the number of timeticks since the device was initialized. This object is needed on each poll to determine whether the device restarted and reset the interface counters, as opposed to the counters simply rolling over. From the CISCO-STACK-MIB, the relevant MIB objects are sysClearMacTime and/or sysClearPortTime, which return the number of timeticks since the counters were cleared. These objects are needed on each poll of Catalyst switches to determine whether the counters were cleared.
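The restart and rollover checks described above can be sketched as follows. This is a minimal illustration with names of my choosing, assuming 32-bit counters; on a Catalyst switch, the same comparison would be made against sysClearMacTime or sysClearPortTime to catch manually cleared counters:

```python
MAX_COUNTER32 = 2**32

def counter_delta(prev, curr, prev_uptime, curr_uptime):
    """Delta of a 32-bit SNMP counter (e.g. ifInOctets) between two polls.

    prev_uptime and curr_uptime are sysUpTime values sampled with each poll.
    If sysUpTime went backward, the device re-initialized and its counters
    restarted from zero, so the current value is itself the delta.
    Otherwise, a current value lower than the previous one means the
    32-bit counter wrapped.
    """
    if curr_uptime < prev_uptime:   # device restarted; counters reset
        return curr
    if curr < prev:                 # counter rolled over past 2^32 - 1
        return curr + MAX_COUNTER32 - prev
    return curr - prev

print(counter_delta(100, 150, 1000, 2000))            # normal delta: 50
print(counter_delta(4_294_967_290, 10, 1000, 2000))   # rollover: 16
print(counter_delta(500_000, 1_234, 99_999, 50))      # restart: 1234
```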
CLI Commands for Interface Traffic

The show interface command gives complete performance data for a given interface. Example 12-1 demonstrates its usage.

Example 12-1 Using the show interface command to obtain interface data.

Router# show interfaces Ethernet 0
Ethernet 0 is up, line protocol is up
  Hardware is MCI Ethernet, address is aa00.0400.0134 (bia 0000.0c00.4369)
  Internet address is 131.108.1.1, subnet mask is 255.255.255.0
  MTU 1500 bytes, BW 10000 Kbit, DLY 1000 usec, rely 255/255, load 1/255
  Encapsulation ARPA, loopback not set, keepalive set (10 sec)
  ARP type: ARPA, PROBE, ARP Timeout 4:00:00
  Last input 0:00:00, output 0:00:00, output hang never
  Last clearing of "show interface" counters never   C
  Output queue 0/40, 0 drops; input queue 0/75, 2 drops
  Five minute input rate 61000 bits/sec, 4 packets/sec   A
  Five minute output rate 1000 bits/sec, 2 packets/sec
     2295197 packets input   B, 305539992 bytes, 0 no buffer
     Received 1925500 broadcasts, 0 runts, 0 giants
     3 input errors, 3 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     0 input packets with dribble condition detected
     3594664 packets output, 436549843 bytes, 0 underruns
     output errors, 1790 collisions, 10 interface resets, 0 restarts

The annotated information in Example 12-1 is as follows:
A The five-minute input rate for the interface, in bits per second and packets per second.
B The total packets input on the interface since the counters were last cleared.
C When the "show interface" counters were last cleared; in this case, never.
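For ad hoc scripting, the five-minute rates can be scraped from this command output. A rough sketch follows; the function name and regular expression are mine, and for ongoing collection the SNMP counters discussed earlier remain the better source:

```python
import re

def five_minute_rates(show_interface_output):
    """Extract the five-minute input/output bit rates from show interface text.

    Returns a tuple (input_bps, output_bps); either element is None if the
    corresponding line is not found.
    """
    pattern = r"Five minute (input|output) rate (\d+) bits/sec"
    rates = {m.group(1): int(m.group(2))
             for m in re.finditer(pattern, show_interface_output)}
    return rates.get("input"), rates.get("output")

sample = """Five minute input rate 61000 bits/sec, 4 packets/sec
Five minute output rate 1000 bits/sec, 2 packets/sec"""
print(five_minute_rates(sample))  # (61000, 1000)
```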