Hardware Characteristics of Routers


The performance data you can gather for hardware in routers and switches is minimal. But one aspect of hardware that indirectly affects performance is effective backplane utilization versus backplane oversubscription. This is especially true on high-end core routers and switches, such as the 7513 router and the 5500 switch. Physical card positioning in these devices is crucial for achieving optimal network performance. Because these devices typically sit in the core of the network, it is important to understand the interdependencies between card positioning and the backplane bus architecture of the device. Understanding the architecture and physical layout of the devices also helps you manage assets more effectively.

As an example, we'll first look briefly at the 7513 router's backplane bus architecture and its dependencies on the interface processors (IPs) and their respective slot positions. Based on these dependencies, you can reference the appropriate MIBs and show commands, as defined in sections of this chapter, to draw a correlation between the slot position or card type and the relative bus speeds on the backplane of the chassis. We'll then consider some specifics of the hardware characteristics, such as hardware buffer carving and IDBs (Interface Descriptor Blocks). Understanding how these details apply to the different types of interface cards will help you determine the appropriate kind of hardware to place in the appropriate location in the chassis.

Backplane Bus Architecture in the 7x00 Series Routers

The 7513 and 7507 have two buses of 1.066 Gbps each, one on either side of the RSPs (Route Switch Processors).

On the 7513:

CyBus 0 is slots 0 through 5

CyBus 1 is slots 8 through 12

On the 7507:

CyBus 0 is slots 0 and 1

CyBus 1 is slots 4 through 6

The 7505, of course, has only one 1.066 Gbps bus.

When loading IPs in the 7500 series routers, follow these guidelines:

  • Distribute IPs evenly across the two buses.

  • Start with the high-bandwidth IPs (Interface Processors) and VIPs (Versatile Interface Processors), if they are being used: namely, the CIP (Channel IP), AIP (ATM IP), FEIP (Fast Ethernet IP), and VIPs. Balance them on the buses by putting one on one side of the RSP(s) and then one on the other.

  • Start filling the slots nearest the RSP; then continue out to the sides of the system. This matches the default slot loading process used in manufacturing.

  • Add in any other interface processors in any slot, and again balance them on the two sides.

IDBs (Interface Descriptor Blocks)

When deciding what kind of cards (IPs) and how many cards to put in a router, one more characteristic needs to be taken into consideration, namely IDBs, or Interface Descriptor Blocks. Each of the following interface types requires an IDB:

  • Virtual (tunneled, emulated LAN, and vLAN)

  • Logical

  • Sub-interfaces

  • Channel groups

Table 10-1 lists the maximum limits for IDBs across the different IOS releases as they apply to the 7500 series routers. Other router platforms may have different limits, but most fall in the 300 range. Note that the 7000 series router reserves 40 IDBs for its own use.

Table 10-1. IDB Limits

IOS Release   IDB Limit
11.0          256
11.1          300
11.1CA        1024
11.2          300
12.0          1000
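The limits in Table 10-1 can be captured in a small lookup when planning how many interfaces (physical plus logical) a chassis can carry. This is a hypothetical planning helper, not a Cisco utility; the `reserved` parameter models the 40 IDBs the 7000 series keeps for itself.

```python
# Hypothetical planning helper based on Table 10-1 (7500-series limits);
# not a Cisco tool. Other platforms have different limits.
IDB_LIMITS = {"11.0": 256, "11.1": 300, "11.1CA": 1024, "11.2": 300, "12.0": 1000}

def idb_headroom(ios_version, configured_idbs, reserved=0):
    """Remaining IDBs for a release; pass reserved=40 to model the
    7000 series' own reservation."""
    return IDB_LIMITS[ios_version] - reserved - configured_idbs

print(idb_headroom("12.0", 600))               # room for 400 more IDBs
print(idb_headroom("11.0", 100, reserved=40))  # 116 left under the 256 cap
```

A negative result would indicate the configuration exceeds the release's IDB limit.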

Hardware Buffers

In addition to IDBs, hardware buffer allocation is a factor in determining what gets configured on the router. Hardware buffers are divided up at boot time. Based on the number of like interface media and their common MTU sizes, the appropriate hardware buffer sizes and quantities are allotted to unique buffer pools. These buffer pools are taken from MEMD, or packet memory. Packet memory resides on the RSP and is typically 2 MB in size. The older RSPs and the RP/SP combination in the 7000 series have 512 KB of MEMD.

This "buffer carving" takes place only on the high-end routers. Low-end routers (4x00, 3600, 2500 series) use I/O memory or shared memory for their hardware buffer allocation for the interfaces. The more interfaces you have in a router, the smaller your hardware buffer pool will be for each interface. When there is a shortage of hardware buffers, "ignores" are typically incremented on the interface because there is no place to put the incoming packet.

MEMD Buffer Carving Details

MEMD buffer carving is the portion of CBUS initialization in which MEMD (packet) buffers are allocated, based on media interface bandwidths and MTUs. Buffers of a given size are placed in a common free pool shared by all interfaces with closely matched MTUs. The buffer-carving algorithm is based on fair share, but it has some built-in low-water marks and configurable high-water marks. The algorithm works like this:

  1. All of the interfaces' MTUs are consulted, and the interfaces are grouped into similar buffer pools. For example, a system with six Ethernets and two FDDIs would get two buffer pools; the six Ethernets would share a pool of 1500-byte buffers, whereas the two FDDIs would share a pool of 4500-byte buffers.

  2. The default receive bandwidths of all interfaces within a buffer pool are summed to form an aggregate receive bandwidth for that pool. Using the previous example, the 1500-byte Ethernet pool would be assigned 60 Mbps of aggregate receive bandwidth (6 × 10 Mbps), whereas the 4500-byte FDDI pool would get 200 Mbps (2 × 100 Mbps).

  3. MEMD buffer space is then divided based upon proportional aggregate bandwidths. Again using the previous example, the Ethernet pool would get (60 / (60 + 200)), or 23 percent of MEMD, whereas the FDDI pool would get (200 / (60 + 200)), or 77 percent of MEMD.

  4. The number of buffers in each pool is then calculated and divided evenly among interfaces within the pool (regardless of relative bandwidth). In this example, the result is 79 Ethernet buffers ((0.23 × 504 KB) / 1500) and 88 FDDI buffers ((0.77 × 504 KB) / 4500). This gives 12 buffers per Ethernet and 44 buffers per FDDI. The per-interface buffer count is referred to as an interface's receive queue limit (RQL).

  5. Before doing the final carving, the system verifies that there is a minimum of 16 KB of buffer space in every pool, that the configurable minimum burst count of buffers is available to every interface within a pool, and that no interface's configurable maximum buffer limit has been exceeded.

  6. Finally, the RSP carves out the buffers as specified by the above algorithm. The actual packet memory is not touched at this time, only the buffer headers that point to packet memory.
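The steps above can be sketched in a few lines of code. This is an illustrative model only, not Cisco source: the 504 KB of carvable MEMD follows the chapter's worked example, the function and interface names are hypothetical, and the minimums and overhead applied by the real algorithm (steps 5 and 6) are not modeled, so the per-interface split can differ slightly from the chapter's figures.

```python
# Illustrative sketch of MEMD buffer carving, assuming 504 KB of carvable
# packet memory as in the chapter's example. Not Cisco source code.
MEMD_CARVABLE = 504 * 1024

def carve(interfaces):
    """interfaces: list of (name, mtu, default_rx_bandwidth_mbps) tuples.
    Returns {mtu: (pool_buffer_count, per_interface_rql)}."""
    # Step 1: group interfaces into pools by MTU
    pools = {}
    for name, mtu, bw in interfaces:
        pools.setdefault(mtu, []).append((name, bw))
    # Step 2: sum default receive bandwidths per pool
    agg = {mtu: sum(bw for _, bw in members) for mtu, members in pools.items()}
    total_bw = sum(agg.values())
    result = {}
    for mtu, members in pools.items():
        share = agg[mtu] / total_bw                    # Step 3: proportional share
        nbufs = int(share * MEMD_CARVABLE) // mtu      # Step 4: buffers in the pool...
        rql = nbufs // len(members)                    # ...split evenly (the RQL)
        result[mtu] = (nbufs, rql)
    return result

# Chapter example: six 10-Mbps Ethernets (MTU 1500), two 100-Mbps FDDIs (MTU 4500)
ifs = [("Ethernet%d" % i, 1500, 10) for i in range(6)] + \
      [("Fddi%d" % i, 4500, 100) for i in range(2)]
print(carve(ifs))
```

Running this reproduces the 79-buffer Ethernet pool and the 88-buffer FDDI pool (44 buffers per FDDI) from the example.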

MEMD Quantities

Two important quantities mentioned in the preceding discussion of buffer carving are maximum buffers and receive queue limit (RQL). Another important quantity is transmit queue limit (TQL). Following are brief explanations of each of these quantities:

  • Maximum buffers: A 512-KB system has room for 320 Ethernet-sized MEMD buffers or 110 FDDI-sized buffers. With the optional 2 MB of MEMD on the SSP or RSP, you can quadruple the FDDI buffers to 440. However, you are limited to just 470 Ethernet-sized buffers on a 2 MB MEMD system due to the buffer header limitation.

  • Receive queue limit (RQL): Each interface is configured with an RQL equal to the total buffers in its buffer pool divided by the number of interfaces sharing that pool. If that number amounts to less than 16 KB of buffer space, the RQL is overridden with (16384 / MTU). An interface's RQL count is decremented each time a receive buffer is removed from its free pool, and the interface is not allowed to allocate receive buffers when the count drops below one. The RSP or SP increments the count each time it returns one of the interface's receive buffers to the free pool after the packet is transmitted, copied, or dropped.

  • Transmit queue limit (TQL): The transmit queue limit is the maximum number of buffers that the RSP will leave outstanding on an interface's transmit queue before transmit packets are dropped. First, find the smallest buffer pool whose bandwidth is greater than the interface's, and call its buffer count N. Then find the number of interfaces in the current interface's receive buffer pool, I. For each interface, TQL is calculated as N divided by the square root of I.
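The TQL rule can be sketched as follows. This is a hypothetical illustration of the stated formula, not Cisco source code; the pool figures in the usage line reuse the earlier carving example (a 79-buffer, 60-Mbps Ethernet pool and an 88-buffer, 200-Mbps FDDI pool).

```python
# Hypothetical sketch of the TQL calculation described above.
import math

def transmit_queue_limit(pools, iface_bw_mbps, ifaces_in_rx_pool):
    """pools: list of (aggregate_bw_mbps, buffer_count) tuples.
    N = buffer count of the smallest pool with bandwidth above the
    interface's; I = interfaces sharing its receive pool; TQL = N / sqrt(I)."""
    n = min(count for bw, count in pools if bw > iface_bw_mbps)
    return int(n / math.sqrt(ifaces_in_rx_pool))

# A 10-Mbps Ethernet sharing its receive pool with five other Ethernets:
# N = 79 (the smaller qualifying pool), I = 6, so TQL = 79 / sqrt(6) = 32
print(transmit_queue_limit([(60, 79), (200, 88)], 10, 6))
```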



Performance and Fault Management: A Practical Guide to Effectively Managing Cisco Network Devices (Cisco Press Core Series)
ISBN: 1578701805
Year: 2005
Pages: 200