Load Balancing with Networking Technologies


This section provides an overview of the load-balancing functionality supported by Ethernet, IP, and FC.

Ethernet Load Balancing

As discussed in chapter 5, "The OSI Physical and Data Link Layers," Ethernet supports the aggregation of multiple physical links into a single logical link. However, IEEE 802.3-2002 does not specify the load-balancing algorithm; each Ethernet switch vendor may choose its own. The chosen algorithm must transmit all frames associated with a conversation on a single link. A conversation is defined as a series of frames exchanged between a single pair of end nodes that the transmitting end node requires to be delivered in order. This rule ensures interoperability between switches that use different load-balancing algorithms. Because a conversation cannot be discerned from unordered frames using only the Ethernet header fields, many Ethernet switch vendors historically have employed algorithms that load-balance all traffic based on source or destination addresses. This ensures that all traffic exchanged between a given pair of end nodes traverses a single link within an Ethernet port channel. Newer techniques use fields in the protocol headers at OSI Layers 3 and 4 to identify flows. After a flow is identified, the flow identifier is used to implement flow-based load balancing within an Ethernet port channel. Flow-based algorithms improve the utilization of each link within a port channel by distributing the load more evenly across all available links.
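The contrast between address-based and flow-based link selection can be sketched as a hash over different header fields. The following is a minimal illustration, not any vendor's actual algorithm; the field names are assumptions:

```python
import zlib

def select_link(frame: dict, num_links: int, flow_based: bool = False) -> int:
    """Return the port-channel member link index for this frame.

    Address-based hashing keys only on the MAC addresses, so all
    traffic between one pair of end nodes uses one link. Flow-based
    hashing adds Layer 3/4 fields, so distinct flows between the same
    end nodes can use different links, while all frames of any one
    flow still hash to the same link (preserving per-flow order).
    """
    key_fields = [frame["src_mac"], frame["dst_mac"]]
    if flow_based:
        key_fields += [frame["src_ip"], frame["dst_ip"],
                       frame["src_port"], frame["dst_port"]]
    key = "|".join(str(f) for f in key_fields).encode()
    return zlib.crc32(key) % num_links
```

With `flow_based=False`, two TCP connections between the same hosts always land on the same link; with `flow_based=True`, they may be spread across links.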

A complementary technique exists based on the implementation of multiple VLANs on a single physical infrastructure. The Multiple Spanning Tree Protocol (MSTP) calculates an independent spanning tree in each VLAN. This enables network administrators to modify link costs independently within each VLAN. When done properly, each inter-switch link (ISL) within a shared infrastructure is utilized. For example, some VLANs might prefer ISL A to reach switch X, while other VLANs prefer ISL B to reach switch X. If ISL A fails, all VLANs use ISL B to reach switch X. Likewise, if ISL B fails, all VLANs use ISL A to reach switch X. ISLs A and B are both operational, and the total traffic load is spread across both ISLs. When this technique is not employed, all VLANs use the same ISL to reach switch X, while the other ISL remains operational but unused until the primary ISL fails.
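The per-VLAN cost technique can be sketched as follows; the topology, VLAN numbers, and cost values are assumptions chosen only to illustrate the failover behavior described above:

```python
def best_isl(costs: dict, up_links: set) -> str:
    """Pick the lowest-cost operational ISL for one VLAN."""
    candidates = {isl: cost for isl, cost in costs.items() if isl in up_links}
    return min(candidates, key=candidates.get)

# Hypothetical per-VLAN link costs toward switch X. Costs differ per
# VLAN, so VLAN 10 prefers ISL A while VLAN 20 prefers ISL B.
vlan_costs = {
    10: {"ISL-A": 10, "ISL-B": 20},
    20: {"ISL-A": 20, "ISL-B": 10},
}
```

With both ISLs up, traffic splits across them by VLAN; if either ISL fails, every VLAN converges on the survivor.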

IP Load Balancing

Each IP routing protocol defines its own rules for load balancing. Most IP routing protocols support load balancing across equal-cost paths, while some also support load balancing across unequal-cost paths. Although unequal-cost load balancing makes more efficient use of available bandwidth, many administrators consider it more trouble than it is worth: its comparatively complex nature makes configuration and troubleshooting more difficult. In practice, equal-cost load balancing is almost always preferred.

The router architecture and supported forwarding techniques also affect how traffic is load-balanced. For example, Cisco Systems routers can load-balance traffic on a simple round-robin basis or on a per-destination basis. The operating mode of the router and its interfaces determines which load-balancing behavior is exhibited. When process switching is configured, each packet is forwarded based on a route table lookup. The result is round-robin (per-packet) load balancing when multiple equal-cost paths are available. Alternatively, route table lookup information can be cached on interface cards so that only one route table lookup is required per destination IP address. Each subsequent IP packet sent to a given destination IP address is forwarded on the same path as the first packet forwarded to that address. The result is per-destination load balancing when multiple equal-cost paths are available. Note that the source IP address is not relevant to the forwarding decision.
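The two behaviors can be contrasted in a short sketch. This is a simplified model, not Cisco's implementation; the next-hop names are assumptions:

```python
import itertools

class EqualCostForwarder:
    """Model of next-hop selection over equal-cost paths."""

    def __init__(self, next_hops):
        self._rr = itertools.cycle(next_hops)  # process-switching style
        self._cache = {}                       # per-destination cache

    def per_packet(self, dst_ip: str) -> str:
        # Every packet triggers a fresh choice: round-robin rotation.
        return next(self._rr)

    def per_destination(self, dst_ip: str) -> str:
        # The first packet to a destination populates the cache; all
        # later packets to that destination reuse the same path. The
        # source address plays no part in the decision.
        if dst_ip not in self._cache:
            self._cache[dst_ip] = next(self._rr)
        return self._cache[dst_ip]
```

Per-packet mode alternates paths on every packet; per-destination mode pins each destination address to one path after the first lookup.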

Each IP routing protocol determines the cost of a path using its own metric. Thus, the "best" path from host A to host B might differ from one routing protocol to another. Likewise, one routing protocol might determine that two or more equal-cost paths exist between host A and host B, while another routing protocol might determine that only one best path exists. So, the ability to load-balance is somewhat dependent upon the choice of routing protocol. When equal-cost paths exist, administrators can configure, for each routing protocol, the number of paths across which traffic is distributed.

A complementary technology, called the Virtual Router Redundancy Protocol (VRRP), is defined in IETF RFC 3768. VRRP evolved from Cisco Systems' proprietary technology called Hot Standby Router Protocol (HSRP). VRRP enables a "virtual" IP address to be used as the IP address to which end nodes transmit traffic (the default gateway address). Each virtual IP address is associated with a "floating" Media Access Control (MAC) address.

VRRP implements a distributed priority mechanism that enables multiple routers to potentially take ownership of the virtual IP address and floating MAC address. The router with the highest priority owns the virtual IP address and floating MAC address. That router processes all traffic sent to the floating MAC address. If that router fails, the router with the next highest priority takes ownership of the virtual IP address and floating MAC address. VRRP can augment routing protocol load-balancing functionality by distributing end nodes across multiple routers. For example, assume that an IP subnet containing 100 hosts has two routers attached via interface A. Two VRRP addresses are configured for interface A in each router. The first router has the highest priority for the first VRRP address and the lowest priority for the second VRRP address. The second router has the highest priority for the second VRRP address and the lowest priority for the first VRRP address. The first 50 hosts are configured to use the first VRRP address as their default gateway. The other 50 hosts are configured to use the second VRRP address as their default gateway. This configuration enables half the traffic load to be forwarded by each router. If either router fails, the other router assumes ownership of the failed router's VRRP address, so none of the hosts are affected by the router failure.
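The two-address example above can be sketched as a simple priority election. Router names and priority values are assumptions for illustration; real VRRP also involves advertisements, timers, and preemption rules not modeled here:

```python
def vrrp_owner(priorities: dict, alive: set) -> str:
    """The live router with the highest priority owns the address."""
    live = {router: prio for router, prio in priorities.items() if router in alive}
    return max(live, key=live.get)

# Priorities per virtual IP address: each router holds the highest
# priority for one address and the lowest for the other.
vip1_priorities = {"router1": 200, "router2": 100}
vip2_priorities = {"router1": 100, "router2": 200}
```

With both routers alive, each owns one virtual address and forwards half the hosts' traffic; if either fails, the survivor owns both addresses and no host notices.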

The Gateway Load Balancing Protocol (GLBP) augments VRRP. GLBP is currently proprietary to Cisco Systems. Load balancing via VRRP requires two or more default gateway addresses to be configured for a single subnet. That requirement increases the administrative overhead associated with Dynamic Host Configuration Protocol (DHCP) configuration and static end node addressing. Additionally, at least one IP address per router is consumed by VRRP. GLBP addresses these deficiencies by dissociating the virtual IP address from the floating MAC address. GLBP enables all routers on a subnet to simultaneously own a single virtual IP address. Each router has a floating MAC address associated with the virtual IP address. One router responds to all ARP requests associated with the virtual IP address. Each ARP reply contains a different floating MAC address. The result is that all end nodes use a single default gateway address, but the end nodes are evenly distributed across all available GLBP-capable routers. When a router fails, one of the other routers takes ownership of the floating MAC address associated with the failed router.
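The ARP-reply rotation at the heart of GLBP can be sketched as follows. The MAC values are made up, and real GLBP adds roles (AVG/AVF), weighting, and failover logic not shown here:

```python
import itertools

class GlbpArpResponder:
    """Model of GLBP-style ARP handling: one virtual IP, one floating
    MAC per router, with replies rotating through the MACs so that
    hosts are distributed across all routers."""

    def __init__(self, floating_macs):
        self._macs = itertools.cycle(floating_macs)

    def arp_reply(self, virtual_ip: str) -> str:
        # Every ARP request for the single virtual IP address is
        # answered with the next router's floating MAC address.
        return next(self._macs)
```

Each host caches whichever floating MAC it received, so successive hosts resolve the same gateway address to different routers.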

FC Load Balancing

As discussed in chapter 5, "The OSI Physical and Data Link Layers," FC supports the aggregation of multiple physical links into a single logical link (an FC port channel). Because all FC link aggregation schemes are currently proprietary, the load-balancing algorithms are also proprietary. In FC, the load-balancing algorithm is of crucial importance because it affects in-order frame delivery. Not all FC switch vendors support link aggregation. Each of the FC switch vendors that support link aggregation currently implements one or more load-balancing algorithms. Cisco Systems offers two algorithms. The default algorithm uses the source Fibre Channel Address Identifier (FCID), destination FCID, and Originator Exchange ID (OX_ID) to achieve load balancing at the granularity of an I/O operation. This algorithm ensures that all frames within a sequence and all sequences within an exchange are delivered in order across any distance. This algorithm also improves link utilization within each port channel. However, this algorithm does not guarantee that exchanges will be delivered in order. The second algorithm uses only the source FCID and destination FCID to ensure that all exchanges are delivered in order.
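The difference between the two algorithms is simply which fields feed the hash. The following sketch is illustrative only (the FCID and OX_ID values in the test are hypothetical, and the hash function is not Cisco's):

```python
import zlib

def fc_port_channel_link(src_fcid: int, dst_fcid: int,
                         ox_id=None, num_links: int = 2) -> int:
    """Pick a port-channel member link for an FC frame.

    When ox_id is supplied, hashing includes the Originator Exchange
    ID, so load is balanced at exchange (I/O operation) granularity:
    frames of one exchange stay in order, but different exchanges may
    take different links. When ox_id is omitted, only the source and
    destination FCIDs are hashed, so all exchanges between a given
    pair of ports follow one link and are delivered in order.
    """
    fields = [src_fcid, dst_fcid] + ([ox_id] if ox_id is not None else [])
    key = "|".join(f"{f:06x}" for f in fields).encode()
    return zlib.crc32(key) % num_links
```

This mirrors the trade-off in the text: including OX_ID improves link utilization, while excluding it guarantees in-order exchange delivery.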

As previously stated, load balancing via Fabric Shortest Path First (FSPF) is currently accomplished in a proprietary manner, so each FC switch vendor implements FSPF load balancing differently. FC switches produced by Cisco Systems support equal-cost load balancing across up to 16 paths simultaneously. Each path can be a single ISL or multiple ISLs aggregated into a logical ISL. When multiple equal-cost paths are available, FC switches produced by Cisco Systems can be configured to perform load balancing based on the source FCID and destination FCID, or on the source FCID, destination FCID, and OX_ID.

Similar to Ethernet, FC supports independent configuration of FSPF link costs in each Virtual Storage Area Network (VSAN). This enables FC-SAN administrators to optimize ISL bandwidth utilization. The same design principles that apply to Ethernet also apply to FC when using this technique.




Storage Networking Protocol Fundamentals (Vol 2)
ISBN: 1587051605
Year: 2007
Pages: 196
Authors: James Long
