Router/Link Load Balancing

With the drive for resilient networks, a major requirement (and, in some cases, a regulatory one) is to provide dual feeds into or out of a network. Taking this one step further, using two different service providers to supply these feeds adds extra protection in the event of a provider network failure and improves performance if one of the networks suffers routing delays or update issues. A multihomed feed leads organizations to reason that if two links are being paid for, why not use them both? From a financial perspective, this makes perfect sense, but from a network and traffic pattern perspective, it is a little more difficult to achieve.

The goal of using this form of redirection is to allow organizations to maximize their infrastructure and increase performance, which is the goal of any network administrator. Another aspect that appeals to companies is the ability to intercept and steer traffic to a different "next hop" router that is best suited for that particular type of traffic, or one that is closest to the actual data but might not participate in routing updates. Typically, most companies that have dual feeds (or even a single feed) from a service provider are not interested in exchanging routing updates and maintaining routing protocols with that provider. In addition, service providers are not keen on having customers run routing updates between themselves and other providers, as this can create loops and uncontrolled access between their networks. Being part of a BGP or OSPF area is not attractive, and most organizations will opt for a default route for outbound connections, leaving all the routing complexity to the service provider. However, with dual feeds, how can an organization ensure that traffic traverses the required links?

Certain content switch manufacturers have created specific devices to cater to this type of application; these typically learn routes by understanding the routing algorithms and can make intelligent decisions about which link to use. Most manufacturers, however, use built-in intelligence within the switches to provide link load balancing. There are basically two different types of link load balancing.

Router or Default Gateway Load Balancing

This method of configuration on the content switch allows companies to use both links from different suppliers regardless of the ingress path. Deploying this form of redirection is relatively simple and relies on the fact that the return path does not have to be stateful. Figure 8-12 demonstrates how router load balancing works.

Figure 8-12. Router load balancing.

graphics/08fig12.gif

The upstream routers are set up as two real servers and placed in a group. They are health checked just like any other real server, but because routers are unlikely to run an application service, ICMP health checking is typically used. However, HTTP can also be used, as most modern routers run an HTTP daemon to allow browser-based management. The group has a load-balancing metric associated with it. This metric is typically "round robin" and ensures that alternate packets are sent across each provider's network in turn. This provides excellent load balancing of traffic and effective use of the infrastructure. One problem is that, depending on latencies within the different networks the packets traverse, they might arrive out of sequence at the destination. This is bad for application performance and network optimization, because user retries and negative acknowledgments incur additional latency. In addition, certain service providers might run source route filtering and not allow spoofing of IP addresses. Therefore, understanding your network, its limitations, and the applications you are using is important in determining whether this type of load balancing can be deployed. In most cases, there is little or no problem in configuring this.
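The round-robin group described above can be sketched as follows. This is a minimal illustration, not a vendor configuration: the router names and the health_check() stub are hypothetical, and a real content switch would probe each gateway with ICMP (or HTTP against its management daemon) rather than being told the result.

```python
# Sketch of a round-robin upstream router group with health checking.
# Router names are illustrative; health results would normally come
# from ICMP or HTTP probes, not be set directly.

class RouterGroup:
    """Load-balances outbound packets across healthy upstream routers."""

    def __init__(self, routers):
        self.routers = list(routers)               # one real-server entry per router
        self.healthy = {r: True for r in self.routers}
        self._next = 0                             # round-robin position

    def health_check(self, router, up):
        """Record the result of a probe for one router."""
        self.healthy[router] = up

    def next_hop(self):
        """Return the next healthy router, round-robin style."""
        candidates = [r for r in self.routers if self.healthy[r]]
        if not candidates:
            raise RuntimeError("no healthy upstream routers")
        router = candidates[self._next % len(candidates)]
        self._next += 1
        return router

group = RouterGroup(["isp-a-router", "isp-b-router"])
print(group.next_hop())                 # isp-a-router
print(group.next_hop())                 # isp-b-router
group.health_check("isp-b-router", up=False)
print(group.next_hop())                 # isp-a-router (only healthy gateway left)
```

When both routers are healthy, alternate packets go out via alternate providers; when one fails its health check, all traffic shifts to the survivor automatically.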

Another approach is to configure the primary router interface as a real server, backed up by a secondary real server, which is the second router interface. This ensures that all packets traverse the primary router and that the secondary router is used only in the event of a failure. Whichever method is used, the key is that in the event of a failure, the content switch automatically detects it and routes all traffic through the active router, providing dynamic resilience.
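The primary/backup arrangement reduces to a simple selection rule, sketched below. The router names are illustrative; the health flags stand in for real health-check results.

```python
# Sketch of primary/backup gateway selection: traffic uses the primary
# router while its health check passes, and fails over to the backup
# only when it does not. Names are hypothetical.

def active_gateway(health, primary="router-a", backup="router-b"):
    """Return the single gateway all outbound traffic should use."""
    if health.get(primary):
        return primary
    if health.get(backup):
        return backup
    raise RuntimeError("no gateway available")

print(active_gateway({"router-a": True, "router-b": True}))    # router-a
print(active_gateway({"router-a": False, "router-b": True}))   # router-b
```

Unlike round robin, this avoids out-of-sequence delivery entirely, at the cost of leaving the second link idle during normal operation.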

To further aid this method, some content switch manufacturers allow multiple default gateways to be configured on the switch and load balanced. This automates the process and has the added advantage that, if desired, only one gateway is operational with the others in standby. To determine the state of the primary gateway, ARP health checking is often used, as this allows firewalls to be configured as default gateways yet still respond to health checks without raising a security alert.

ISP, WAN, or Link Load Balancing

With this form of load balancing, it is important to understand the end goal. With dual feeds and a single default route, it is difficult to force traffic back through the router on which it entered the network. Dynamic routing protocols assist with this, but with equal-cost routes a probability, a mechanism to ensure route stickiness or persistence is required. There are basically two ways to achieve this: at the MAC layer or at the Network layer.

By writing the source MAC address into the session table on the content switch, all returning packets that match that session can be redirected to the forwarding router. All that happens is that the destination MAC address on the response packet is substituted with the source MAC of the original packet. The source MAC address will be that of the upstream router and will therefore be forwarded back to the same device that sent it. Figure 8-13 demonstrates this.
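The MAC-based persistence above can be sketched as a session table keyed on the flow's addresses and ports. The MAC and IP values are illustrative (documentation addresses); a real content switch performs this lookup and rewrite in hardware per packet.

```python
# Sketch of MAC-layer persistence: remember which upstream router's MAC
# delivered each inbound session, then rewrite the response's destination
# MAC so it leaves via the same router. All values are illustrative.

sessions = {}   # (client_ip, client_port, server_ip, server_port) -> router MAC

def inbound(packet):
    """Record the upstream router's source MAC for this session."""
    key = (packet["src_ip"], packet["src_port"],
           packet["dst_ip"], packet["dst_port"])
    sessions[key] = packet["src_mac"]          # MAC of the forwarding ISP router

def outbound(packet):
    """Substitute the response's destination MAC with the stored router MAC."""
    # The response has source and destination swapped relative to the request.
    key = (packet["dst_ip"], packet["dst_port"],
           packet["src_ip"], packet["src_port"])
    packet["dst_mac"] = sessions[key]
    return packet

request = {"src_ip": "203.0.113.9", "src_port": 40000,
           "dst_ip": "192.0.2.10", "dst_port": 80,
           "src_mac": "00:aa:bb:cc:dd:01"}     # ISP A router's MAC
inbound(request)

response = {"src_ip": "192.0.2.10", "src_port": 80,
            "dst_ip": "203.0.113.9", "dst_port": 40000, "dst_mac": None}
print(outbound(response)["dst_mac"])           # 00:aa:bb:cc:dd:01
```

Because only the Layer 2 header is rewritten, the IP packet itself is untouched; persistence comes purely from handing the frame back to the router that delivered the request.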

Figure 8-13. MAC layer ISP, WAN, or link load balancing using MAC address substitution to achieve persistence.

graphics/08fig13.gif

It is imperative that the forwarding devices are the devices connected to the different networks; if not, there is no guarantee which path the packets will take. In addition, MAC layer persistence is not an option for sessions initiated on the internal network that use the routers outbound. It works for inbound sessions only.

For outbound sessions, Network layer information is used. The content switch acts as a proxy appliance: on a new outbound session, it writes an entry into the session table and performs NAT by substituting the source IP address of the outbound session with an address associated with the chosen service provider's network. This ensures that all returning packets are forwarded via that network, because the substituted address falls within the provider's range. When the return packet arrives back at the content switch, the reverse address substitution takes place, ensuring that packets are sent back through the routers from which they came. Figure 8-14 illustrates how this works.
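The source-NAT persistence can be sketched as follows. The provider addresses are illustrative (drawn from documentation prefixes); the point is that each link gets a source address inside that provider's block, so replies naturally route back over the same link.

```python
# Sketch of Network-layer (source NAT) link persistence: outbound sessions
# are given a source address belonging to the chosen provider's range, and
# the mapping is reversed when replies return. Addresses are illustrative.

nat_table = {}   # (provider_src_ip, src_port) -> original internal src_ip

PROVIDER_IP = {"isp-a": "198.51.100.1",    # address inside ISP A's range
               "isp-b": "203.0.113.1"}     # address inside ISP B's range

def nat_outbound(src_ip, src_port, link):
    """Substitute the internal source IP with the chosen link's address."""
    new_src = PROVIDER_IP[link]
    nat_table[(new_src, src_port)] = src_ip
    return new_src

def nat_inbound(dst_ip, dst_port):
    """Restore the internal address when the reply arrives back."""
    return nat_table[(dst_ip, dst_port)]

print(nat_outbound("10.0.0.5", 40000, "isp-b"))   # 203.0.113.1
print(nat_inbound("203.0.113.1", 40000))          # 10.0.0.5
```

A production switch would also allocate ports to avoid collisions between internal hosts sharing one provider address; that bookkeeping is omitted here for clarity.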

Figure 8-14. Link load balancing using NAT allows links with the least delays to be used per session.

graphics/08fig14.gif

To further maximize the dual feeds, companies may not always want persistence but may opt for the link with the least delay. This is achieved using advanced load-balancing metrics that test bandwidth usage or response times. To get the maximum benefit from this form of load balancing, it makes sense to ensure that the real server being health checked for link status and response time is a router at the far end of the network. This gives a true end-to-end measurement of the infrastructure.
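The least-delay metric reduces to picking the link whose far-end probe answered fastest, as sketched below. The link names and measured times are stand-ins for real probe results against a router at the other end of each provider's network.

```python
# Sketch of a least-delay link metric: probe a router at the far end of
# each provider's network and choose the link with the fastest response.
# The probe values below are hypothetical measurements in milliseconds.

def best_link(response_times_ms):
    """Pick the link whose end-to-end health check answered fastest."""
    return min(response_times_ms, key=response_times_ms.get)

probes = {"isp-a": 42.0, "isp-b": 17.5}   # hypothetical probe results
print(best_link(probes))                  # isp-b
```

Probing the far-end router, rather than the directly attached one, is what makes the measurement end-to-end: it captures delay across the provider's whole network, not just the first hop.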



Optimizing Network Performance with Content Switching: Server, Firewall and Cache Load Balancing
ISBN: 0131014684
Year: 2003
Pages: 85
