Using Load Balancing to Improve Performance

Sometimes the amount of work that must be performed exceeds the capabilities of any single device available to us. In this case, the only way to increase performance is to divide the work between multiple devices. By dividing the work in such a way that many devices can tackle it, we create a much more scalable and potentially reliable solution. These benefits come at the expense of added complexity and cost. When deciding to use load balancing, you will have to weigh the need for performance against the added costs in equipment, staff, and time needed to build and maintain the system.

Load balancers use various methods to direct requests to a pool of mirrored servers. One of the simplest methods to distribute the workload is DNS round-robin. This system works by having the DNS server provide the IP address of a different server from the pool every time a DNS request is made. Although simple, this solution does not work well for high-performance systems. This is due to many factors, but the largest is the problem of client networks caching the results of the first DNS query for the server. This causes all clients on a network to send their requests to a single server in the pool. If the number of users on the network is large, this can completely undermine DNS round-robin's ability to balance the traffic.
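The rotation, and the way caching defeats it, can be sketched in a few lines (the pool addresses and hostname are illustrative, not real hosts):

```python
from itertools import cycle

# Illustrative pool of mirrored servers (not real addresses).
SERVER_POOL = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
_rotation = cycle(SERVER_POOL)

def resolve(hostname):
    """Answer each DNS query with the next address in the pool."""
    return next(_rotation)

answers = [resolve("www.example.com") for _ in range(4)]
# The authoritative server rotates through .10, .11, .12, .10, ...
# A caching resolver on the client's network, however, would store the
# first answer and hand it to every local client, defeating the rotation.
```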

More sophisticated solutions such as F5's Big IP Controller and Cisco's Local Director rely on a dispatch controller to distribute the requests to the pool as they arrive. These products perform two important functions. First, when a dispatcher receives a packet that is the start of a new network conversation, it must deliver the packet to a server in the pool that has the capacity to handle the new request. Second, when a packet arrives at the dispatcher that is part of an ongoing conversation, the dispatcher must have the intelligence to deliver the packet to the server that has been previously assigned to handle the request. The sophistication used to make the first decision is the major difference between the various products on the market.

Load balancing can also be used to increase availability. If one of the devices in the group breaks down, the other systems can take up the load and continue operation. If you rely on this to maintain availability, keep in mind the performance your solution will lose when it loses a system: provision enough servers that the minimum number your load requires remains available even after a failure. Also, if you are truly concerned about redundancy, don't forget a redundant load balancer. Without one, a load balancer failure will bring down the entire system.
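The capacity planning behind that advice is simple arithmetic; a rough sketch, with made-up traffic and capacity figures:

```python
import math

def minimum_pool_size(peak_requests_per_sec, per_server_capacity,
                      tolerated_failures=1):
    """Servers needed to carry the peak load, plus spares so that
    losing `tolerated_failures` of them still leaves enough capacity."""
    needed = math.ceil(peak_requests_per_sec / per_server_capacity)
    return needed + tolerated_failures

# Example: a 900 req/s peak on servers rated at 250 req/s each needs
# 4 servers for the load, plus 1 spare to survive a single failure.
size = minimum_pool_size(900, 250)
```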

Problems with Load Balancing

Load balancing does not improve all situations. Certain types of problems are difficult to divide among various servers. If a problem cannot be divided, or if the work necessary to divide it would exceed the performance gained from distributing it, then load balancing will not help. Another problem with load balancing occurs if the handoff of a request from one server to another requires the second server to perform expensive setup operations. SSL is a classic example of this. As we discussed earlier, SSL handshakes are so expensive that SSL servers cache session details so that new sessions do not have to go through the entire handshake. With some load-balancing systems, there is no guarantee that a returning client will be redirected back to the server that handled the previous request. This can actually result in performance lower than before the load-balancing solution was deployed, because almost every request will be forced to make a full SSL handshake. More sophisticated load-balancing devices include the ability to redirect SSL session requests back to the original server. This is accomplished by tracking the session ID in the header of the packet and sending successive packets from a client (with the same session ID) to the same back-end server. In addition, some products support the ability to off-load all SSL functions, freeing up significant processing overhead on the web servers.
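That session-ID affinity can be sketched as a lookup table (the dispatcher class and server names here are hypothetical, not a vendor's API):

```python
import random

class SslAffinityDispatcher:
    """Pin each SSL session ID to one back-end server so that resumed
    sessions reuse that server's cached handshake state."""

    def __init__(self, servers):
        self.servers = servers
        self.affinity = {}  # SSL session ID -> assigned server

    def route(self, session_id):
        if session_id not in self.affinity:
            # First sight of this session: pick a server (randomly,
            # for brevity; a real device would consult server load).
            self.affinity[session_id] = random.choice(self.servers)
        return self.affinity[session_id]
```

Every packet carrying a known session ID lands on the server that already holds the cached session, so the full handshake happens only once per client.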

Layer 4 Dispatchers

Two major types of dispatcher products are on the market: Layer 4 dispatchers, such as the previously mentioned F5 Big IP Controller, and Layer 7 dispatchers, such as Radware's Web Server Director. The layer numbers are taken from the Open System Interconnection (OSI) reference model.

Layer 4 dispatchers make delivery decisions based on information contained within the Layer 4 (transport) header and Layer 3 (network) header of the TCP/IP packet. This information includes the source and destination IP addresses, source and destination protocol addresses (ports), and other session information, such as whether the packet is the start of a session or a continuation. Because the different pieces of information in the header of the packets are always in the same locations within the packets, Layer 4 dispatchers do not have to perform much work to locate the information on which they will make their delivery decision. This enables fast decisions and fast switching.
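To illustrate why those fixed offsets make Layer 4 dispatching cheap, here is a sketch that pulls the decision fields out of a minimal IPv4/TCP packet (assuming no IP options, so the TCP header starts at byte 20):

```python
import struct

def parse_l4_key(packet):
    """Extract the fields a Layer 4 dispatcher decides on, all at
    fixed offsets in a minimal IPv4/TCP packet (no IP options)."""
    src_ip, dst_ip = struct.unpack_from("!4s4s", packet, 12)
    src_port, dst_port = struct.unpack_from("!HH", packet, 20)
    is_syn = bool(packet[33] & 0x02)  # SYN flag marks a new session
    return src_ip, src_port, dst_ip, dst_port, is_syn
```

Because every field sits at a known offset, the dispatcher performs a handful of constant-time reads per packet; no parsing or searching is required.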

When a packet arrives at a Layer 4 dispatcher, the dispatcher determines whether the packet is the start of a new session or the continuation of a previously started session. If it is a new session, the dispatcher chooses a server to handle the new connection and forwards the packet to it. The way this decision is made varies depending on the load-sharing algorithms the dispatcher supports. Common algorithms include round-robin, weighted round-robin, least connections, least load, and fastest response. If the packet is a continuation of a session, the dispatcher looks up the connection details and forwards the packet on to the server handling the session.
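The decision flow above can be sketched as follows (hypothetical class and server names; least connections is used as the example algorithm):

```python
class Layer4Dispatcher:
    """Route new sessions by least connections; route continuations
    to the server already handling that flow."""

    def __init__(self, servers):
        self.open_connections = {s: 0 for s in servers}
        self.sessions = {}  # flow 4-tuple -> assigned server

    def dispatch(self, flow, is_new_session):
        if is_new_session:
            # Least connections: choose the least-loaded server.
            server = min(self.open_connections,
                         key=self.open_connections.get)
            self.sessions[flow] = server
            self.open_connections[server] += 1
        return self.sessions[flow]
```

Swapping in round-robin, weighted round-robin, or a response-time metric changes only the selection line; the session table that keeps ongoing conversations on one server stays the same.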

Layer 7 Dispatchers

Layer 7 dispatchers look above the transport layer into the application data (OSI Layer 7) to make their delivery decisions. This allows them to make more intelligent decisions when delivering packets. One major advantage of Layer 7 dispatching for web servers is that different servers in the pool can serve different types of content. Layer 4 dispatchers are unable to make decisions based on content, which means that when a Layer 4 dispatcher is used, all servers in the pool must have identical content. This is not a major issue if your site is fairly static, but it becomes one if your content changes frequently. Keeping all your servers up to date can be a major undertaking, requiring significant network, storage, and computational resources. A shared file system eliminates the synchronization problem but introduces significant load of its own: the servers must fetch a copy of the requested information from the common file server before the page can be returned to the client.

Content-based (Layer 7) dispatching provides an alternative to full replication or common file systems by making use of information that is contained within the HTTP request to route the packet. The dispatcher can look inside the web request to determine what URL has been requested and then use that information to choose the appropriate server.
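As a sketch of content-based routing (the URL prefixes and server names are invented for illustration), the dispatcher needs only the request line of the HTTP message:

```python
# Hypothetical mapping of URL prefixes to specialized server pools.
CONTENT_ROUTES = [
    ("/images/", "image-server"),
    ("/cgi-bin/", "app-server"),
]
DEFAULT_SERVER = "static-server"

def route_request(raw_request):
    """Pick a back-end server from the URL in the HTTP request line."""
    method, url, version = raw_request.split("\r\n", 1)[0].split(" ")
    for prefix, server in CONTENT_ROUTES:
        if url.startswith(prefix):
            return server
    return DEFAULT_SERVER
```

With routing like this, image servers need only images and application servers only scripts, so each pool member can hold a fraction of the site's content.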

The cost of this ability is a significant increase in the complexity and resources required for each delivery decision. Because the application data is largely free-form, not structured into the rigid fields of a packet header, a substantially more expensive search must be conducted within it to locate the information on which the delivery decision will be made. Because of this, high-performance Layer 7 dispatchers tend to be more expensive than Layer 4 solutions of similar performance.

    Inside Network Perimeter Security (2nd Edition)
    ISBN: 0672327376
    Year: 2005
    Pages: 230