Global and Local Traffic Management


Global and local load balancers figure prominently in many production environments. Their proper use is the subject of this section. We start with a quick look at some basic configurations that show how load balancers are used in production environments. Next, we discuss using local load balancers with WebLogic Server. We end by discussing the use of global load balancers to load balance and fail over between sites.

Using Load Balancers

As global enterprises have continued to open their systems to new customer channels, system designers have been forced to deal with unpredictable user demands while retaining high performance and high availability. These requirements have driven designers to the use of global and local traffic management devices, or load balancers, to better manage wide area network (WAN) and local area network (LAN) traffic.

Figure 14.7 illustrates a simple example of local traffic management using a set of redundant local load balancers to manage traffic to a cluster of servers.

Figure 14.7:  Local traffic management using local load balancers.

Figure 14.8 extends this example to global traffic management by adding a global load balancer in front of two identical configurations of servers and local load balancers. Similar configurations were utilized in many of the design strategies discussed in the previous section.

Figure 14.8:  Global traffic management using global load balancers.

Many vendors offer these local and global traffic-management devices, including F5, Cisco, Rad Data Communications, and Nortel. Although they are commonly called load balancers, most of these devices offer features such as content switching, traffic management, and SSL acceleration in addition to load balancing. You should choose a product that provides at least the following features:

Intercept.     The device must be able to intercept the incoming traffic.

Inspect.     Once traffic is intercepted, it must be inspected to determine its type and how it should be handled. Inspection is performed at different network layers depending on the requirements of the system. Simple inspection is performed at Layer 4, one of the seven layers in the ISO Open Systems Interconnection (OSI) Reference Model, and involves only IP and port information. For many applications this type of inspection is sufficient to route or transform the message properly. More demanding systems may require inspection of the HTTP headers or even the packet payloads to handle the traffic properly (a simplified sketch of these steps follows the Direct item below).

Transform.     The load balancer may be required to transform the traffic in some manner, the simplest example being a change to the destination IP address and port. Advanced transformations can involve re-encryption of traffic, rewriting URL values, or even inserting cookies into HTTP headers.

Direct.    The final step involves the actual directing of the traffic to the appropriate resources.
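To make these four steps concrete, the following Java sketch shows a greatly simplified inspect-and-direct decision for a Layer 7 content switch. It is only an illustration, not the behavior of any particular product; the backend pool addresses, port check, and path-based rule are assumptions made for the example.

    // Illustrative sketch: a simplified "inspect, transform, and direct" decision
    // for a Layer 7 content switch; not the implementation of any real device.
    public class SimpleContentSwitch {

        // Hypothetical backend pools, keyed by the kind of content they serve.
        private static final String[] WEB_POOL = { "10.0.1.10:7001", "10.0.1.11:7001" };
        private static final String[] IMAGE_POOL = { "10.0.2.10:8080" };

        // Inspect the intercepted request and return the backend it should be directed to.
        public static String selectBackend(int destinationPort, String path) {
            // Layer 4 inspection: IP and port information only.
            if (destinationPort != 80 && destinationPort != 443) {
                throw new IllegalArgumentException("unsupported port: " + destinationPort);
            }

            // Layer 7 inspection: examine the URL to switch on content type.
            String[] pool = path.startsWith("/images/") ? IMAGE_POOL : WEB_POOL;

            // Transform and direct: pick a pool member; a real device would also rewrite
            // the destination IP address and port before forwarding the traffic.
            int index = Math.floorMod(path.hashCode(), pool.length);
            return pool[index];
        }

        public static void main(String[] args) {
            System.out.println(selectBackend(80, "/app/checkout"));
            System.out.println(selectBackend(443, "/images/logo.gif"));
        }
    }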

While performing all of these tasks, load balancers must also support multiple IP-based protocols, handle high levels of traffic, and perform very quickly with little overhead. Most load balancers support multiple distribution algorithms, such as round-robin, geography, round-trip time, random, ratio, least connections, application availability, and user-defined quality-of-service (QoS). The simpler algorithms often produce better results. The most commonly used algorithms are round-robin and least connections for local area networks, and user-defined QoS, geography, and application availability for wide area networks and disaster recovery.
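As an illustration of the two algorithms most often used on local networks, the following Java sketch implements simple round-robin and least-connections selection. The server addresses are placeholders, and a real device performs this selection in optimized hardware or firmware.

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.atomic.AtomicIntegerArray;

    // Illustrative sketch of two common distribution algorithms.
    public class DistributionAlgorithms {

        private final List<String> servers;
        private final AtomicInteger nextIndex = new AtomicInteger();
        private final AtomicIntegerArray activeConnections;

        public DistributionAlgorithms(List<String> servers) {
            this.servers = servers;
            this.activeConnections = new AtomicIntegerArray(servers.size());
        }

        // Round-robin: hand out servers in a fixed rotation.
        public String roundRobin() {
            int i = Math.floorMod(nextIndex.getAndIncrement(), servers.size());
            return servers.get(i);
        }

        // Least connections: pick the server currently handling the fewest requests.
        public String leastConnections() {
            int best = 0;
            for (int i = 1; i < servers.size(); i++) {
                if (activeConnections.get(i) < activeConnections.get(best)) {
                    best = i;
                }
            }
            activeConnections.incrementAndGet(best);  // caller decrements when the request completes
            return servers.get(best);
        }

        public static void main(String[] args) {
            DistributionAlgorithms lb = new DistributionAlgorithms(
                Arrays.asList("serverA:7001", "serverB:7001", "serverC:7001"));
            System.out.println(lb.roundRobin());
            System.out.println(lb.roundRobin());
            System.out.println(lb.leastConnections());
        }
    }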

Using Local Load Balancers with WebLogic Server

Load balancers can be used to manage traffic to both clustered and nonclustered WebLogic Server instances. Any of the load-balancing algorithms can be used with these configurations, although there are limitations associated with certain protocols, SSL support, and stateful HttpSession data.

When using a hardware load balancer with HTTP requests, the load balancer sits in front of the Web application and is used to distribute the load across the members of the cluster and provide failover capability. The load balancer presents one IP address to all clients and then distributes the load to the available WebLogic Server instances in the cluster.

Load balancers are also used to provide session affinity, routing user requests to the WebLogic Server instance containing the primary copy of that user's session data, a technique known as sticky sessions. Once a user establishes a session on a primary server, that user will be pinned to the same WebLogic Server instance for the entire session. As described earlier in the chapter, a failure of the server hosting the primary copy of the user's session data will be handled transparently by WebLogic Server using the secondary copy of the session data replicated to another server in the cluster.

If you are using HttpSession data with a WebLogic Server cluster, you must use a load balancer that supports a compatible passive or active persistence mechanism, unless you happen to be using JDBC-based session persistence. The proper configuration for this hardware load balancer depends on the type of persistence you choose:

  • Passive cookie persistence refers to the ability of WebLogic Server to write a cookie containing session information through the load balancer to the client. The hardware load balancer must be configured to inspect the HTTP header and read the WebLogic Server cookie so that it can route the request to the correct server instance (see the sketch following this list).

  • Active cookie persistence exists when the load balancer either creates its own session cookie or overwrites the existing session cookie. The load balancer then examines this cookie to route the request to the proper server instance during subsequent requests. Although active cookie persistence is generally compatible with WebLogic Server, a cluster will work properly only with load balancers that do not modify the WebLogic Server session cookie.
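The sketch below illustrates how a load balancer supporting passive cookie persistence might use the WebLogic Server session cookie for routing. It assumes the cookie value takes the form sessionId!primaryJvmId!secondaryJvmId, and the JVM-id-to-address mapping is a hypothetical stand-in for information the device would actually hold or learn through configuration.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative sketch of passive cookie persistence routing; the identifiers
    // and addresses below are hypothetical values used only for the example.
    public class PassiveCookieRouter {

        // Hypothetical mapping of WebLogic Server JVM identifiers to server addresses.
        private final Map<String, String> jvmIdToAddress = new ConcurrentHashMap<String, String>();

        public void register(String jvmId, String address) {
            jvmIdToAddress.put(jvmId, address);
        }

        // Inspect the cookie and return the address of the primary server, falling
        // back to the secondary and finally to a default when no match is found.
        public String route(String sessionCookieValue, String defaultAddress) {
            if (sessionCookieValue != null) {
                String[] parts = sessionCookieValue.split("!");
                for (int i = 1; i < parts.length; i++) {   // parts[0] is the session id itself
                    String address = jvmIdToAddress.get(parts[i]);
                    if (address != null) {
                        return address;                    // sticky routing to primary (or secondary)
                    }
                }
            }
            return defaultAddress;                         // no affinity information available
        }

        public static void main(String[] args) {
            PassiveCookieRouter router = new PassiveCookieRouter();
            router.register("1856xxxx", "10.0.1.10:7001"); // hypothetical JVM id and address
            System.out.println(router.route("A5dKf29!1856xxxx!2093xxxx", "10.0.1.99:7001"));
        }
    }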

Hardware load balancers can also be used in front of a group of managed servers that are not clustered and do not replicate HttpSession data. Traffic will be distributed to the WebLogic Server instances according to the load-balancing algorithm, and the load balancer will again provide the sticky session capability. Should a server instance fail, subsequent requests will be routed to another available managed server. Unless your applications use JDBC-based session persistence, however, any session data will be lost; the user will have to authenticate again and will lose any state the server was maintaining.

Using Global Load Balancers with WebLogic Server

Unlike local load balancers used for distributing traffic among multiple servers, global load balancers are used to distribute traffic among different sites. Global load balancers can be used with or without clustering software and are often used in conjunction with local load balancers to eliminate single points of failure and route traffic away from poorly performing sites. Global load balancers are also vital for disaster recovery; most products provide policies to ensure that all traffic will be sent to a primary site unless that site is suffering an outage. During an outage, traffic can be manually or automatically routed to a secondary site.

Most global load balancers work by becoming the authoritative DNS server, which means that when a client requests a URL, the query returns the IP address of the global load balancer itself rather than the address of a local load balancer or server. When a client contacts that IP address, the global load balancer then provides the client with the IP address of the data center best suited to serve the request. Global load balancers usually sit outside the LAN and intercept requests before they hit the firewalls at the sites themselves, although configuration options exist for balancing in firewalls as well.

Most modern global load balancers provide numerous configuration options. For example, the 3-DNS Controller from F5 Networks provides both static and dynamic load-balancing policies with various options. In the static mode, connections are distributed according to predefined rules, such as global availability, which chooses the server based on the order defined by the administrator, and static persist, which ensures that transactions requiring persistence are always routed to the same server or data center. Round-robin and return DNS policies behave like a normal DNS server, whereas random and ratio modes can be used to do weight-based load balancing. The load balancer also collects various performance metrics that can be used to define dynamic load-balancing policies.
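The following Java sketch illustrates the idea behind two of these static policies, global availability and ratio. It is not F5's implementation; the site names, weights, and the isAvailable() health check are assumptions made purely for the example.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative sketch of two static global load-balancing policies.
    public class GlobalPolicySketch {

        // Sites in administrator-defined preference order, with a weight for ratio mode.
        private final Map<String, Integer> siteWeights = new LinkedHashMap<String, Integer>();

        public GlobalPolicySketch() {
            siteWeights.put("primary-datacenter.example.com", 3);   // hypothetical sites and weights
            siteWeights.put("dr-datacenter.example.com", 1);
        }

        // Global availability: always answer with the first site that is up.
        public String globalAvailability() {
            for (String site : siteWeights.keySet()) {
                if (isAvailable(site)) {
                    return site;
                }
            }
            throw new IllegalStateException("no site available");
        }

        // Ratio: distribute answers in proportion to the configured weights.
        public String ratio(long requestNumber) {
            long total = 0;
            for (int weight : siteWeights.values()) {
                total += weight;
            }
            long slot = requestNumber % total;
            for (Map.Entry<String, Integer> entry : siteWeights.entrySet()) {
                if (slot < entry.getValue()) {
                    return entry.getKey();
                }
                slot -= entry.getValue();
            }
            return siteWeights.keySet().iterator().next();  // not reached when total > 0
        }

        // Placeholder health check; a real global load balancer probes the site's
        // local load balancers or servers to determine availability.
        private boolean isAvailable(String site) {
            return true;
        }

        public static void main(String[] args) {
            GlobalPolicySketch gslb = new GlobalPolicySketch();
            System.out.println(gslb.globalAvailability());
            for (long i = 0; i < 4; i++) {
                System.out.println(gslb.ratio(i));
            }
        }
    }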



