Component Load Balancing Scenarios

Using CLB on a back-end cluster in combination with various front-end cluster configurations provides numerous opportunities to develop multi-tier clusters to host your various applications. There are three primary CLB models that Application Center supports:

  • Two-tier with full load balancing—A front-end Web cluster passes requests to a back-end CLB cluster.
  • Three-tier with full load balancing—A front-end Web cluster passes requests to a load-balanced middle tier that routes requests to a back-end CLB cluster.
  • Three-tier with fail over—A front-end Web cluster passes requests to a two-member middle tier (one member acts as a backup, but doesn't serve requests) that routes requests to a back-end CLB cluster.

When should you use a multi-tier load balancing topology?


Although Application Center supports multi-tier scenarios, you should not implement these scenarios simply as de facto models for distributing application components. There are good reasons for distributing applications across tiers, as well as keeping everything on a single tier. You have to fully analyze your technical and business requirements before making the split/no-split decision. There is no automatic, or right, answer to this question.

The main reasons for setting up separate Web and COM+ application server tiers include:

  • Security—An additional layer of firewalls can be placed between one tier and the other.
  • Administrative partitioning—Different groups of developers and administrators are responsible for the HTML/ASP and COM+ applications. Putting the two applications on different tiers keeps each group's responsibilities, and deployments, cleanly separated.
  • Sharing a single COM+ application cluster among multiple Web clusters.
  • In some scenarios (for example, a low-throughput environment where each request is very expensive), sending multiple costly COM+ requests from a single Web request to multiple COM+ servers that are using a COM+ application cluster and CLB will improve response time—but not throughput.

NOTE


In a high-throughput environment, this benefit is muted, because load will be balanced evenly around the cluster whether it is multi-tier or not.
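The response-time point above can be illustrated with a small simulation. This is a sketch only (Python stands in for out-of-process COM+ calls, and the server names and timings are hypothetical): fanning three costly calls out in parallel shortens the wall-clock time of one Web request, but the cluster still performs three calls' worth of work, so aggregate throughput is unchanged.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def costly_component_call(server, seconds=0.05):
    """Stand-in for one expensive COM+ method call handled by a server."""
    time.sleep(seconds)
    return server

servers = ["COMPLUS-A", "COMPLUS-B", "COMPLUS-C"]  # hypothetical members

# Serial: one Web request issues three costly calls, one after another.
start = time.perf_counter()
for _ in servers:
    costly_component_call("LOCAL")
serial = time.perf_counter() - start

# Parallel: the same three calls fan out across a CLB cluster.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(servers)) as pool:
    list(pool.map(costly_component_call, servers))
parallel = time.perf_counter() - start

# Per-request latency improves, but three calls still consume three
# servers' worth of work, so cluster-wide throughput is unchanged.
print(parallel < serial)
```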

The main reasons for choosing not to set up separate Web and COM+ application server tiers include:

  • Performance—Remote access is more expensive than running locally and overall performance will degrade if a single front-end cluster is split into two clusters without adding more hardware.
  • Administrative complexity—Managing two clusters is more complex than managing one.
  • It is difficult to make full use of the hardware—You must carefully balance hardware between the Web cluster and COM+ application cluster. Adding capacity becomes more complex, and it's likely that one tier or the other will end up with less capacity. This causes a bottleneck that requires more hardware to maintain optimal headroom; in addition, more monitoring is necessary to balance hardware utilization.
  • Dependency maintenance—Whenever a member is added to the COM+ application cluster, the cluster on the front end must be updated with the new membership list of the back-end component members.

NOTE


You can use a routing cluster instead, but this approach has its own drawbacks: you have an additional tier to manage, with its inherent monitoring and throughput problems.



Two-Tier with Full Load Balancing

In the two-tier model shown in Figure 5.16, an Application Center Web cluster that is using NLB on the front end acts as a component routing cluster to component servers on the back-end cluster, which uses CLB. Out-of-process COM+ object calls are made to the CLB cluster members. The component server routing list and server response-time table reside on each front-end cluster member.

The following table (Table 5.7) summarizes the cluster settings for the two clusters in Figure 5.16.

Table 5.7 Key Cluster Settings for a Two-Tier Cluster

Setting                                | Cluster on front tier | Cluster on back tier
NLB¹                                   | Yes                   | No
Is a router                            | Yes                   | No
Component installed                    | COM object            | COM object
COM proxy remote server name (RSN)     | n/a                   | n/a
Component is marked for load balancing | Yes                   | No

1. You can achieve this with third-party load balancing as well.


Figure 5.16 Two-tier cluster model with NLB and CLB clusters

In this two-tier scenario, the front-end cluster uses NLB to distribute incoming client HTTP requests. The appropriate COM+ objects and applications on the front-end members are configured to support load balancing, and the cluster is set up as a router—a role you establish by selecting General/Web cluster as the cluster type when you create the cluster with the New Cluster Wizard. This routing cluster can handle both HTTP and component requests. (The back-end cluster of COM+ servers shown in Figure 5.16 is the COM+ application cluster, which is also created with the New Cluster Wizard, with COM+ application cluster selected as the cluster type.)
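The component server routing list and response-time table that reside on each front-end member can be pictured with a simplified sketch. This is an illustration only, not Microsoft's actual CLB algorithm (which refreshes its measurements internally); the class, method names, and server names below are hypothetical.

```python
class ComponentRouter:
    """Toy model of one front-end member's CLB routing state.

    Each front-end member keeps its own copy of the component server
    routing list and a response-time table, and prefers the back-end
    member that has been answering fastest.
    """

    def __init__(self, routing_list):
        self.routing_list = list(routing_list)
        # Response times (ms); refreshed periodically in the real product.
        self.response_times = {server: None for server in routing_list}

    def record_response_time(self, server, milliseconds):
        self.response_times[server] = milliseconds

    def pick_server(self):
        # Route the next activation to the fastest-responding member;
        # fall back to list order for servers not yet measured.
        measured = [s for s in self.routing_list
                    if self.response_times[s] is not None]
        if not measured:
            return self.routing_list[0]
        return min(measured, key=lambda s: self.response_times[s])

router = ComponentRouter(["CLB-1", "CLB-2", "CLB-3"])
router.record_response_time("CLB-1", 42)
router.record_response_time("CLB-2", 8)
router.record_response_time("CLB-3", 110)
print(router.pick_server())
```

Because every front-end member holds its own copy of this state, no single routing box becomes a point of failure in the two-tier model.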

Three-Tier with Full Load Balancing

This next model is virtually identical to the fail-over model discussed in the next section. The notable difference is that full load balancing is enabled across the middle tier of routing servers. Once again, HTTP traffic is handled for the most part by the front end, and the middle cluster is dedicated to handling component requests from the front-end members and from Win32 clients.

Table 5.8 on the following page summarizes the cluster settings for the three clusters in Figure 5.17.

Table 5.8 Key Cluster Settings for a Three-Tier Cluster with Full Load Balancing

Setting                                | Cluster on front tier | Cluster on middle tier | Cluster on back tier
NLB                                    | Yes                   | Yes                    | No
Is a router                            | Yes                   | Yes                    | No
Component installed                    | COM proxy             | COM object             | COM object
COM proxy RSN                          | Middle cluster IP     | n/a                    | n/a
Component is marked for load balancing | No                    | Yes                    | No


Figure 5.17 A three-tier cluster with load balancing across all three tiers

Using CLB with rich clients


If you configure the clients that are using the Win32 API to use the IP address or Web cluster name, which you have to register with a name service, NLB will load balance the instantiation requests across the Web/CLB routing cluster. The selected routing member will then dynamically re-route the instantiation request to one of the COM+ application servers by using the CLB dynamic load-balancing algorithm. The following actions occur during this process:
  1. The client process running the Win32 API issues a CoCreateInstance (CCI) call.
  2. OLE and the Service Control Manager (SCM) on the client running the Win32 API find the proxy and forward the CCI over a TCP connection to the Web/CLB routing cluster address.
  3. NLB on the Web/CLB routing cluster routes the TCP connection to one of the Web/CLB routing members based on the NLB load-balancing algorithm.
  4. The NLB-designated Web/CLB routing member accepts the connection and hands the CCI to its SCM.
  5. The SCM on the Web/CLB routing member determines that the request is for an object instantiation of a component marked as supporting dynamic load balancing and selects a COM+ application server based on the CLB load-balancing algorithm.
  6. The SCM on the Web/CLB routing member resends the CCI with the client address of the client running the Win32 API over a TCP connection to the selected COM+ application server.
  7. The COM+ application server accepts the TCP connection and hands the CCI to its SCM.
  8. The COM+ application server instantiates the requested component and returns its response directly to the original client running the Win32 API.
  9. All subsequent method, addref, and release requests are made over direct TCP connections between the client running the Win32 API and the COM+ application server on the CLB tier.
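The nine steps above can be condensed into a toy model. This is a sketch under simplifying assumptions (Python in place of DCOM; the NLB hash and the CLB server choice are crude stand-ins for the real algorithms; all names are hypothetical). The property it demonstrates is the key one: only the activation passes through the routing tier, and every subsequent call goes directly to the chosen COM+ application server.

```python
class AppServer:
    def __init__(self, name):
        self.name = name
        self.calls = 0

    def activate(self):
        # Step 8: instantiate the component and hand back a direct reference.
        return self

    def method_call(self):
        # Step 9: later calls arrive over a direct connection.
        self.calls += 1
        return self.name

class RoutingMember:
    def __init__(self, app_servers):
        self.app_servers = app_servers

    def handle_cci(self):
        # Steps 5-7 (simplified): pick a COM+ server and forward
        # the CoCreateInstance request to it.
        return self.app_servers[0].activate()

def nlb_route(routers, client_id):
    # Step 3 (simplified): NLB maps the client's connection to a member.
    return routers[hash(client_id) % len(routers)]

app_servers = [AppServer("COMPLUS-A"), AppServer("COMPLUS-B")]
routers = [RoutingMember(app_servers), RoutingMember(app_servers)]

proxy = nlb_route(routers, "win32-client-1").handle_cci()  # steps 1-8
proxy.method_call()  # step 9: bypasses the routing tier entirely
print(proxy.name)
```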

The trick in getting this to work is that your Win32 client proxy must be configured to make a call to the Web/CLB routing cluster (by name or IP address), rather than the name or dedicated IP address of the exporting server. There are two ways to accomplish this:

  • Open Component Services. Right-click My Computer, and then on the pop-up menu, click Properties. Click the Options tab, and then under Export, set Application proxy RSN to the cluster name or cluster IP address. Then, export the application as an application proxy. The resulting Windows Installer package can be installed on any client running the Win32 API and will automatically point the proxy at the cluster.
  • Set up a stand-alone server (or take the server off the network), install the COM+ application, and configure the server with the cluster's name and virtual IP address. Next, export the application proxy from this server, and then reconfigure the server's IP address and name to legal values. Finally, add the server to the network.

As you can see from the preceding example cluster topologies, using CLB in conjunction with a load-balanced Web cluster provides a high degree of flexibility. The way in which you can combine these load-balancing technologies will be determined by the particular applications that you want to host.

Chapter 8, "Creating Clusters and Deploying Applications," steps through the creation of a multi-tier cluster that employs NLB and CLB. Chapter 8 also describes how COM objects are installed correctly on the Web cluster and CLB application cluster and enabled for CLB.

Three-Tier with Fail Over

In the three-tier model shown in Figure 5.18, which is a variation on the three-tier model shown in Figure 5.17, a front-end cluster of Web servers passes component requests to a load-balanced middle tier that's set up as a COM+ routing cluster. In this model, only the front end handles HTTP requests; the middle tier handles only component requests.

NOTE


This model mimics a traditional fail-over scenario, in which the fail-over state must provide at least as much throughput as the normal state; it assumes the backup member is equivalent in power to the cluster controller. If you do not have such a requirement, you should use the three-tier model outlined in the preceding section, because it makes better use of the routing tier's processing capacity.

The middle tier consists of two members, but because the controller is the only member online for load balancing, it receives all the incoming requests. The second member acts as a standby member that can take over as the cluster controller if the current controller fails. The member is in the synchronization loop so it has all the configuration settings, such as the component server routing list, necessary for it to step into the current controller's role.
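The controller/standby arrangement can be sketched as follows. This is a toy model of the synchronization loop (Python for illustration; class and member names are hypothetical): both members receive configuration changes such as the component server routing list, but only the controller is online for load balancing, so the standby can step into the controller's role with identical state.

```python
class RoutingTierMember:
    def __init__(self, name):
        self.name = name
        self.online = False
        self.routing_list = []   # kept current by the synchronization loop
        self.healthy = True

    def handle_request(self):
        return (self.name, list(self.routing_list))

class FailoverTier:
    """Middle tier: the controller serves; the standby stays synchronized."""

    def __init__(self, controller, standby):
        self.controller = controller
        self.standby = standby
        # Only the controller is online for load balancing.
        controller.online = True

    def synchronize(self, routing_list):
        # Both members receive configuration, so either can route.
        for member in (self.controller, self.standby):
            member.routing_list = list(routing_list)

    def route(self):
        if not self.controller.healthy:
            # Fail over: the standby steps into the controller role.
            self.controller, self.standby = self.standby, self.controller
            self.controller.online = True
        return self.controller.handle_request()

tier = FailoverTier(RoutingTierMember("MID-1"), RoutingTierMember("MID-2"))
tier.synchronize(["CLB-1", "CLB-2"])
print(tier.route())          # the controller serves all requests
tier.controller.healthy = False
print(tier.route())          # the standby takes over, same routing list
```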

This model also supports access to the middle tier by clients running the Win32 API. COM proxy requests are sent to the cluster controller in the middle tier for processing.

The following table (Table 5.9) summarizes the cluster settings for the three clusters in Figure 5.18.

Table 5.9 Key Cluster Settings for a Three-Tier Cluster with Fail Over

Setting                                | Cluster on front tier | Cluster on middle tier | Cluster on back tier
NLB                                    | Yes                   | Yes                    | No
Is a router                            | Yes                   | Yes                    | No
Component installed                    | COM proxy             | COM object             | COM object
COM proxy RSN                          | Middle cluster IP     | n/a                    | n/a
Component is marked for load balancing | No                    | Yes                    | No


Figure 5.18 A three-tier cluster with the middle tier used for fail over

The main difference between this three-tier scenario and the two-tier scenario shown in Figure 5.16 is that COM activation calls are proxied to the member in the second tier.

NOTE


Although any COM object that is properly installed and marked for load balancing will be load-balanced in a CLB cluster, COM proxies are the exception—they are not load-balanced.



Microsoft Application Center 2000 Resource Kit 2001