Component Load Balancing

Component Load Balancing (CLB) is a feature that provides dynamic load balancing for COM+ application components. To use component load balancing, an Application Center COM+ application cluster must instantiate components in response to requests from an Application Center Web cluster, a COM+ routing cluster, or clients calling in directly through the Win32 API. Functionally, an Application Center Web cluster and a COM+ routing cluster are the same; both support CLB and can route requests to a COM+ application cluster.

NOTE


Only COM+ object activation (instantiation) is load balanced by CLB; queued components cannot be load balanced.

By providing a mechanism for segregating the component tier and load balancing COM+ component requests, this feature:

  • Lets you scale out the component layer independently of the Web tier.
  • Allows you to distribute the workload across tiers.

    NOTE


    Typically, your performance gains are greater when you scale out your Web tier than when you distribute applications across a component server tier.

  • Enables you to manage the business logic tier independently of the front-end Web tier or back-end database tier.
  • Provides an additional layer for securing applications.

    NOTE


    As is the case with NLB, there is no single point of failure because each member in a COM+ routing cluster or Web cluster functions as a router.

CLB uses a hybrid of adaptive load balancing and round-robin processing to distribute client requests across a cluster. Let's examine the different elements of CLB and their role in load balancing COM+ component requests.

CLB Architecture

The key elements of the CLB architecture are as follows:

  • The component server routing list (on the Web cluster or COM+ routing cluster)
  • The COM+ CLB service (on the Web cluster or COM+ routing cluster)
  • The CLB Tracker and Tracker objects (on the COM+ application cluster)
  • The CLB Activator

The Component Server Routing List

After you've created a Web cluster or COM+ routing cluster (the front-end members), you have to explicitly enumerate the members on the COM+ application cluster (the back-end members) that you want to handle component requests. This option, shown in Figure 5.11, is applied to the entire cluster—you cannot create a separate routing list for individual members.

TIP


You can discover the members of a COM+ application cluster by running the following commands from the command line:
  • On the routing cluster's controller, type: ac clb /listmembers
  • On the COM+ application cluster controller, type: ac cluster /listmembers

Figure 5.11 RKWebCluster Properties dialog box with a Component servers routing list

The list of component servers on the back end is stored in the metabase and the registry; the CLB service references this list to determine which members of the COM+ application cluster have to be polled.

The COM+ CLB Service

This Application Center service polls the COM+ application cluster members to obtain individual COM+ server response times.

NOTE


Because the COM+ CLB service runs on every member of the front end (Web cluster or COM+ routing cluster), each member maintains its own list of component server response times. This design means that response time information for the back-end members isn't lost if one of the front-end members fails.

After obtaining response time information for each member that it's aware of—determined by the component server list—the COM+ CLB service organizes the list of polled members in ascending order according to their response times and writes this information to a shared memory table on the member that did the polling.

Polling

Members of a Web cluster or COM+ routing cluster that activate components on a COM+ application cluster poll the application cluster's members every 200 milliseconds to obtain information about their response times. A member's response time, relative to the other members, provides an indicator of the load on each member. After each poll, the members are placed in a table in order of increasing response time, and subsequent activation requests are sent to each member in the order they appear in the table. The table of server response times is the pivotal element for distributing component requests across a CLB cluster.

The CLB Tracker and Tracker Objects

The COM+ CLB service uses two objects to gather response time information during polling:

  • The CLB Tracker object, shown in Figure 5.12, ships with Application Center.
  • The Tracker object, shown in Figure 5.13, ships with the Windows operating system.

Figure 5.12 The CLB Tracker object and its interfaces

Application Center installs the CLB Tracker object on a computer during the set-up process. The CLB Tracker is activated only on COM+ application cluster members that are being polled by a routing cluster.

Figure 5.13 The Tracker object and its interfaces

The Tracker object is installed as part of the Windows 2000 Server set-up process and is active on all servers running Windows 2000 Server. This object's instantiation on a server provides the response time data that's used to determine which COM+ server should handle incoming client requests.

The Polling Process

The polling process for a single front-end routing server polling the back-end application servers in its routing list consists of the following steps (a simplified code sketch follows the list):

  1. The COM+ CLB service on a front-end member reads the component server routing list to determine which members have to be polled.
  2. The service calls into an instance of the CLB Tracker object running on the first member in the list.
  3. The CLB Tracker object calls into the Tracker object and gathers response time information for the target.
  4. The CLB service receives the response time information from the CLB Tracker object and stores it in memory.
  5. The CLB service moves to the next member in the routing list and repeats the preceding steps until every member in the component server list is polled.
  6. The CLB service orders the list of members (and their response times) that it's holding in memory in ascending order according to response time, and then writes this information to a shared memory table on the front-end member.
  7. After 200 milliseconds, the polling process is repeated.
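To make these steps concrete, here is a minimal C++ sketch of the polling loop as one front-end member might run it. This is not Application Center code: PollMember, MemberStatus, and PollRoutingList are hypothetical stand-ins for the CLB Tracker/Tracker call chain and the shared memory table, and the 200-millisecond interval is taken from the steps above. Note that a member that can't be polled is simply left out of the table, which is the behavior described in the sidebar that follows.

// Hypothetical sketch of the CLB polling loop on one front-end member.
// PollMember() stands in for the CLB Tracker -> Tracker call chain; the
// real service publishes its results to a shared memory table.
#include <algorithm>
#include <chrono>
#include <optional>
#include <string>
#include <thread>
#include <vector>

struct MemberStatus {
    std::wstring name;        // component server name from the routing list
    double       responseMs;  // response time reported during the poll
};

// Placeholder: ask the CLB Tracker object on `member` for its response time.
// Returns no value if the member can't be contacted, so the member is
// treated as offline for CLB until a later poll succeeds.
std::optional<double> PollMember(const std::wstring& member)
{
    (void)member;
    return 25.0;  // simulated response time; a real poll is a DCOM call
}

// Steps 1-6: poll every member in the routing list and sort the results
// in ascending order of response time.
std::vector<MemberStatus> PollRoutingList(const std::vector<std::wstring>& routingList)
{
    std::vector<MemberStatus> table;
    for (const auto& member : routingList) {
        if (auto ms = PollMember(member)) {
            table.push_back({member, *ms});
        }
        // An unreachable member is simply omitted from the table.
    }
    std::sort(table.begin(), table.end(),
              [](const MemberStatus& a, const MemberStatus& b) {
                  return a.responseMs < b.responseMs;
              });
    return table;
}

// Step 7: repeat the poll every 200 milliseconds.
void PollingLoop(const std::vector<std::wstring>& routingList)
{
    for (;;) {
        std::vector<MemberStatus> table = PollRoutingList(routingList);
        // The real service writes `table` to shared memory for the
        // CLB Activator to read; that part is omitted here.
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}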

Is it a heartbeat?


In a sense, the component server polling activity serves the same purpose as an NLB or Application Center cluster heartbeat.

If an instance of the CLB Tracker object can't be contacted on the target, the member's response time can't be added to the response-time table. When this table is parsed to determine where to route a COM+ request, the member simply isn't there. For all intents and purposes, the member is offline as far as CLB is concerned.

If the member can be polled during the next cycle, its name will reappear in the response-time table and it will be back in the load-balancing loop.

Figure 5.14 illustrates the polling process with a single front-end member and three back-end CLB cluster members. There is some overhead involved in having every member of a front-end cluster poll each of the back-end members, and this overhead, along with the time it takes requests to traverse the network, has to be taken into account when you're planning to distribute an application across tiers.


Figure 5.14 The CLB server polling process

Now let's take a look at the remaining piece of CLB, the CLB Activator.

The CLB Activator

This program processes all the incoming CoCreateInstance and CoGetClassObject requests for components marked as "supports dynamic load balancing" and, after parsing the response-time table, changes the incoming RemoteServerName value to the name of the component server that should handle the request. COM+ on the routing server then forwards the request to COM+ on the selected component server, which instantiates the object and returns a response to the original client with its server address. All subsequent method calls on the object are made directly from the client to the component server for the lifetime of the object.

Let's use Figure 5.15 to demonstrate this CLB routing process. For the purpose of this example, let's assume that the server response-time table has just been updated for the front-end member and that the first request is coming in.

After the CLB Activator receives the incoming CoCreateInstance, it parses the response-time table. Server S3 has the lowest response time (25 ms); therefore, it has the lowest load and should receive the request. The CLB Activator changes the value of RemoteServerName to "S3" and passes this information to COM+, which in turn directs the request to S3. When the next CoCreateInstance request comes in, the CLB Activator implements the round-robin aspect of CLB: it identifies S1 as the next least-loaded cluster member and changes the RemoteServerName value to "S1". Once again, this information is passed to COM+, which sends the new request to S1.

After the CLB Activator processes the last server in the list, it returns to the top of the list and continues assigning server names in round-robin fashion. This looping through the server list continues on the front-end member until the response-time table is refreshed by the next poll, at which point the CLB Activator starts again at the top of the newly ordered list.
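The selection logic itself amounts to walking a sorted list with a wrapping cursor. The following sketch reproduces that behavior using the example values above; it is only an illustration, since the real CLB Activator performs this step inside COM+ activation and rewrites RemoteServerName rather than printing its choice.

// Minimal sketch of the round-robin selection the CLB Activator performs
// over the response-time table. Server names match the example above; the
// rewriting of RemoteServerName inside COM+ activation is not shown.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct ResponseTimeTable {
    std::vector<std::string> servers;   // sorted in ascending response time
    std::size_t              next = 0;  // round-robin cursor
};

// Pick the server for the next activation request and advance the cursor,
// wrapping back to the top of the list after the last entry.
std::string NextServer(ResponseTimeTable& table)
{
    std::string chosen = table.servers[table.next];
    table.next = (table.next + 1) % table.servers.size();
    return chosen;
}

int main()
{
    // After a poll: S3 answered fastest (25 ms), so it heads the list.
    ResponseTimeTable table{{"S3", "S1", "S2"}, 0};

    for (int request = 1; request <= 5; ++request) {
        // The CLB Activator would set RemoteServerName to this value.
        std::cout << "request " << request << " -> " << NextServer(table) << '\n';
    }
    // Prints S3, S1, S2, S3, S1 - looping until the next poll replaces the table.
}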


Figure 5.15 The CLB routing process

NOTE


In terms of processing overhead, the greatest hit occurs while the appropriate host is being identified and the object is being instantiated on that member. After that point, client-to-member communication is direct and is not slowed down by any intervening layers.
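For context, this is roughly how a Win32 client might direct an activation request at the front end so that CLB can pick the component server. The component, its interface, and both GUIDs are placeholders invented for the example; "RKWebCluster" is simply the cluster from Figure 5.11, used here as an example of a routing-capable front end. The key point is that the client names the front-end cluster at activation time, and every call after activation goes straight to whichever component server CLB selected.

// Hypothetical Win32 client that directs an activation request at the
// front-end cluster by name. The component, its interface, and both GUIDs
// are placeholders; "RKWebCluster" is the Web cluster from Figure 5.11.
#include <windows.h>
#include <objbase.h>

// Placeholder GUIDs; real values would come from the component's type library.
static const CLSID CLSID_OrderProcessor =
    {0x11111111, 0x1111, 0x1111, {0x11,0x11,0x11,0x11,0x11,0x11,0x11,0x11}};
static const IID IID_IOrderProcessor =
    {0x22222222, 0x2222, 0x2222, {0x22,0x22,0x22,0x22,0x22,0x22,0x22,0x22}};

HRESULT CreateOnCluster(IUnknown** ppObject)
{
    COSERVERINFO serverInfo = {};
    serverInfo.pwszName = const_cast<LPWSTR>(L"RKWebCluster");  // front-end cluster

    MULTI_QI qi = {};
    qi.pIID = &IID_IOrderProcessor;

    // The CLB Activator on the front end rewrites the target server, so the
    // object is actually created on the least-loaded component server.
    HRESULT hr = CoCreateInstanceEx(CLSID_OrderProcessor, nullptr,
                                    CLSCTX_REMOTE_SERVER, &serverInfo, 1, &qi);
    if (FAILED(hr)) return hr;
    if (FAILED(qi.hr)) return qi.hr;

    // From here on, calls through this pointer go directly to the component
    // server for the lifetime of the object.
    *ppObject = qi.pItf;
    return S_OK;
}

int main()
{
    if (FAILED(CoInitializeEx(nullptr, COINIT_MULTITHREADED))) return 1;

    IUnknown* pObject = nullptr;
    if (SUCCEEDED(CreateOnCluster(&pObject))) {
        // ... call methods on the component, then release it ...
        pObject->Release();
    }

    CoUninitialize();
    return 0;
}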


