Managing a Network of Application Servers


Managing a network of application servers is largely the same as managing an individual server. The admin console is almost identical; the primary differences are the addition of clustering task items and a few other minor items that are unique to a networked environment.

Creating a Distributed Network

To create a network of application servers all managed from a common Deployment Manager, you first install the deployment manager on a particular computer, and then install one or more WebSphere Application Servers. You can install one of these application server images on the same computer as the deployment manager, or you can dedicate the deployment manager computer to only performing deployment. If you install the deployment manager and an application server on the same computer, you will not be able to start both at the same time (unless you change the listening ports of one or the other) until you federate the application server into the cell through the addNode command (see below). You can install the application server and the deployment manager in either order; you only have to install the deployment manager before you can form a managed network.

Likewise, you can run any of the application servers in a standalone configuration for as long as you want, installing applications and setting the configuration of each server, and still federate them into a managed network later.

For each computer on which you have installed an application server that you want to include in the network managed by the deployment manager, you have to register that node with the deployment manager. You do this by invoking the addNode command on the computer where your application server is installed.

Important

You should ensure the deployment manager is running before issuing the command.
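For example, on a Linux or Unix system you might start the deployment manager from its bin directory before running addNode. The installation path shown here is illustrative; substitute your own installation root:

    cd /opt/WebSphere/DeploymentManager/bin
    ./startManager.sh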

The syntax of the addNode command is:

addNode cell_host [cell_port] [-conntype <type>] [-includeapps]
        [-startingport <portnumber>] [-noagent] [-quiet] [-nowait]
        [-logfile <filename>] [-replacelog] [-trace]
        [-username <uid>] [-password <pwd>] [-help]

The only required parameter is the host name or IP address of the computer on which your deployment manager is installed. By default, the addnode command will attempt to contact the deployment manager at the standard port number for the deployment manager – 8879. If you reconfigure the deployment manager to listen at a different port, you will have to specify that port number in this command.

By default, this command will use the SOAP connector to communicate with the deployment manager. If you want it to use another connector, you will have to specify that with the -conntype parameter.

If you have been using the application server(s) on this computer, and you want to continue to support those applications after the servers have been federated into the cell, you need to use the -includeapps parameter. This forces the application configuration to be synchronized to the deployment manager as part of the registration process.

If security has been enabled on the deployment manager, you will have to specify a user ID and password for an administrator with authority to change the configuration of the system, using the -username and -password parameters.

If you specify the -noagent switch, the node agent will not be started automatically at the completion of this command.
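Putting these parameters together, a typical federation command might look like the following; the host name, user ID, and password are placeholder values:

    addNode dmgr.example.com 8879 -conntype SOAP -includeapps -username wasadmin -password secret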


After issuing this command, the node agent will be started, the node will be registered with the deployment manager, and the configuration will be synchronized between the central repository of the deployment manager and the local repository at the node.

If the admin console application had been installed on the application server before it was registered into the cell, that application will be deconfigured at the application server, and re-configured to be hosted at the deployment manager.

Note that the addNode process does not merge any of the configuration documents in the cell directory of the application server's configuration repository. These include:

  • filter.policy – contains the Java 2 Security permissions that an application cannot be granted, even if requested in the app.policy or was.policy files

  • integral-jms-permissions.xml – grants method permissions to users of JMS queues and topics for the integral JMS provider

  • namestore.xml – maintains any cell-level name bindings created from the application server

  • pmirm.xml – the PMI request metric filters established for the cell

  • resources.xml – configuration information for any resources created/modified for the application server

  • variables.xml – environment variables created for the application server

  • virtualhosts.xml – the virtual hosts definitions used for applications hosted in the application server

Generally, you should re-create any of the information contained in these documents through the Admin UI or the command-line interfaces of the deployment manager. If you're feeling particularly brave, you can hand-edit these documents in the cell configuration repository under the deployment manager, but be sure to back up your original copies in case you make a mistake.
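For example, you could re-create a cell-level variable (the kind of information held in variables.xml) through the wsadmin scripting client connected to the deployment manager. This JACL fragment is only a sketch of the approach; the cell name, variable name, and value are placeholders:

    # Locate the cell and its variable map in the master repository
    set cell [$AdminConfig getid /Cell:MyCell/]
    set varMap [$AdminConfig list VariableMap $cell]

    # Re-create a variable that was not carried over during federation
    $AdminConfig create VariableSubstitutionEntry $varMap {{symbolicName APP_DATA_ROOT} {value /opt/appdata}}

    # Save the change to the repository
    $AdminConfig save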

Creating a Cluster from an Existing Server

The biggest reason for forming a cell is to enable central administration of multiple application servers on different computers. One benefit of this is being able to form clusters – multiple application server instances that collectively host the same application. You can create a new cluster composed entirely of new application server instances, or form a cluster from an existing server – adding more application server instances to increase the capacity of the cluster.

To create a cluster, press the New button on the Clusters page. This will present the new cluster wizard.


You can then enter the name of the cluster. The Prefer local option instructs the workload manager to select a clustered bean instance in the same server process whenever possible.

The Internal replication domain refers to the configuration of the HTTP session replication service. If you want to form a replication group that mirrors the cluster topology, then enable this option and the replication group will be formed automatically to match the cluster configuration.

If you want to form a cluster from an existing application server then select the Existing server option, and specify the name of the server to use.

The weighting value is used by the workload manager to distribute work in proportion to the weights that you give to each application server instance in the cluster.

You can define additional (new) application server instances to include in the cluster in Step 2.


Fill in the application server name, and press the Apply button for each application server instance that you want to add. If you have registered more than one node into the cell, you can select different nodes for each application server instance in the cluster.

You can then review the cluster creation request and press the Finish button to create the configuration change. Remember, as usual, you will need to save your configuration change to the repository.
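If you prefer scripting, the same change can be made through wsadmin. The following JACL fragment is a sketch of the approach; the cell, cluster, node, and server names are all placeholders:

    # Create the cluster definition in the cell
    set cell [$AdminConfig getid /Cell:MyCell/]
    set cluster [$AdminConfig create ServerCluster $cell {{name MyCluster}}]

    # Add a new member on a federated node, with a workload weight of 2
    set node [$AdminConfig getid /Node:node1/]
    $AdminConfig createClusterMember $cluster $node {{memberName server2 weight 2}}

    # As always, save the change to the configuration repository
    $AdminConfig save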

Once the cluster definition has been created, you can start the cluster by selecting the cluster and pressing the Start button.


That action will start the cluster and all of the application servers in the cluster (on whatever node the application servers have been defined on). Unlike starting multiple servers on the Application Servers page, all of the servers in the cluster are started in parallel when you start them as a cluster. If you want to start the application servers individually, you can do so from the Application Servers page. You don't have to start every server in the cluster – the workload manager will distribute work over the set of application servers that are available and running at any given time. You can also cause the servers to be started one at a time by pressing the Ripplestart button on the cluster page.

If you start the cluster, the actual initiation of the application servers in the cluster will be performed in the background. You can verify the state of the application servers as they are starting through the Application Servers page. You may have to press the Refresh button to get updated information on the status of these servers.
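The same operations are available from wsadmin through the cluster's MBean, which the deployment manager registers once the cluster definition exists. A JACL sketch, with the cluster name as a placeholder:

    # Look up the Cluster MBean and start all of its members
    set cl [$AdminControl completeObjectName type=Cluster,name=MyCluster,*]
    $AdminControl invoke $cl start

    # Check the aggregate state of the cluster while it starts
    $AdminControl getAttribute $cl state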

When you form a cluster out of an existing application server, any applications that were configured to the server are promoted up to being configured to the cluster. You can see this by selecting an application and checking which servers its modules are associated with. In fact, when you install new applications after forming the cluster, the original application server is no longer a server that you can associate with the application – you can only select the cluster (and any other servers that are not members of a cluster).

You can create several clusters, according to your topology needs.

Adding and Removing Servers in the Cluster

You can go back later and add application servers to, or remove them from, the cluster by selecting the cluster definition and then selecting the Cluster Members link.


From the cluster members page you can add new application servers by pressing the New button.

You can also remove an application server from the cluster by selecting it and pressing the Delete button. The application server you are deleting must be stopped, which you can do from the same cluster members page.
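The equivalent scripted change is to stop the member and then remove its definition from the configuration. This JACL fragment is only a sketch, and the server, node, and cluster names are placeholders:

    # Stop the running member before removing it
    set server [$AdminControl completeObjectName type=Server,name=server2,node=node1,*]
    $AdminControl invoke $server stop

    # Remove the member from the cluster definition and save
    set member [$AdminConfig getid /ServerCluster:MyCluster/ClusterMember:server2/]
    $AdminConfig remove $member
    $AdminConfig save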


Any applications that are associated with the cluster, including those that were introduced to the cluster as the result of forming it from an existing server, continue to be associated with the other servers in the cluster, even if you delete the original application server from which the cluster was formed.

As an aside, you cannot delete the cluster itself if any applications are configured to the cluster – or to any of the application servers in the cluster.

Rippling a Cluster

Once you've created a cluster and installed your applications and have it all running in a production environment, you will likely want to keep the cluster up and running for as long as possible – especially if you operate an around-the-clock global Internet web site. However, there may be advantages to restarting application servers on occasion – for example, if your application leaks a little bit of memory, over time your application server will run out of available resources and stop working. Clusters are enormously resilient. You can stop individual application servers in the cluster and the other servers in the cluster automatically take over whatever workload was being processed by the lost server (assuming they have some amount of additional capacity).

It is a good idea to occasionally take an application server down and restart it (without taking down the rest of the cluster). Doing so allows the application server to release and clean up any resources that may have been consumed by the server or its applications over a period of time. Taking one server down and restarting it, then the next, and so on through the entire set of application servers, is called rippling the cluster. Since only one application server is down at a time, the rest of the cluster continues to operate and service requests from your clients; the outage of individual servers won't be noticeable to them.

WebSphere provides an operation for rippling the cluster. You can drive this operation manually from the clusters page by selecting the cluster and pressing the Ripplestart button.


It is often a good idea to build an admin script that ripples all of your clusters periodically – more or less frequently depending on the stability of your applications.
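A minimal JACL sketch of such a script, run through wsadmin against the deployment manager, might loop over the cluster MBeans and ripple each in turn:

    # Ripple every cluster in the cell, one at a time
    foreach cl [split [$AdminControl queryNames type=Cluster,*] \n] {
        puts "Rippling $cl"
        $AdminControl invoke $cl rippleStart
    }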



