Application Server Anatomy


The WebSphere Application Server environment consists of a number of elements, such as Deployment Managers and Node Agents, that work together to deliver the function of this flexible and robust J2EE execution environment. To better appreciate how this application environment works, it helps to understand these elements and the components from which they are formed. The components depicted in the following diagram are not the only ones present in these elements, but they are among the most significant:

[Diagram: significant components of the WebSphere environment]

The following list summarizes these components and briefly describes the purpose of each:

  • CosNaming
    Provides a CosNaming implementation and access to the WebSphere system namespace. The CosNaming NameService is accessible by default through the CORBA Interoperable Name Service (INS) bootstrap port, 2809. The CosNaming service is accessed by J2EE components through JNDI.

  • eWLM
    Provides the workload management function that enables smart workload balancing across clustered servers.

  • JMX
    Provides the communication backbone of the system management function. Through this component, administrative requests that originate in the Admin application are propagated through the JMX network to the nodes and applications to which the administrative action corresponds. JMX provides access to management components called MBeans.

  • Admin Application
    The J2EE application that provides the administrative function accessible through an Internet browser, better known as the WebSphere Admin console.

  • LSD
    Provides a CORBA Location Service Daemon. This function routes standard IIOP requests to the correct destination server.

  • Security Manager
    Provides other WebSphere components with a standardized access point to the configured security environment. It typically works in conjunction with a third-party security server.

  • WebC
    A standard J2EE web container, which hosts J2EE web modules.

  • EJBC
    A standard J2EE EJB container, which hosts J2EE EJB modules.

  • HTTP Server
    WebSphere includes the IBM HTTP Server. This server is useful for serving static web pages. Through use of a WebSphere plug-in, it can route requests for J2EE web components, such as servlets and JSP pages, to WebSphere application servers. WebSphere also includes plug-ins for other leading web servers, such as Apache and Netscape.

  • Security Server
    The WebSphere Security Manager component works either with a provided registry, including native and LDAP registries, or in conjunction with an external security server, for example, Tivoli Access Manager. Please see Chapter 13 for additional discussion of security options.

  • JMS Server
    WebSphere includes an integrated JMS server for J2EE 1.3 compliance, based on IBM's long-established MQSeries.

  • Database Server
    WebSphere app servers typically host J2EE applications that require access to your business's data assets. An app server will typically connect to any of a variety of external database servers for that purpose – for example, IBM's DB2 relational database.

CosNaming Namespace

A CosNaming namespace is a type of directory in which a J2EE application server may store references to the applications and resources (such as JDBC data sources) configured to it. A client may find an application by looking it up in the directory; similarly an application may find a resource in the directory.

WebSphere features a fully compliant, yet unique, implementation of the CORBA CosNaming standard. CosNaming is required as the mechanism underlying the Java Naming and Directory Interface (JNDI), which is the J2EE API for accessing the directory.
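In practice, a J2EE client reaches this namespace through ordinary JNDI calls; WebSphere's JNDI provider performs the CosNaming operations underneath. The following sketch is illustrative only – the host name and JNDI binding name are assumptions, not values from the text:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class NamespaceLookup {
    public static Object lookupDataSource() throws NamingException {
        Hashtable env = new Hashtable();
        // WebSphere's JNDI implementation is layered over CosNaming
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.ibm.websphere.naming.WsnInitialContextFactory");
        // Bootstrap through the default INS port 2809
        // ("wasnode1" is an assumed host name)
        env.put(Context.PROVIDER_URL, "corbaloc:iiop:wasnode1:2809");
        Context ctx = new InitialContext(env);
        // "jdbc/SampleDS" is an assumed data source binding
        return ctx.lookup("jdbc/SampleDS");
    }
}
```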

What distinguishes the WebSphere implementation of CosNaming from the standard is that the name service, and the namespace that it represents, is distributed across all WebSphere nodes and servers within the cell. The distributed nature of this name service results in no single point of failure for the name service across the cell; the loss of any particular server does not result in a loss of the name service, or a loss of access to the namespace. Additionally, the namespace is partitioned into different segments, each possessing its own unique characteristics, which will be described in detail later in this section.

The namespace is organized to model the cell-wide topology. Each Node Agent and app server, including the Deployment Manager, houses its own unique view of the namespace. Typically, each node contains namespace entries for only those app servers that are configured on that node. Each app server's namespace contains entries for only those applications and resources defined to that app server. Junctions (or links) between nodes and servers are established through the use of corbanames (part of the INS standard). A corbaname provides a standardized mechanism for establishing remote linkages between naming contexts spanning physical processes. The following diagram depicts the segments in the WebSphere namespace:

[Diagram: segments of the WebSphere namespace]

The segments from the preceding diagram are:

  • Read only
    This segment exists in each app server and Node Agent. It is built from the information contained in the WebSphere configuration repository. Corbanames are used to establish junctions between remote elements in the cell, such as app servers and Node Agents. For example, following the path /nodes/<node-name>/servers/app server 3 from inside either the Node Agent or app server 1 or 2 would follow the corbaname that addresses the app server 3 namespace. Traversing that junction would result in a remote flow from the current namespace into the namespace of app server 3. The following diagram depicts the namespace binding that establishes this linkage:

[Diagram: namespace binding linking to app server 3]

  • Read/write persistent
    For applications that require persistent namespace bindings, there is a read/write namespace segment at both the node and cell level, which backs all writes to the file system. The Node Agent and Deployment Manager own the master copy of the node and cell persistent namespace segments respectively. Write requests to these segments are communicated to the Node Agent or Deployment Manager through JMX, then in turn, the Node Agent or Deployment Manager distributes the changes to all other Node Agents and app servers – also through JMX.

  • Read/write transient
    The namespace in each app server is read/write. During app server initialization, the app server will bind resource and EJB references into the namespace according to the configuration repository for that app server. Applications accessing the app server's namespace are free to read existing namespace bindings, or create new ones. However, any namespace changes made by the application are not permanent – they are not persisted to any backing store – and are lost when the app server is stopped.

Bootstrapping the NameSpace

To locate an application, a client must connect (or "bootstrap") to the namespace. Bootstrapping refers to the process of obtaining a JNDI initial context, then looking up and accessing J2EE components, typically EJBs. The namespace has several connection points; each is called a bootstrap address. Bootstrap addresses exist at the following points within a cell:

  • Each app server

  • Each Node Agent

  • The Deployment Manager

An application may bootstrap into any of these points in order to locate an application and thereby establish initial contact with the application itself. In a larger configuration, it may be difficult for a client to know whether a particular server or node is active at any given point in time. The basic way to insulate a client from the current operational status of any particular node/server pair is to configure the servers into clusters across two or more nodes, bootstrap into the Deployment Manager, and navigate the namespace to the cluster hosting the target application. By always keeping the Deployment Manager up, you have a reliable point of entry into the namespace. Clustering your servers increases application availability: you can lose one node and still have access to the function of the application. By looking up the application through the cluster context in the namespace (refer to the Namespace Segment Characteristics diagram above), your application will gain access to the application running on one of the available nodes; the junctions in the namespace automatically account for the current status of a node and ensure you get routed to an active node if there is one.
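As a hedged illustration of this pattern, the sketch below bootstraps into the Deployment Manager's default bootstrap port and looks up an EJB home through the cluster context; the host name, cluster name, and binding name are assumptions:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ClusterBootstrap {
    public static Object lookupClusteredHome() throws NamingException {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.ibm.websphere.naming.WsnInitialContextFactory");
        // The Deployment Manager's bootstrap port is 9809 by default
        // ("dmgrhost" is an assumed host name)
        env.put(Context.PROVIDER_URL, "corbaloc:iiop:dmgrhost:9809");
        Context ctx = new InitialContext(env);
        // Navigate the namespace to the cluster hosting the application;
        // "AccountCluster" and "ejb/AccountHome" are assumed names
        return ctx.lookup("cell/clusters/AccountCluster/ejb/AccountHome");
    }
}
```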

The Deployment Manager

The Deployment Manager is really just a WebSphere app server that executes the WebSphere Admin application, which itself is merely a J2EE application. The presentation elements of the Admin application are accessed through a standard Internet browser, and are known collectively as the Admin console. The Deployment Manager runs in a separate node, which may run on a separate operating system instance, or be collocated with one or more other nodes hosting app servers.

The Deployment Manager has a CosNaming namespace, accessible through bootstrap port 9809 by default. By bootstrapping an application through the Deployment Manager, an application may gain access to the services of the eWLM component. Through the eWLM component, the Deployment Manager has a view of all server clusters within the administrative domain, and can distribute work across these clustered servers.

The Deployment Manager, through the Admin application, is the center of control for managing the cell-wide WebSphere configuration. The Admin application uses JMX (Java Management Extensions), which is a message-based mechanism for coordinating administrative and operational requests across the WebSphere nodes. Each node, including the Deployment Manager, houses a JMX component. The JMX component listens on port 8880, by default, for SOAP/HTTP requests. JMX can also be configured for access through RMI/IIOP. HTTP is the default because it is easier to configure for firewall access and supports admin clients based on web services. RMI/IIOP is available for building admin clients based on the RMI programming model. RMI/IIOP supports transactional semantics and additional security protocols, such as CORBA's CSIv2 – a network-interoperable security standard. You can read more about web services and security topics in Chapters 8 and 13 respectively. Through JMX, the Admin application can drive requests to one, several, or all app servers. For example, the JMX flow for starting all app servers on a single node is illustrated below:

[Diagram: JMX flow for starting all app servers on a node]

The Admin application is able to coordinate requests through the cell-wide JMX network because the WebSphere topology is known at the Deployment Manager by way of a locally stored, cell-wide configuration repository. The Admin application owns and manages the master copy of this repository. When an administrator makes changes to the cell topology, such as adding a node or server, the master repository is updated and the changes are distributed to the other nodes and servers within the cell. Each node and server has its own local, read-only view of the repository data. This is important, because it eliminates a single point of failure by not requiring app servers to connect to the Deployment Manager to read their current configuration.
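Custom admin clients can drive this JMX network programmatically. The following sketch, assuming the default SOAP connector and an illustrative host name, connects to a JMX endpoint using WebSphere's AdminClient API:

```java
import java.util.Properties;
import com.ibm.websphere.management.AdminClient;
import com.ibm.websphere.management.AdminClientFactory;

public class JmxConnection {
    public static AdminClient connect() throws Exception {
        Properties props = new Properties();
        // Use the default SOAP/HTTP connector on port 8880
        props.setProperty(AdminClient.CONNECTOR_TYPE,
                AdminClient.CONNECTOR_TYPE_SOAP);
        // "dmgrhost" is an assumed host name
        props.setProperty(AdminClient.CONNECTOR_HOST, "dmgrhost");
        props.setProperty(AdminClient.CONNECTOR_PORT, "8880");
        return AdminClientFactory.createAdminClient(props);
    }
}
```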

The Node Agent

The Node Agent is a special process whose primary purpose is to provide node-wide services to the app servers configured on that node. The Node Agent listens on the standard CORBA INS bootstrap port 2809. CORBA IIOP clients booting into the node go to the Node Agent by default. J2EE clients typically use JNDI to access the namespace; the WebSphere JNDI implementation functions as a CORBA IIOP client on behalf of the J2EE client. The Node Agent houses its own copy of the read-only portion of the WebSphere namespace. Navigating through the Node Agent's namespace, an application may walk the namespace into the server of its choice to find those EJB references that it requires to start its processing. These EJB references are actually CORBA Interoperable Object References (IORs) that provide the application with RMI/IIOP access to the referenced EJBs.

The Node Agent keeps track of each of the active servers on its node. The Node Agent provides a Location Service Daemon (LSD) that is able to direct RMI/IIOP requests to the correct server. The IORs from the namespace are actually indirect references to EJB home interfaces. According to the EJB specification, an application may use an EJB home interface to create or find EJBs of a particular type, where each EJB type has its own unique home interface. The first use of a home IOR is routed to the Node Agent's LSD, so that the indirection can be resolved to a specific server. This is called "location forwarding" by the CORBA standard. After the home IOR is forwarded to the correct server, the J2EE client's RMI/IIOP requests flow directly to the app server from that point on.

The following diagram depicts this flow:

[Diagram: location forwarding flow]

In the preceding diagram, the J2EE client's first step is to obtain an initial JNDI context; in the second step, the client looks up an EJB home interface through that context and receives the indirect IOR HomeIOR:N1:2809 <AS2 Key> in return. The location information in this IOR points back to the Node Agent (N1), but indicates through the object key (AS2 Key) that the target object resides in AS2. The client's first use of this IOR (the third step) causes a locate request to flow to the location indicated in the IOR, which is the Node Agent on node N1. Once the locate request arrives at N1, it is passed to the LSD, which in turn responds with the forwarding IOR HomeIOR:AS2:9810 <AS2 Key>, enabling the client ORB to interact directly with the target application server (AS2) – that is where the findByPrimaryKey() invocation will actually execute in this example.

Note

The IOR contents depicted in the drawing are for illustrative purposes only and do not reflect the actual format and content of IORs used by WebSphere.
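From the client's perspective, all of this forwarding is transparent; the standard J2EE lookup sequence below triggers it implicitly. The Account interfaces and the JNDI name here are assumed for illustration:

```java
import java.rmi.RemoteException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.ejb.FinderException;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

// Assumed remote interfaces for an illustrative "Account" entity bean
interface Account extends EJBObject { }
interface AccountHome extends EJBHome {
    Account findByPrimaryKey(String id) throws RemoteException, FinderException;
}

public class AccountClient {
    public static Account find(String id) throws Exception {
        InitialContext ctx = new InitialContext();
        // "ejb/AccountHome" is an assumed binding; the lookup returns an
        // indirect IOR pointing at the Node Agent
        Object ref = ctx.lookup("ejb/AccountHome");
        // narrow() is required for RMI/IIOP references
        AccountHome home = (AccountHome)
                PortableRemoteObject.narrow(ref, AccountHome.class);
        // The first invocation triggers the LSD's location forwarding;
        // subsequent requests flow directly to the hosting app server
        return home.findByPrimaryKey(id);
    }
}
```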

The App Server

So far, all the WebSphere components we have looked at exist for the purpose of supporting the app server. As you would expect, the app server is the place where J2EE applications are deployed and hosted. WebSphere 5.0 supports the J2EE 1.3 architecture, so naturally you will find in the app server the typical components necessary to support the programming model: EJB and web containers, and the Java transaction, connector, and messaging APIs are among the standard components that compose the app server. Additionally, you will find various caches and pools, such as connection and thread pools. The following diagram depicts the essential components composing the app server:

[Diagram: essential app server components]

Particularly noteworthy is the app server's local CosNaming namespace – depicted in the diagram by the box labeled "Read/Write Transient". As described previously, the CosNaming namespace is distributed across the WebSphere topology and linked together with corbanames. The app server's portion of the namespace contains bindings for the resources installed and configured on that server. Programming artifacts such as EJB homes and connection factories are found in the local namespace. Because the namespace is local, the app server does not depend on an external server for directory access to local resources. And since the local namespace is built in memory during server initialization, it is inherently cached, boosting application performance.
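Application code sees this local namespace through the standard java:comp/env context. The resource reference name below is an assumption for illustration:

```java
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class LocalNamespaceLookup {
    public static Connection getConnection() throws Exception {
        // Inside the app server, this lookup resolves against the in-memory,
        // read/write transient namespace - no remote directory call is made
        InitialContext ctx = new InitialContext();
        // "jdbc/OrdersDS" is an assumed resource reference declared in the
        // module's deployment descriptor
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/OrdersDS");
        return ds.getConnection();
    }
}
```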

The other components depicted in the preceding diagram and previously introduced are:

  • EJB Cache – this is where the EJB container stores in-use EJBs.

  • Servlet Cache – this is where the web container stores in-use servlets.

  • Connection Pool – this is where the app server stores active database connections.

  • Thread Pool – this is where the app server maintains a collection of available threads for executing work.

  • ORB – this is the CORBA Object Request Broker. It manages remote method requests on EJBs.

  • JTS – this is the transaction manager (Java Transaction Service).

  • JCA – this is the Java Connector component, providing the basic runtime services required by JCA resource adapters.

  • JMS – this is the Java Message Service.

There are a number of other pools, caches, and components that are not depicted in the preceding diagram, which further serve to increase the performance and functionality of the WebSphere app server. While we will not describe these other features in detail, we will touch briefly upon the more significant ones:

  • The Prepared Statement Pool holds JDBC prepared statements. Combined with pooled connections, this spares the app server the cost of re-preparing frequently used statements, significantly improving data access performance.

  • The Data Cache holds CMP persistent data. Fed from the underlying database, this cache significantly improves the performance of CMP entity beans.

  • Dynacache is a flexible caching mechanism used primarily for caching web content. Serving data from a cache is fast, which makes caching a natural fit for web content; Dynacache is a key contributor to WebSphere's strong web performance.

  • Distributed Replication Service (DRS) is a new service in WebSphere 5.0. It is used for replicating data among clustered servers. Its initial application is caching and synchronizing stateful data, such as HTTP sessions and stateful session beans. DRS is a powerful and flexible facility, and it is available to your applications as well.

Workload Management

WebSphere includes powerful workload management (WLM) capabilities to leverage clustered servers by dynamically distributing work among the members of the cluster. This delivers significant scalability and availability qualities to WebSphere applications. WLM capabilities exist for both EJB and web component application requests, and there is a separate mechanism for each. This is necessary because of the inherent differences in their respective communication protocols. EJBs, driven by RMI/IIOP requests, use the IOR architecture to deliver special WLM instructions to the WebSphere client-side ORB. Web components are driven by HTTP/HTTPS, which has no provision for delivering WLM instructions; so instead, WLM instructions are configured directly in WebSphere's web server plug-in.

EJB WLM

For RMI/IIOP, the WebSphere WLM function uses the Deployment Manager as the point of control for establishing initial WLM decisions. The Deployment Manager periodically queries the cluster members for their current operational status, so that it can maintain updated routing information for clients who wish to access EJB resources available on that cluster. Clients bootstrap into the Deployment Manager to look up WLM-enabled EJB resources.

The Deployment Manager returns specially tagged IORs for such requests. The additional information in the IOR is a WLM routing table, which includes an initial list of routing choices for use by the client-side ORB. The client-side ORB distributes EJB requests across the servers represented in the routing table. The distribution is round robin by default, but can be adjusted by assigning WLM weights to the servers. Servers with heavier weights receive relatively more requests than servers with lower weights. This allows you to tune your WLM environment based on the size and/or workload mix of the servers in a cluster.
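The weighting behavior can be modeled in a few lines. This is only an illustrative sketch of weighted round-robin selection, not WebSphere's actual client-ORB implementation; the server names and weights are assumptions:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class WeightedRoundRobin {
    private final List<String> slots = new ArrayList<String>();
    private int next = 0;

    // Expand each server into 'weight' slots, so heavier servers
    // appear proportionally more often in the rotation
    public WeightedRoundRobin(Map<String, Integer> weights) {
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            for (int i = 0; i < e.getValue(); i++) {
                slots.add(e.getKey());
            }
        }
    }

    // Return the next server in the weighted rotation
    public String nextServer() {
        String server = slots.get(next);
        next = (next + 1) % slots.size();
        return server;
    }
}
```

With assumed weights AS1=2 and AS2=1, the rotation yields AS1, AS1, AS2, AS1, AS1, AS2, so AS1 receives twice as many requests as AS2.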

The WebSphere WLM data (routing table) is proprietary: only the WebSphere client ORB recognizes it. There is presently no Java or CORBA standard for conducting workload management. This does not prevent other ORBs from driving requests to a clustered server; it simply means no workload balancing occurs.

The following diagram depicts the run-time flows that occur in a WLM-enabled WebSphere environment:

[Diagram: run-time flows in a WLM-enabled environment]

In the diagram, the Deployment Manager receives status information from cluster members in response to its periodic queries. Clients boot into the Deployment Manager to look up EJB resources available on that cluster and receive special WLM routing information carried in the IORs that represent those EJB resources. The client-side ORB uses the routing table information to distribute requests across the cluster.

Dynamic Routing Updates

The operational status of the cluster may change dynamically over time. As the status of the app servers changes, they register and de-register with the Deployment Manager. In addition, the Deployment Manager periodically queries the registered app servers to remain aware of their current operational status. The Deployment Manager uses this knowledge to keep the routing tables updated. It also distributes the routing tables back to the Node Agents on each of the cluster members. This enables each node in the cluster to dynamically modify a client's routing table for requests received by that node.

This is important because the routing table a client acquires by bootstrapping to the Deployment Manager may be retained and utilized by that client for an extended period of time, so the possibility exists that the routing table becomes stale. Any cluster member can send back an updated routing table in the response messages it returns to the client. This way, the client-side ORB always has the best possible information available for workload routing decisions.

Affinity

Once the client-side ORB sends a request to a particular cluster member, an affinity to that server may be established for a period of time and must be remembered. Such affinities exist for things such as XA-transaction scope and stateful session beans. The IOR architecture is further leveraged to include affinity information, which the client-side ORB recognizes, so that requests are routed to the correct server until the affinity period terminates. For example, if the client starts an XA transaction, the client ORB will route all requests for the same entity bean back to the same server until the transaction completes.

Web Component WLM

Web components can undergo workload balancing as well. Although the HTTP protocol does not lend itself to the dynamic distribution of WLM instructions as IIOP does, the WebSphere plug-in can be configured to map a URL to multiple destinations. It then uses a round robin approach to spread requests across all of the app servers in the mapping. This enables simple, yet effective workload distribution for web requests. Cookies are used to maintain affinity knowledge so HTTP sessions may be routed back to the correct server. Basic operational awareness of the target servers is maintained by monitoring HTTP timeouts; a communication timeout removes a server from the destination list. The following diagram depicts this method of workload balancing:

[Diagram: web component workload balancing]

In the preceding diagram, two HTTP servers, each configured with a WebSphere plug-in, spread work across the same set of app servers. This configuration provides doubly redundant routing to the same set of applications. Although the diagram shows a symmetric configuration, symmetry is not required: through the WebSphere plug-in you can independently configure each HTTP server to route work only to those servers you choose.




Professional IBM WebSphere 5.0 Application Server (2001)