6.2 Deployment Topologies


Partitioning a J2EE environment into four tiers, as depicted in Figure 6.2 on page 189, helps define the security requirements for the environment and the design of the topology that hosts the components.

6.2.1 Entry Level

A simple topology in an enterprise environment consists of a DMZ where the Web server is deployed (see Figure 2.8 on page 31). Figure 6.3 shows how to build a DMZ in a J2EE environment. All requests from the Internet are handled within the DMZ, and only authorized requests are allowed to enter the intranet. Based on the incoming request, the Web server routes the request to a WAS deployed behind the firewall. WASs behind the firewall host both the presentation layer and the business-logic layer. After successfully handling the request, the WAS sends a response back to the end user through the DMZ. This architecture is known as entry-level topology.

Figure 6.3. Entry-Level Topology


In the entry-level topology, the security of the environment is enforced by a combination of appropriate DMZ partitioning, firewalls, and the configuration of security policies at the Web server and the WASs. In particular, in an entry-level topology, in which typically only one Web server handles all the incoming requests, the firewall policies can be configured, in addition to the Web server and WAS security policies, to restrict access to resources on the Web server. For example, the firewall policies can be set up so that the firewall locks down all traffic except on the one port used to connect to the systems within the intranet.
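The lockdown idea can be sketched as a simple admission check. This is only an illustration of the policy logic; a real firewall enforces it at the network layer, and the host name and port below are hypothetical, not taken from the text.

```java
import java.util.Set;

/**
 * Sketch of the entry-level lockdown: the firewall admits intranet-bound
 * traffic only from the DMZ Web server's address on a single designated
 * port. All names and values are illustrative.
 */
class EntryLevelFirewall {
    // Hypothetical policy: only the DMZ Web server may connect, on port 9443.
    private static final Set<String> ALLOWED_HOSTS =
            Set.of("dmz-webserver.example.com");
    private static final int ALLOWED_PORT = 9443;

    /** Returns true if a connection attempt may enter the intranet. */
    static boolean admit(String sourceHost, int targetPort) {
        return ALLOWED_HOSTS.contains(sourceHost) && targetPort == ALLOWED_PORT;
    }
}
```

Everything else, including traffic from the Web server itself on other ports, is denied by default, which is the posture the entry-level topology relies on.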

6.2.2 Clustered Environment

In a clustered environment, with multiple Web servers and WASs deployed to handle requests to an enterprise, the security requirements tend to be different. It may no longer be simple to lock down a port and host for a Web server or to rely only on firewall functionality to restrict access to WASs and transaction servers, as in the entry-level topology. Given that requests need to be load balanced across a set of servers within a cluster, care should be taken to ensure that when a Web server receives a secure request from a particular client, the secure session that the server establishes with the client can be reused for subsequent requests.

A clustered environment contains multiple Web servers, WASs, and, possibly, multiple replicated directory and security servers. When a request arrives, it is typically handled by a load-balancing edge server.

In addition to load balancing the incoming requests, the edge server can filter the requests, based on the security requirements for the resources being accessed, acting as a secure reverse proxy server. Secure reverse proxy servers provide coarse-grained access control to the resources, in addition to the fine-grained control enforced by the downstream Web servers or WASs. Given the load-balancing characteristics of such an environment, WASs typically share some session state so that requests from a given user within a given period of time can be handled by any of the deployed WASs. Figure 6.4 shows a clustered-environment topology in which the edge server also acts as a secure reverse proxy server.

Figure 6.4. Clustered Environment with Secure Reverse Proxy Server

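The coarse-grained filtering performed at a secure reverse proxy can be sketched as a prefix-based policy table consulted before a request is forwarded downstream. The paths and role names below are hypothetical, and fine-grained checks would still occur at the Web servers and WASs behind the proxy.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of coarse-grained access control at a secure reverse proxy:
 * URL prefixes map to a required role. Rules are matched in insertion
 * order; "*" marks anonymously accessible resources. Illustrative only.
 */
class ReverseProxyFilter {
    private static final Map<String, String> POLICY = new LinkedHashMap<>();
    static {
        POLICY.put("/admin/", "administrator"); // most specific rule first
        POLICY.put("/accounts/", "customer");
        POLICY.put("/", "*");                   // everything else is public
    }

    /** Coarse-grained check: may a caller holding 'role' reach 'path' at all? */
    static boolean permit(String path, String role) {
        for (Map.Entry<String, String> rule : POLICY.entrySet()) {
            if (path.startsWith(rule.getKey())) {
                return rule.getValue().equals("*") || rule.getValue().equals(role);
            }
        }
        return false; // deny by default
    }
}
```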

In the case of intranet applications, clients can issue requests directly to the WASs. Some of these clients, known as fat clients, are typically nonbrowser applications, such as Java applications and applets, and C++ programs, as shown in Figure 6.4. An intranet application is sometimes accessible over a protocol specific to the application. For example, an enterprise bean can be accessed directly by IIOP clients within the intranet, whereas the same enterprise bean is accessed via IIOP by a front-ending servlet invoked by HTTP clients. Even in these cases, in which J2EE applications are accessed from within an intranet, the business-logic components need to be protected; the servers enforce access control by authenticating users over the application-specific protocol, such as IIOP, and enforcing the authorization policies associated with those applications.
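The point that enforcement is protocol independent can be sketched as a single method-permission table consulted by both the HTTP and the IIOP entry points once the caller is authenticated. The bean, method, and role names below are hypothetical, not from the text; in a real J2EE server this table would come from the deployment descriptor.

```java
import java.util.Map;
import java.util.Set;

/**
 * Sketch: whether an enterprise bean is reached via a front-ending
 * servlet (HTTP) or directly over IIOP, the same authorization policy
 * applies. The permission table is illustrative.
 */
class BeanAuthorization {
    // Hypothetical method-permission table, of the kind a deployment
    // descriptor would declare.
    private static final Map<String, Set<String>> METHOD_ROLES = Map.of(
            "AccountBean.getBalance", Set.of("teller", "manager"),
            "AccountBean.closeAccount", Set.of("manager"));

    /** True if a caller in 'role' may invoke 'method', regardless of protocol. */
    static boolean mayInvoke(String method, String role) {
        return METHOD_ROLES.getOrDefault(method, Set.of()).contains(role);
    }
}
```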

6.2.3 Adding Another Level of Defense

A widely adopted deployment topology is to use a portal server as a presentation-layer front end to business applications. Depending on the sensitivity of the data in the back end and the corporate security policies, additional security constraints may be desired. In such cases, portal servers and WASs can be separated by a firewall that restricts the requests that can reach the business applications. As depicted in Figure 6.5, such a firewall is placed between the portal servers and the WASs that host the business logic in the form of enterprise beans.

Figure 6.5. Clustered-Environment Topology with Additional Level of Defense


In order for a consistent security policy to be enforced, the portal server and the WASs are likely to share the same set of security and directory servers. Because it is undesirable to place the security and directory servers themselves in the DMZ, the security and directory servers used by components in the DMZ can hold a replicated subset of the enterprise directory. In Figure 6.5, such servers are placed in a third partitioned zone, which hosts the business-logic and data systems.

6.2.4 Defending with a Secure Caching Reverse Proxy Server

It is possible to introduce a reverse proxy server at the edge of the network to provide high-performance caching capabilities. This caching reverse proxy server can enforce security policies that are consistent with the WASs' security policies. In this case, the caching reverse proxy server includes a security plug-in (see Figure 6.6) or is coupled with a separate security server. This coupling helps handle requests securely within a DMZ such that if a request ends up being cached in the caching reverse proxy server, a response can be sent right from the cache without compromising the security of the environment.

Figure 6.6. Clustered Environment with a Caching Reverse Proxy Server and a Security Plug-in

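The coupling between the cache and the security plug-in can be sketched as follows: the authorization check runs before the cache lookup, so a cached response is never served to an unauthorized caller. The `Authorizer` interface stands in for the plug-in or external security server, and all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of a caching reverse proxy with a security plug-in. A cached
 * response is returned only after the plug-in authorizes the caller,
 * so caching never bypasses access control. Illustrative only.
 */
class SecureCache {
    /** Stand-in for the security plug-in or coupled security server. */
    interface Authorizer {
        boolean permit(String user, String path);
    }

    private final Map<String, String> cache = new HashMap<>();
    private final Authorizer authorizer;

    SecureCache(Authorizer authorizer) {
        this.authorizer = authorizer;
    }

    /** Caches a response generated by a downstream server. */
    void store(String path, String response) {
        cache.put(path, response);
    }

    /** Returns the cached response, or null if unauthorized or not cached. */
    String serve(String user, String path) {
        if (!authorizer.permit(user, path)) {
            return null; // the security check precedes the cache lookup
        }
        return cache.get(path);
    }
}
```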

Access to a system or a legacy application usually requires that the user identity be passed on downstream requests. Often, this identity needs to be the same as that of the user accessing the presentation or business-logic layers. In some cases, the identity needs to be mapped to a different value. For example, some business-application environments require that database access control be based on the end user accessing the presentation logic. In such cases, the credentials associated with the user can be securely passed to the back-end server. An example of such a credential is a Kerberos ticket.

In many scenarios, millions of users have access to a portal server. If enterprise-application access is required, the number of users known to the legacy application may be far smaller. In such scenarios, back-end applications need to establish a trust relationship with the WASs so that the end users' identities are filtered as the end users' requests traverse from the Internet to the back-end systems. At any such boundary, where the users known to a target server are different from or limited compared to those known to a front-end WAS, some form of identity mapping needs to take place so that an end user's credential, or identity, can be mapped to a different one known and trusted by the target application.

Where J2EE servers host the business logic to access legacy applications or databases, translation of an identity accessing the business logic to another identity meaningful to legacy applications or data systems is handled by a JCA connection manager (see Section 3.12 on page 97). The JCA connection manager constitutes an additional layer, known in this topology as the connector layer (see Figure 6.7).

Figure 6.7. Clustered Environment with a Connector Layer


The mapping of an identity in a business-logic layer when accessing a data system depends on the target environment and the policies associated with the connector layer. The relation that dictates the mapping of identities at the business-logic layer to identities known to back-end systems can be a many-to-one relationship. For example, all the requests from an enterprise bean to a database may be performed under a single identity, say db2client, that is known to the database. Alternatively, in a many-to-some relationship, the identity used to access a back-end system will be based on some group membership. For example, all employees of a human resources department will be mapped to the hr_emp identity, which will be used to access the database, and users who have managerial privileges will be mapped to the manager_user identity. In some cases, an end-user identity needs to be conveyed to the back-end system, and in such cases, there will be a one-to-one mapping. For example, user Bob accessing a Web application will be mapped to his identity, bob1, on the database system when the application performs database access.
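The three mapping relations just described can be sketched with simple lookup tables, reusing the identities named in the text (db2client, hr_emp, manager_user, bob1). The tables themselves and the precedence among the three relations are illustrative assumptions; a real connector layer would drive this from its deployment policies.

```java
import java.util.Map;

/**
 * Sketch of connector-layer identity mapping: one-to-one entries take
 * precedence, then many-to-some group mappings, and everything else
 * falls back to the many-to-one default identity. Tables are illustrative.
 */
class IdentityMapper {
    // Many-to-some: group membership decides the back-end identity.
    private static final Map<String, String> GROUP_TO_DB_ID = Map.of(
            "hr_employee", "hr_emp",
            "manager", "manager_user");

    // One-to-one: selected end users keep a personal database identity.
    private static final Map<String, String> USER_TO_DB_ID = Map.of("Bob", "bob1");

    /** Maps a caller to the identity presented to the database. */
    static String mapToDatabaseIdentity(String user, String group) {
        if (USER_TO_DB_ID.containsKey(user)) {
            return USER_TO_DB_ID.get(user);   // one-to-one
        }
        if (GROUP_TO_DB_ID.containsKey(group)) {
            return GROUP_TO_DB_ID.get(group); // many-to-some
        }
        return "db2client";                   // many-to-one default
    }
}
```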



Enterprise Java™ Security: Building Secure J2EE™ Applications
ISBN: 0321118898
Year: 2004