Business Tier Security Patterns


Audit Interceptor

Problem

You want to intercept and audit requests and responses to and from the Business tier.

Auditing is an essential part of any security design. Most enterprise applications have security-audit requirements. A security audit allows auditors to reconcile actions or events that have taken place in the application with the policies that govern those actions. In this manner, the audit log serves as a record of events for the application. This record can then be used for forensic purposes following a security breach.

That record must be checked periodically to ensure that the actions users have taken are in accordance with the actions allowed by their roles. Deviations must be noted from audit reports, and corrective actions must be taken, either through code fixes or policy changes, to ensure those deviations do not recur. The most important part of this procedure is recording the audit trail and ensuring that it captures the appropriate events and the user actions associated with them. These events and actions are often not completely understood or defined prior to construction of the application. Therefore, it is essential that an auditing framework be able to easily support additions or changes to the audited events.

Forces
  • You want centralized and declarative auditing of service requests and responses.

  • You want auditing of services decoupled from the applications themselves.

  • You want pre- and post-process audit handling of service requests, response errors, and exceptions.

Solution

Use an Audit Interceptor to centralize auditing functionality and define audit events declaratively, independent of the Business tier services.

An Audit Interceptor intercepts Business tier requests and responses. It creates audit events based on the information in a request and response using declarative mechanisms defined externally to the application. By centralizing auditing functionality, the burden of implementing it is removed from the back-end business component developers. Therefore, there is reduced code replication and increased code reuse.

A declarative approach to auditing is crucial to maintainability of the application. Seldom are all the auditing requirements correctly defined prior to implementation. Only through iterations of auditing reviews are all of the correct events captured and the extraneous events discarded. Additionally, auditing requirements often change as corporate and industry policies evolve. To keep up with these changes and avoid code maintainability problems, it is necessary to define audit events in a declarative manner that does not require recompilation or redeployment of the application. Since the Audit Interceptor is the centralized point for auditing, any required programmatic change is isolated to one area of the code, which increases code maintainability.
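As an illustration of this declarative approach, the sketch below loads audit-event definitions from a simple properties-style catalog. The format, class, and event names here are hypothetical, not part of the pattern's reference implementation. Because the catalog is external to the code, events can be added or dropped without recompiling or redeploying the application.

```java
import java.io.StringReader;
import java.util.Properties;

// Minimal sketch of an externally defined event catalog. Each entry
// maps a business operation to the audit event to record; operations
// absent from the catalog generate no event.
public class EventCatalogSketch {
    private final Properties catalog = new Properties();

    // In practice the catalog would be loaded from a file or other
    // external source; a String is used here to keep the sketch
    // self-contained.
    public EventCatalogSketch(String config) throws Exception {
        catalog.load(new StringReader(config));
    }

    // Returns the audit event name for an operation, or null if the
    // operation is not audited.
    public String eventFor(String operation) {
        return catalog.getProperty(operation);
    }

    public static void main(String[] args) throws Exception {
        String config =
            "transferFunds.pre=TRANSFER_ATTEMPT\n" +
            "transferFunds.post=TRANSFER_COMPLETED\n";
        EventCatalogSketch cat = new EventCatalogSketch(config);
        System.out.println(cat.eventFor("transferFunds.pre"));
        System.out.println(cat.eventFor("viewBalance.pre"));
    }
}
```

Changing which operations are audited is then purely a configuration edit, which is the maintainability property the pattern relies on.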

Structure

Figure 10-1 depicts the class diagram for the Audit Interceptor pattern. The Client attempts to access the Target. The AuditInterceptor class intercepts the request and uses the AuditEventCatalog to determine if an audit event should be written to the AuditLog.

Figure 10-1. Audit Interceptor class diagram


Figure 10-2 shows the sequence of events for the Audit Interceptor pattern. The Client attempts to access the Target, not knowing that the Audit Interceptor is an intermediary in the request. This approach allows clients to access services in the typical manner without introducing new APIs or interfaces specific to auditing that the client would otherwise not care about.

Figure 10-2. Audit Interceptor sequence diagram


The diagram in Figure 10-2 does not reflect the implementation of how the request is intercepted, but simply illustrates that the AuditInterceptor receives the request and then forwards it to the Target.

Participants and Responsibilities

Client. A client sends a request to the Target.

AuditInterceptor. The AuditInterceptor intercepts the request. It encapsulates the details of auditing the request.

EventCatalog. The EventCatalog maintains a mapping of requests to audit events. It hides the details of managing the life cycle of a catalog from an external source.

AuditLog. AuditLog is responsible for writing audit events to a destination. This could be a database table, flat file, JMS queue, or any other persistent store.

Target. The Target is any Business-tier component that would be accessed by a client. Typically, this is a business object or other component that sits behind a SessionFaçade, but not the SessionFaçade itself, because the SessionFaçade is usually the entry point that invokes the AuditInterceptor.

The Audit Interceptor pattern is illustrated in the following steps (see Figure 10-2):

1.

Client attempts to access Target resource.

2.

AuditInterceptor intercepts request and uses EventCatalog to determine which, if any, audit event to generate and log.

3.

AuditInterceptor uses AuditLog to log audit event.

4.

AuditInterceptor forwards request to Target resource.

5.

AuditInterceptor uses EventCatalog to determine if the request response or any exceptions raised should generate an audit event.

6.

AuditInterceptor uses AuditLog to log generated audit event.
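The steps above can be sketched in plain Java. All class and event names here are illustrative, and the interception mechanism itself (how the call reaches the interceptor) is omitted, as in Figure 10-2.

```java
import java.util.HashMap;
import java.util.Map;

// Step 1: the Client invokes a Target, unaware of the interceptor.
interface Target { String execute(String request) throws Exception; }

class AuditLog {
    void log(String event) { System.out.println("AUDIT: " + event); }
}

class EventCatalog {
    private final Map<String, String> events = new HashMap<>();
    EventCatalog() {
        // In practice these mappings would come from an external,
        // declaratively defined catalog.
        events.put("transfer.pre", "TRANSFER_ATTEMPT");
        events.put("transfer.post", "TRANSFER_COMPLETED");
        events.put("transfer.error", "TRANSFER_FAILED");
    }
    // Returns the event to generate, or null if none is configured.
    String lookup(String key) { return events.get(key); }
}

public class AuditInterceptorSketch {
    private final EventCatalog catalog = new EventCatalog();
    private final AuditLog auditLog = new AuditLog();

    public String intercept(String request, Target target) throws Exception {
        String pre = catalog.lookup(request + ".pre");       // step 2
        if (pre != null) auditLog.log(pre);                  // step 3
        try {
            String response = target.execute(request);       // step 4
            String post = catalog.lookup(request + ".post"); // step 5
            if (post != null) auditLog.log(post);            // step 6
            return response;
        } catch (Exception ex) {
            String err = catalog.lookup(request + ".error"); // step 5
            if (err != null) auditLog.log(err);              // step 6
            throw ex;
        }
    }

    public static void main(String[] args) throws Exception {
        AuditInterceptorSketch interceptor = new AuditInterceptorSketch();
        String result = interceptor.intercept("transfer", req -> "ok:" + req);
        System.out.println(result);
    }
}
```

Note that exceptions are audited and then rethrown, so the interceptor never swallows a failure that the client needs to see.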

Strategies

The Audit Interceptor pattern provides a flexible, unobtrusive approach to auditing Business tier events. It offers developers an easy-to-use approach to capturing audit events by decoupling auditing from the business flow. This allows business developers to disregard auditing and defer the onus to the security developers, who then only deal with auditing in a centralized location. Auditing can easily be retrofitted into an application using this pattern. By making use of an Event Catalog, the Audit Interceptor becomes decoupled from the actual audit events and therefore can incorporate changes in auditing requirements via a configuration file. The following is a strategy for implementing the Audit Interceptor.

Intercepting Session Façade Strategy

The Audit Interceptor requires that it be inserted into the message flow to intercept requests. The Intercepting Session Façade strategy designates the Session Façade as the point of interception for the Audit Interceptor. The Session Façade receives the request and then invokes the Audit Interceptor at the beginning of the request and again at the end of the request. Figure 10-3 depicts the class diagram for the Secure Service Façade Interceptor strategy.

Figure 10-3. Secure Service Façade Interceptor strategy class diagram


Using a Secure Service Façade Interceptor strategy, developers can audit at the entry and exit points to the Business tier. The SecureServiceFaçade is the appropriate point for audit interception, because its job is to forward to the Application Services and Business Objects. Typically, a request consists of several Business Objects or Application Services, though only one audit event is required for that request. For example, a credit card verification service may consist of one Secure Service Façade that invokes several Business Objects that make up that service, such as an expiration date check, a Luhn (mod-10) check, and a card type check. It is unlikely that each individual check generates an audit event; it is likely that only the verification service itself generates the event.
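As an aside, the Luhn (mod-10) check mentioned above is straightforward to implement. This sketch is independent of the auditing pattern itself and is included only to make the example service concrete.

```java
// Minimal implementation of the Luhn (mod-10) checksum used to
// validate credit card numbers. Assumes the input contains digits only.
public class LuhnCheck {
    public static boolean isValid(String number) {
        int sum = 0;
        boolean doubleIt = false;
        // Walk the digits right to left, doubling every second digit
        // and subtracting 9 when the doubled value exceeds 9.
        for (int i = number.length() - 1; i >= 0; i--) {
            int d = number.charAt(i) - '0';
            if (doubleIt) {
                d *= 2;
                if (d > 9) d -= 9;
            }
            sum += d;
            doubleIt = !doubleIt;
        }
        // The number is valid when the weighted digit sum is a
        // multiple of ten.
        return sum % 10 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isValid("4539148803436467"));
        System.out.println(isValid("4539148803436468"));
    }
}
```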

In Figure 10-3, the SecureServiceFaçade is the entry to the Business tier. It provides the remote interface that the Client uses to access the target component, such as another EJB or a Business Object. Instead of forwarding directly to the target component, the SecureServiceFaçade first invokes AuditInterceptor. The AuditInterceptor then consults the EventCatalog to determine whether to generate an audit event and, if so, what audit event to generate. If an audit event is generated, the AuditLog is then used to persist the audit event. Afterward, the SecureServiceFaçade then forwards the request as usual to the Target. On the return of invocation of the Target, the SecureServiceFaçade again calls the AuditInterceptor. This allows auditing of both start and end events. Exceptions raised from the invocation of the Target also cause the SecureServiceFaçade to invoke the AuditInterceptor. More often than not, you want to generate audit events for exceptions.

Figure 10-4 depicts the Secure Service Façade Interceptor strategy sequence diagram.

Figure 10-4. Secure Service Façade Interceptor strategy sequence diagram


Consequences

Auditing is one of the key requirements for mission-critical applications. Auditing provides a trail of recorded events that can tie back to a Principal. The Audit Interceptor provides a mechanism to audit Business-tier events so that operations staff and security auditors can go back and examine the audit trail and look for all forms of application-layer attacks. The Audit Interceptor itself does not prevent an attack, but it does provide the ability to capture the events of the attack so that they can later be analyzed. Such an analysis can help prevent future attacks.

The Audit Interceptor pattern has the following consequences for developers:

  • Centralized, declarative auditing of service requests. The Audit Interceptor centralizes the auditing code within the application. This promotes reuse and maintainability.

  • Pre- and post-process audit handling of service requests. The Audit Interceptor enables developers to record audit events prior to a method call or after a method call. This is important when considering the business requirements. Auditing is often required prior to the service or method call as a form of recording an "attempt." In other cases, an audit event is required only after the outcome of the call has been decided. And finally, there are cases where an audit event is needed when the call raises an exception.

  • Auditing of services decoupled from the services themselves. The Audit Interceptor pattern decouples the business logic code from the auditing code. Business developers should not have to consider auditing requirements or implement code to support auditing. By using the Audit Interceptor, auditing can be achieved without impacting business developers.

  • Supports evolving requirements and increases maintainability. The Audit Interceptor supports evolving auditing requirements by decoupling the events that need to be audited from the implementation. An audit catalog can be created that defines audit events declaratively, thus allowing different event types for different circumstances to be added without changing code. This improves the overall maintainability of the code by reducing the number of changes to it.

  • Reduces performance. The cost of using an interceptor pattern is that performance is reduced anytime the interceptor is invoked. Even when the Audit Interceptor determines that a request or response does not require an audit event, the lookup itself adds overhead to the call.

Sample Code

Example 10-1 is sample source code for the AuditRequestMessageBean class. This class is responsible for pulling audit messages off the JMS queue, after the AuditLog class has placed them there, and writing them to a database using an AuditLogJdbcDAO class (not shown here). It is not reflected in the previous diagrams.

Example 10-1. AuditRequestMessageBean.java: AuditLog
package com.csp.audit;

import javax.ejb.*;
import javax.jms.*;

/**
 * @ejb.bean transaction-type="Container"
 *           acknowledge-mode="Auto-acknowledge"
 *           destination-type="javax.jms.Queue"
 *           subscription-durability="NonDurable"
 *           name="AuditRequestMessageBean"
 *           display-name="Audit Request Message Bean"
 *           jndi-name=
 *      "com.csp.audit.AuditRequestMessageBean"
 *
 * @ejb:transaction type="NotSupported"
 *
 * @message-driven
 *      destination-jndi-name="Audit_Request_Queue"
 *      connection-factory-jndi-name="Audit_JMS_Factory"
 */
public class AuditRequestMessageBean
      extends MessageDrivenBeanAdapter {

   public void onMessage(Message msg) {
      ObjectMessage objMsg = (ObjectMessage)msg;
      try {
         String message = (String)objMsg.getObject();
         JdbcDAOBase dao = (JdbcDAOBase)
            JdbcDAOFactory.getJdbcDAO(
               "com.csp.audit.AuditLogJdbcDAO");
         // The DAO is responsible for actually writing the
         // audit message in the database using the JDBC API.
         dao.executeUpdate(message);
      }
      catch (Exception ex) {
         System.out.println("Audit event write failed: " + ex);
      }
   }

   // Other EJB methods for the MessageDrivenBean interface
   public void ejbCreate() {
      System.out.println("ejbCreate called");
   }

   public void ejbRemove() {
      System.out.println("ejbRemove called");
   }

   public void setMessageDrivenContext(MessageDrivenContext context) {
      System.out.println("setMessageDrivenContext called");
      this.context = context;
   }
}

Example 10-2 lists the sample source code for the AuditClient class, which is responsible for placing audit event messages on a JMS queue for persisting later. This class is used by the AuditLog class.

Example 10-2. AuditClient.java: Helper class used by AuditInterceptor
package com.csp.audit;

import javax.naming.*;
import javax.jms.*;

public class AuditClient {

   private static String JMS_FACTORY_NAME = "Audit_JMS_Factory";
   private static String AUDIT_QUEUE_NAME = "Audit_Request_Queue";
   private static QueueSender queueSender = null;
   private static ObjectMessage objectMessage = null;

   // Initialize the JMS client:
   //  1. Look up the JMS connection factory
   //  2. Create a JMS connection
   //  3. Create a JMS session object
   //  4. Look up a JMS Queue and create a JMS sender
   synchronized static void init() throws Exception {
      Context ctx = new InitialContext();
      QueueConnectionFactory cfactory =
         (QueueConnectionFactory)ctx.lookup(JMS_FACTORY_NAME);
      QueueConnection queueConnection =
         cfactory.createQueueConnection();
      QueueSession queueSession =
         queueConnection.createQueueSession(
            false, javax.jms.Session.AUTO_ACKNOWLEDGE);
      Queue queue = (Queue)ctx.lookup(AUDIT_QUEUE_NAME);
      queueSender = queueSession.createSender(queue);
      objectMessage = queueSession.createObjectMessage();
   }

   // 5. Send the audit message to the queue
   public static void audit(String auditMessage) throws Exception {
      try {
         if (queueSender == null || objectMessage == null) {
            init();
         }
         objectMessage.setObject(auditMessage);
         queueSender.send(objectMessage);
      }
      catch (Exception ex) {
         System.out.println("Error sending audit event: " + ex);
         throw ex;
      }
   }
}

Security Factors and Risks

The Audit Interceptor pattern provides developers with a standard way of capturing and auditing events in a decoupled manner. Auditing is an essential part of any security architecture. Audit events enable administrators to capture key events that they can later use to reconstruct who did what and when in the system. This is useful in cases of a system crash or in tracking down an intruder if the system is compromised.

Business Tier

Auditing. The Audit Interceptor pattern is responsible for providing a mechanism to capture audit events using an Interceptor approach. It is independent of where the audit information gets stored or how it is retrieved. Therefore, it is necessary to understand the general issues relating to auditing. Typically, audit logs (whether flat files or databases) should be stored separately from the applications, preferably on another machine or even off-site. This prevents intruders from covering their tracks by doctoring or erasing the audit logs. Audit logs should be writable but not updateable, depending on the implementation.

Distributed Security

JMS. The Audit Interceptor pattern is responsible for auditing potentially hundreds or even thousands of events per second in high-throughput systems. In these cases, a scalable solution must be designed to accommodate the high volume of messages. Such a solution would involve dumping the messages onto a persistent JMS queue for asynchronous persistence. In this case, the JMS queue itself must be secured. This can be done by using a JMS product that supports message-level encryption or using some of the other strategies for securing JMS described in Chapter 5. Since the queue must be persistent, you will also need to find a product that supports a secure backing store.

Reality Check

What is the performance cost? The Audit Interceptor adds additional method calls and checks to the request. Using a JMS queue to asynchronously write the events reduces the impact to the end user by allowing the request to complete before the data is actually persisted. The trade-off would be to insert auditing code only where it is required. But given that requirements will change and that many areas require auditing, the benefits of decoupling and reduced maintenance outweigh the slight performance degradation.

Why not use Aspect Oriented Programming (AOP) techniques instead? AOP provides a new technique that reduces code complexity by consolidating code such as auditing, logging, and other functions that are spread across a variety of methods. It does this by inserting the (aspect) code into the methods either during the build process or through post-compile bytecode insertion. This makes it very useful when you require method-level auditing. The Audit Interceptor allows you to do service-level auditing. It can be as fine-grained as your Service Façade or other client allows, though usually not as fine-grained as AOP allows. The drawback to AOP is that it requires a third-party product and may introduce slight performance penalties, depending on the implementation.

Is auditing essential? In most cases, the answer is yes. It's essential not just for record-keeping, but for forensic analysis as well. You may not be able to detect, and most likely cannot diagnose, an attack if you do not maintain an audit log of events. The audit log can be used to detect brute-force password attacks, denial-of-service attacks, and many others.

Related Patterns

Intercepting Filter [CJP2]. The Audit Interceptor pattern is similar to the Intercepting Filter but is not as complex and is better suited for asynchronous writes.

Pipes and Filters [POSA1]. The Audit Interceptor pattern is closely related to the Pipes and Filters pattern.

Message Interceptor Gateway. It is often necessary to audit on the Web Services tier as well as the Business tier. In such cases, the Message Interceptor Gateway should employ the Audit Interceptor pattern.

Container Managed Security

Problem

You need a simple, standard way to enforce authentication and authorization in your J2EE applications and don't want to reinvent the wheel or write home-grown security code.

Using a Container Managed Security pattern, the container performs user authentication and authorization without requiring the developer to hard-wire security policies in the application code. It employs declarative security that requires the developer to only define roles at a desired level of granularity through deployment descriptors of the J2EE resources. The administrator or deployer then uses the container-provided tool to map the roles to the users and groups available in the realm at the time of deployment. A realm is a database of users and their profiles that includes at least usernames and passwords, but can also include role, group, and other pertinent attributes. The actual enforcement of authentication and authorization at runtime is handled by the container in which the application is deployed and is driven by the deployment descriptors. Most containers provide authentication mechanisms by configuring user realms for LDAP, RDBMS, UNIX, and Windows.

Declarative security can be supplemented by programmatic security in the application code that uses J2EE APIs to determine user identity and role membership and thereby enforce enhanced security. In cases where an application chooses not to use a J2EE container, configurable implementation of security similar to Container Managed Security can still be designed by using JAAS-based authentication providers and JAAS APIs for programmatic security.

Forces
  • You need to authenticate users and provide access control to business components.

  • You want a straightforward, declarative security model based on static mappings.

  • You want to prevent developers from bypassing security requirements and inadvertently exposing business functionality.

Solution

Use Container Managed Security to define application-level roles at development time and perform user-role mappings at deployment time or thereafter.

In a J2EE application, both ejb-jar.xml and web.xml deployment descriptors can define container-managed security. The J2EE security elements in the deployment descriptor declare only the logical roles as conceived by the developer. The application deployer maps these application domain logical roles to the deployment environment.

Container Managed Security at the Web tier uses delayed authentication, prompting the user for login only when a protected resource is accessed for the first time. On this tier, it can offer security for the whole application or specific parts of the application that are identified and differentiated by URL patterns. At the Enterprise Java Beans tier, Container Managed Security can offer method-level, fine-grained security or object-level, coarse-grained security.

Structure

Figure 10-5 depicts a generic class diagram for a Container Managed Security implementation. Note that the class diagram applies only to the container's implementation of Container Managed Security. The J2EE application developer would not implement such a class structure, because the container already implements it and offers it for use by the developer.

Figure 10-5. Container Managed Security class diagram


Participants and Responsibilities

Figure 10-6 depicts a sequence of operations involved in fulfilling a client request on a protected resource on the Web tier that uses an EJB component on the Business tier. Both tiers leverage Container Managed Security for authentication and access control.

Figure 10-6. Sequence diagram leveraging Container Managed Security


Client. A client sends a request to access a protected resource to perform a specific task.

Container. The container intercepts the request to acquire authentication credentials from the client and thereafter authenticates the client using the realm configured in the J2EE container for the application.

Protected Resource. The security policy of the protected resource is declared via the Deployment Descriptor. Upon authentication, the container uses the Deployment Descriptor information to verify whether the client is authorized to access the protected resource using the method (such as GET or POST) specified in the client request. If authorized, the request is forwarded to the protected resource for fulfillment.

Enterprise Java Bean. The protected resource in turn could use a Business tier Enterprise Java Bean that declares its own security policy via the ejb-jar.xml deployment descriptor. The security context of the client is propagated to the EJB container when the EJB method invocation is made. The EJB container intercepts the request to validate it against the security policy, much as the Web container did on the Web tier. If authorized, the EJB method is executed, fulfilling the client request. The results of the execution are then returned to the client.

Strategies

Container Managed Security can be used in the Web and Business tiers of a J2EE application, depending on whether a Web container, an EJB container, or both are used in an application. It can also be supplemented by Bean Managed/Programmatic Security for fine-grained implementations. The various scenarios are described in this section.

Web Tier Container Managed Security Strategy

In this strategy, security constraints are specified in the web.xml of the client-facing Web application (that is, the Web tier of the J2EE application). If this is the only security strategy used in the application, an assumption is made that the back-end Business tier is not directly exposed to the client for direct integration. The web.xml declares the authentication method via its <auth-method> element, mandating BASIC, DIGEST, FORM, or CLIENT-CERT authentication whenever authentication is required. It also declares authorization for protected resources that are identified and distinguished by their URL patterns. In this strategy, the actual enforcement of security is performed by the J2EE-compliant Web container.

Service Tier Container Managed Security Strategy

In this strategy, the developer configures the EJB's deployment descriptors to incorporate security into the service backbone of the application. A bean-specific security role reference is defined in the EJB's ejb-jar.xml through a <security-role-ref> element. These bean-specific logical roles can be associated, via a <role-link> element, with a security role defined under a different name in the <role-name> elements of the application deployment descriptor. The <assembly-descriptor> section of ejb-jar.xml, which serves as the application-level deployment descriptor, lists all the logical application-level roles via <role-name> elements, and these roles are mapped to the actual principals in the realm at the time of deployment.

Declarative security for EJBs can be either at the bean level or at a more granular method level. A <method-permission> element can declare one or more <role-name> elements that are allowed to access the Home and Remote interface methods identified by its <method> elements. An <exclude-list> element can also be declared to disable access to specific methods.

To specify an explicit identity that an EJB should use when it invokes methods on other EJBs, the developer can use <use-caller-identity> or <run-as>/<role-name> elements under the <security-identity> element of the deployment descriptor.
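As a sketch, the fragment below shows both options inside session bean declarations; the bean and role names are illustrative, not from the book's sample application.

```xml
<session>
   <ejb-name>ReportingService</ejb-name>
   ...
   <!-- Propagate the caller's identity unchanged to downstream EJBs. -->
   <security-identity>
      <use-caller-identity/>
   </security-identity>
</session>
<session>
   <ejb-name>BatchService</ejb-name>
   ...
   <!-- Or run as a specific logical role when invoking other EJBs. -->
   <security-identity>
      <run-as>
         <role-name>admin</role-name>
      </run-as>
   </security-identity>
</session>
```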

Container Managed Security in Conjunction with Programmatic Security

For finer granularity or to meet requirements unfulfilled by Container Managed Security, a developer could choose to use programmatic security in bean code or Web tier code in conjunction with Container Managed Security. For example, in the EJB code, the caller principal as a java.security.Principal instance can be obtained from the EJBContext.getCallerPrincipal() method. The EJBContext.isCallerInRole(String) method can determine if a caller is in a role that is declared with a <security-role-ref> element. Similarly, on the Web tier, HttpServletRequest.getUserPrincipal() returns a java.security.Principal object containing the name of the current authenticated user, and HttpServletRequest.isUserInRole(String) returns a Boolean indicating whether the authenticated user is included in the specified logical role. These APIs are very limited in scope and are confined to determining a user's identity and role membership.

This approach is useful where instance-level security is required, such as permitting only the admin role to perform account transfers exceeding a certain amount limit. A simple example is illustrated in Example 10-5 later in this chapter.

Note: EJB and Business Helper Classes

J2EE applications consist of a mixture of EJBs and business helper classes. EJBs are commonly used for remoting, transactionality, and state management. Business helper classes are used to provide utility or common functionality to the EJBs. The helper classes typically contain logic that is not called directly from a remote source, is not impacted by the context of a transaction, and does not maintain any state.

One of the benefits of J2EE is its robust security model. The container provides declarative and programmatic security to the Web tier resources and EJBs. Unfortunately, developers often do not design the Business tier with security in mind. They neglect to factor in the security considerations of separating business logic into EJBs and helper classes. The impact to security comes when helper classes are accessed from different EJBs in different contexts and contain business logic that should be protected by the container. The container is unable to enforce access control on the helper classes because it does not recognize them; it only recognizes EJBs. It is therefore critical that developers understand the security ramifications of breaking logic into business helper classes.

For performance reasons, it is hard to avoid using helper classes. EJBs have significant overhead; helper classes, which are plain old Java objects (POJOs), do not. Therefore, an architect or developer responsible for designing Business tier objects must understand how security is enforced by the container and what implications business helper classes pose. In cases where a business helper class is used by only one EJB, or in one security context, there is no concern. Business helper classes that are used by different EJBs in different security contexts allow their logic to be accessed in all of those contexts. This may or may not be desirable, but it must be understood.


Consequences

Container Managed Security offers flexible policy management at no additional cost to the organization. While it allows the developer to incorporate security in the application by way of simply defining roles in the deployment descriptor without writing any implementation code, it also supports programmatic security for fine-grained access control. The pattern offers the following other benefits to the developer:

  • Straightforward, declarative security model based on static mappings. The Container Managed Security pattern provides an easy-to-use and easy-to-understand security model based on declarative user-to-role and role-to-resource mappings.

  • Developers are prevented from bypassing security requirements and inadvertently exposing business functionality. Developers often deliberately or inadvertently bypass security mechanisms within the code. Using Container Managed Security prevents this and ensures that EJB methods are adequately protected and properly restricted at deployment time by the application deployer.

  • Less prone to security holes. Since security is implemented by a time-tested container, programming errors are less likely to lead to security holes. However, the security functionality offered by the container could be too limited and inflexible to modify.

  • Separation of security code from business objects. Since the container implements the security infrastructure, the application code is free of security logic. However, developers often end up starting with Container Managed Security and then using programmatic security in conjunction with it, which leads to mangled code with a mixture of declarative and programmatic security that is difficult to manage.

Sample Code

Sample code for each strategy described earlier is illustrated in this section. The samples could be used in conjunction with each other to implement multiple flavors of Container Managed Security.

Example 10-3 shows declarative security via a web.xml deployment descriptor.

Example 10-3. web.xml deployment descriptor
<web-app>
   ...
   <security-constraint>
      <display-name>App Sec Constraints</display-name>
      <web-resource-collection>
         <web-resource-name>
            System Admin Resources
         </web-resource-name>
         <url-pattern>/sysadmin/*</url-pattern>
         <http-method>GET</http-method>
         <http-method>POST</http-method>
      </web-resource-collection>
      <auth-constraint>
         <role-name>CORPORATEADMIN</role-name>
         <role-name>CLIENTADMIN</role-name>
      </auth-constraint>
      <user-data-constraint>
         <transport-guarantee>
            NONE
         </transport-guarantee>
      </user-data-constraint>
   </security-constraint>
   <!-- Declare login configuration here -->
   <login-config>
      <auth-method>FORM</auth-method>
      <form-login-config>
         <form-login-page>
            /login.jsp
         </form-login-page>
         <form-error-page>
            /login.jsp
         </form-error-page>
      </form-login-config>
   </login-config>
   <security-role>
      <description>Corporate Administrators</description>
      <role-name>CORPORATEADMIN</role-name>
   </security-role>
   <security-role>
      <description>Client Administrators</description>
      <role-name>CLIENTADMIN</role-name>
   </security-role>
   ...
</web-app>

Example 10-4 shows declarative security via an ejb-jar.xml deployment descriptor.

Example 10-4. ejb-jar.xml deployment descriptor
        ...
        <enterprise-beans>
           ...
           <session>
              <ejb-name>SecureServiceFacade</ejb-name>
              <ejb-class>SecureServiceFacade</ejb-class>
              ...
              <security-role-ref>
                 <role-name>
                    "admin_role_referenced_by_bean"
                 </role-name>
                 <role-link>
                    admin_role_depicted_in_assembly_descriptor
                 </role-link>
              </security-role-ref>
              ...
           </session>
        </enterprise-beans>
        ...
        <assembly-descriptor>
           <security-role>
              <description>
                 Security Role for Administrators
              </description>
              <role-name>
                 admin_role_depicted_in_assembly_descriptor
              </role-name>
           </security-role>
           ...
           <method-permission>
              <role-name>GUEST</role-name>
              <method>
                 <ejb-name>PublicUtilities</ejb-name>
                 <method-name>viewStatistics</method-name>
              </method>
           </method-permission>
           ...
           <exclude-list>
              <description>Unreleased Methods</description>
              <method>
                 <ejb-name>PublicUtilities</ejb-name>
                 <method-name>underConstruction</method-name>
              </method>
           </exclude-list>
           ...
        </assembly-descriptor>
        ...

Example 10-5 shows programmatic or bean-managed security in the bean code.

Example 10-5. EJB method employing programmatic security
   //...
   public void transfer(double amount, long fromAccount,
                        long toAccount) {
      if (amount > 1000000 &&
            !sessionContext.isCallerInRole("admin")) {
         throw new EJBException(
            sessionContext.getCallerPrincipal().getName() +
            " not allowed to transfer amounts exceeding " +
            1000000 + ".");
      }
      else {
         // perform transfer
      }
   }
   //...

Security Factors and Risks

The extent of security offered by this pattern is limited to the security mechanisms offered by the container where the application code is deployed. It is also constrained by the limited subset of security aspects covered in the J2EE specification. As a result, the pattern elicits several risks:

  • Limitations to fine-grained security. Use of Container Managed Security limits the ability of the application to incorporate fine-grained security such as that based on an object's run-time attribute values, time of day, and physical location of the client. These deficiencies could be overcome by programmatic security inside business components, but the security context information accessible to the component code is limited to principal information and the role association.

  • Requires preconceived granularity of roles. Container Managed Security necessitates a preestablished notion of roles at the granularity level required by the application for the foreseeable future. This is because roles need to be defined in the deployment descriptor for each Web tier resource, EJB tier business object, or business method before the application is packaged and deployed. Retrofitting additional roles after deployment would require repackaging the application with new deployment descriptors.

  • Too limiting. Container Managed Security of the J2EE specification omits many aspects of integration between the container and the existing security infrastructure and limits itself to authentication and role-based access control. This may be too limiting for certain requirements, making programmatic security inevitable.

Reality Check

Is Container Managed Security comprehensive at the Web tier? If the granularity of security enforcement is not matched by the granularity offered by the resource URL identifiers used by Container Managed Security to distinguish and differentiate resources, this pattern may not fulfill the requirements. This is particularly true in applications that use a single controller to front multiple resources. In such cases, the request URI would be the same for all resources, and individual resources would be identified only by way of some identifier in the query string (such as /myapp/controller?page=resource1). Container Managed Security by URL patterns is not applicable in such cases unless the container supports extensive use of regular expressions. Resource-level security in such scenarios requires additional work in the application.
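In the single-controller case, the application itself must map the resource identifier from the query string to the roles allowed to access it. The following is a minimal sketch of such a check; the class, resource, and role names are hypothetical, and in practice this logic would run in a servlet filter or in the controller itself:

```java
import java.util.*;

// Hypothetical resource-level guard for a single-controller application.
// Container Managed Security sees only /myapp/controller; the real resource
// is named in the query string, so the application must map it to roles.
public class ResourceGuard {
    // resource id -> roles allowed to access it (assumed configuration)
    private final Map<String, Set<String>> allowedRoles = new HashMap<>();

    public void allow(String resource, String... roles) {
        allowedRoles.put(resource, new HashSet<>(Arrays.asList(roles)));
    }

    // Returns true if any of the caller's roles may access the resource.
    public boolean isAllowed(String resource, Set<String> callerRoles) {
        Set<String> required = allowedRoles.get(resource);
        if (required == null) {
            return false; // deny unknown resources by default
        }
        for (String role : callerRoles) {
            if (required.contains(role)) {
                return true;
            }
        }
        return false;
    }
}
```

Note the deny-by-default choice: a resource absent from the map is refused, so forgetting to configure a new resource fails closed rather than open.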

Is Container Managed Security required at the service tier? If all the back-end business services are invariably fronted by a security gateway such as Secure Service Proxy or Secure Service Façade, additional security enforcement via Container Managed Security on EJBs may not add much value and may incur unnecessary performance overhead. The choice must be made carefully in such cases.

Related Patterns

Authentication Enforcer, Authorization Enforcer. Authentication Enforcer enforces authentication on a request that has not yet been authenticated, much like what a Container Managed Security implementation can enforce on a Web tier resource of a J2EE application. Similarly, Authorization Enforcer behaves like the Business tier implementation of Container Managed Security.

Secure Service Proxy. If security architecture was not planned in the initial phases of application development, utilization of Container Managed Security at later stages may seem chaotic. In such cases, Secure Service Proxy or Secure Service Façade can be used to offer a secure gateway exposed to the client that enforces security in lieu of such enforcement at the business service level.

Intercepting Web Agent. Rather than custom-building security via deployment descriptors and configuring the container as in Container Managed Security, one may delegate those tasks to a COTS product, with the application using an Intercepting Web Agent to preprocess the security context of requests before they are forwarded to and fulfilled by the security-unaware application services.

Dynamic Service Management

Problem

You need to dynamically instrument fine-grained components to manage and monitor your application with the necessary level of detail.

Management is an important, if overlooked, aspect of security. There is the monitoring aspect that security administrators use to detect intrusions and other anomalies caused by malicious activity. Then there is the active management aspect that empowers administrators to proactively prevent intrusions by modifying objects or invoking operations before an attack can conclude.

Consider a scenario where an intruder launches a denial-of-service (DoS) attack against an LDAP server that causes the application to time out and drop the connection to the LDAP server, thus preventing new users from logging in. In many implementations, the only remedy to this scenario would be to restart the application, causing logged-in users to be dropped and forced to log in again. Ideally, an administrator would want to be able to monitor the connection, detect that it is not responding, determine why, take steps to stop the DoS attack, and then invoke an operation on the object responsible for connecting to LDAP that forces it to reestablish the connection.

Another scenario involves an authenticated user who is exhibiting malicious activity in the application. Security administrators would like the ability to detect such activity, log the user out, and disable that user's account. All of this requires a level of instrumentation not available in most applications today.

Forces
  • You want to instrument POJO business objects that the container does not monitor for you.

  • You have many business objects and want to adjust instrumentation at runtime as needed to provide security monitoring and real-time forensic data gathering.

  • You want to monitor and actively manage business objects to tightly control security and proactively prevent attacks in progress.

  • You want to use industry-standard Java Management Extensions (JMX) technology to ensure a vendor-neutral solution.

Solution

Use a Dynamic Service Management pattern to enable fine-grained instrumentation of business objects at runtime on an as-needed basis using JMX.
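Before walking through the structure, the underlying JMX mechanics can be shown in a few lines. The following is a minimal sketch using a plain standard MBean and the platform MBeanServer (all names are hypothetical); the pattern itself replaces this static approach with Model MBeans created dynamically from an external descriptor, as described in the rest of this section.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// A minimal, self-contained JMX example (hypothetical names) showing the
// mechanics the pattern builds on: a management interface, an MBean
// registration, and attribute reads plus an operation invocation.
public class JmxSketch {

    // Standard MBean interface: must be public and named <Class>MBean.
    public interface ConnectionPoolMBean {
        int getActiveConnections();
        void reset();
    }

    // The managed business object.
    public static class ConnectionPool implements ConnectionPoolMBean {
        private int active = 3;
        public int getActiveConnections() { return active; }
        public void reset() { active = 0; } // e.g. force a reconnect
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("csp:service=ConnectionPool");
        server.registerMBean(new ConnectionPool(), name);
        // An operator (through an adaptor) could now read the attribute...
        System.out.println(server.getAttribute(name, "ActiveConnections"));
        // ...and invoke the operation to recover the pool.
        server.invoke(name, "reset", null, null);
        System.out.println(server.getAttribute(name, "ActiveConnections"));
    }
}
```

Running main prints the attribute value before and after the reset operation, illustrating both monitoring and active management of the same object.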

Structure

Figure 10-7 illustrates a Dynamic Service Management pattern class.

Figure 10-7. Dynamic Service Management class diagram


Participants and Responsibilities

Figure 10-8 is a sequence diagram of the Dynamic Service Management pattern.

Figure 10-8. Dynamic Service Management sequence diagram


Client. A Client requests registration of an object as an MBean from the ServiceManager.

ServiceManager. The ServiceManager creates an instance of the MBeanServer and obtains an instance of an MBeanFactory. ServiceManager instantiates the Registry and then uses the MBeanFactory to create an MBean for a particular object passed in by the Client. It creates an ObjectName for that object and then registers it with the MBeanServer.

MBeanServer. The MBeanServer exposes registered MBeans via adaptor-specific protocols.

MBeanFactory. The MBeanFactory creates the Registry and uses it to find managed MBean definitions, which it loads from the DescriptorStore.

Registry. The Registry loads and maintains a registry of MBean descriptors. It creates a Registry Monitor to monitor changes to the DescriptorStore and reloads the definitions when the RegistryMonitor notifies it that the DescriptorStore has changed.

RegistryMonitor. The RegistryMonitor is responsible for monitoring changes to the DescriptorStore. It registers listeners and notifies those listeners when it detects a change to the DescriptorStore.

DescriptorStore. The DescriptorStore is an abstract representation of a persistent store of MBean descriptor definitions.

Figure 10-8 shows the following sequence for registering an object as an MBean using the Dynamic Service Management pattern.

1. ServiceManager creates an instance of an MBeanServer.

2. ServiceManager calls getInstance on MBeanFactory.

3. MBeanFactory, upon initial creation, creates an instance of Registry.

4. Upon creation, Registry calls its loadRegistry method, which loads MBean descriptors from the DescriptorStore.

5. Registry then creates an instance of RegistryMonitor.

6. Registry then adds itself as a listener through a call to the RegistryMonitor's addListener method. On this call, the RegistryMonitor stores the listener and begins polling the DescriptorStore passed in as an argument to addListener.

Once this initialization is complete, objects are registered as follows:

1. Client invokes registerObject on ServiceManager.

2. ServiceManager invokes createMBean on MBeanFactory.

3. MBeanFactory calls findManagedBean on Registry.

4. ServiceManager then calls createObjectName on MBeanFactory.

5. Once the MBean and an ObjectName for it have been created, the ServiceManager invokes registerMBean on the MBeanServer, passing in the MBean instance and its corresponding ObjectName.

Strategies

The Dynamic Service Management pattern provides dynamic instrumentation of business objects using JMX. JMX is a commonly used technology, present in all major application server products. There are several strategies for implementing this pattern, depending on what product you choose and what type of persistent store you require for your MBean Descriptors.

Model MBean Strategy

This strategy involves using JMX Model MBean loaded from an external configuration source. Model MBeans allow developers to define the attributes and operations they want to expose on their classes through metadata. This metadata can then be externalized from the class definition entirely. With a bit of work, the metadata can be reloaded at runtime to allow for just-in-time creation of MBeans as needed.

The Jakarta Commons subproject of the Apache Software Foundation is focused on building open source, reusable Java components. One of the components of the Commons project is the Commons-Modeler. Commons-Modeler provides a framework for creating JMX Model MBeans that allows developers to circumvent creating the metadata programmatically (as described in the specification) and instead define that data in an XML descriptor file. This greatly reduces the amount of source code needed to create Model MBeans.
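As an illustration, a Commons-Modeler descriptor entry for a hypothetical LDAP connector component might look like the following. The component class, attribute, and operation names here are assumptions for illustration, not taken from the sample code:

```xml
<mbeans-descriptors>
  <mbean name="LdapConnector"
         className="org.apache.commons.modeler.BaseModelMBean"
         description="Manages the LDAP server connection"
         type="com.csp.service.LdapConnector">
    <attribute name="connected"
               description="Whether the LDAP connection is alive"
               type="boolean"
               writeable="false"/>
    <operation name="reconnect"
               description="Drop and re-establish the LDAP connection"
               impact="ACTION"
               returnType="void"/>
  </mbean>
</mbeans-descriptors>
```

Because this metadata lives outside the class, an operator can add or remove managed attributes and operations by editing the descriptor, without touching or recompiling the business object.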

The Model MBean Strategy utilizes the Commons-Modeler framework approach to simplify the task of creating MBeans and to leverage the file-based XML descriptor to implement dynamic reloading of MBeans based on changes to that descriptor file at runtime. This provides a mechanism that allows developers and operations staff to instrument components on an as-needed basis instead of incurring the run-time overhead of trying to instrument all of the components statically, most of which will never be used.

Figure 10-9 depicts a class diagram of a Dynamic Service Management pattern implemented using the Model MBean strategy.

Figure 10-9. Model MBean Strategy class diagram


Figure 10-10 is a sequence diagram of the Dynamic Service Management pattern implemented using the Model MBean strategy. In this strategy, the Commons-Modeler framework supplies the Registry implementation and provides an XML DTD for the MBeans descriptor file. The Registry does all the work of creating the MBean from the data in the descriptor file, which is the bulk of the work overall. A simple file monitor can be implemented to detect changes to the XML file, and the Registry can be told to reload from the changed file.

Figure 10-10. Model MBean Strategy sequence diagram
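The "simple file monitor" mentioned above can be sketched as a polling check of the descriptor file's timestamp. This is a minimal sketch that borrows the FileMonitor and FileChangeListener names used by the sample code later in this section; a production version would poll from a timer thread and report missing files rather than silently ignoring them:

```java
import java.io.File;
import java.util.*;

// A minimal polling file monitor. poll() would normally be driven by a
// background timer thread; it is exposed directly here for simplicity.
interface FileChangeListener {
    void fileChanged(String fileName);
}

class FileMonitor {
    private static final FileMonitor instance = new FileMonitor();
    private final Map<String, Long> lastModified = new HashMap<>();
    private final Map<String, List<FileChangeListener>> listeners = new HashMap<>();

    public static FileMonitor getInstance() { return instance; }

    public synchronized void addFileChangeListener(FileChangeListener l,
                                                   String fileName) {
        lastModified.putIfAbsent(fileName, new File(fileName).lastModified());
        listeners.computeIfAbsent(fileName, k -> new ArrayList<>()).add(l);
    }

    // Compare each file's current timestamp against the last one seen and
    // notify listeners of any file that has changed.
    public synchronized void poll() {
        for (Map.Entry<String, Long> e : lastModified.entrySet()) {
            long current = new File(e.getKey()).lastModified();
            if (current != e.getValue()) {
                e.setValue(current);
                for (FileChangeListener l : listeners.get(e.getKey())) {
                    l.fileChanged(e.getKey());
                }
            }
        }
    }
}
```

When the monitor fires, the MBeanFactory's fileChanged callback simply reloads the registry, which is what makes just-in-time instrumentation changes possible at runtime.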


Consequences

The Dynamic Service Management pattern helps to identify and mitigate several types of threats. Because operations staff can monitor business components, they can readily identify an attack in progress, whether it is a denial-of-service attack or somebody trying to guess passwords using a dictionary attack. The pattern also enables staff to manage those components so that they can take reactive action during an attack, such as filtering an incoming IP address or locking a user account. By employing the Dynamic Service Management pattern, developers can benefit from the following:

  • Instrumentation of POJO business objects. Using a Dynamic Service Management pattern provides a means to instrument POJOs so that their attributes and operations can be managed and monitored based on definitions defined in a descriptor file. This allows operations staff to probe down into the business objects themselves to troubleshoot or collect data.

  • Adjust instrumentation at runtime as needed. Today, systems are built with static management and monitoring capabilities. These capabilities incur a run-time cost in terms of performance, memory, and complexity. They also do not provide the ability to manage or monitor subsequent components or attributes at runtime as the needs arise. They require a large amount of upfront analysis and speculation to determine what to manage and monitor. The Dynamic Service Management pattern allows you to instrument thousands of business components on an as-needed basis.

  • Use industry-standard Java Management Extensions (JMX) technology. The Dynamic Service Management pattern can be used in conjunction with JMX to ensure that a completely vendor-independent management and monitoring solution can be implemented.

Using a Dynamic Service Management pattern eliminates the need for upfront analysis and needless run-time overhead from monitoring or exposing components and attributes unnecessarily. Instead, components and attributes can be dynamically instrumented at runtime on an as-needed basis. When the need no longer exists, the instrumentation can be turned off, freeing up cycles and memory for business processing.

Sample Code

Example 10-6 is a sample source listing of a Service Manager class.

Example 10-6. MBeanManager.java: MBeanManager implementation
package com.csp.management;

import java.util.Enumeration;
import java.util.Hashtable;

import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.modelmbean.ModelMBean;

import com.csp.logging.SecureLogger;
import com.sun.jdmk.comm.HtmlAdaptorServer;

public class MBeanManager implements ManagedObject {

   private static SecureLogger log =
      (SecureLogger) SecureLogger.getLogger();
   private static MBeanManager instance = null;
   private HtmlAdaptorServer htmlServer = null;
   private MBeanFactory factory = null;
   private Hashtable objNames = new Hashtable();
   private int port = 4545;
   private String mbeanDomain = "CSP";

   // Returns the singleton instance of MBeanManager.
   public static MBeanManager getInstance() {
      if (instance == null) {
         instance = new MBeanManager();
         try {
            instance.registerObject(instance, "CSPM");
         }
         catch (Exception e) {
            log.error("Unable to register mbean.", e);
         }
      }
      return instance;
   }

   // Create and initialize the MBeanManager.
   private MBeanManager() {
      init();
   }

   // Initializes the adaptors and servers.
   private void init() {
      try {
         htmlServer = new HtmlAdaptorServer();
         htmlServer.setPort(port);
         String htmlServiceName = "Adaptor:name=html,port=" + port;
         // Create the ObjectName to register with.
         ObjectName htmlObjectName = new ObjectName(htmlServiceName);
         objNames.put(htmlServiceName, htmlObjectName);
         htmlServer.start();
         // Load the MBean factory.
         factory = MBeanFactory.getInstance();
      }
      catch (Exception e) {
         log.error("Unable to initialize adaptors.", e);
      }
   }

   // Register a service object as an MBean.
   public void registerObject(Object service,
         String serviceName) throws Exception {
      ModelMBean mbean =
         factory.createMBean(service, serviceName);
      if (mbean == null) {
         return;
      }
      // Create the ObjectName.
      ObjectName objName = factory.createObjectName(
            mbeanDomain, service, serviceName);
      if (objName == null) {
         log.error("Could not create object name.");
         return;
      }
      // Get the MBeanServer (vendor-specific lookup, not shown).
      MBeanServer server = getMBeanServer();
      // Register the MBean with the server.
      server.registerMBean(mbean, objName);
      // Add the ObjectName to the list of names.
      objNames.put(serviceName, objName);
   }

   // Unregister an object as an MBean.
   public void unregisterObject(String serviceName) {
      try {
         if (serviceName != null && objNames != null) {
            // Remove the ObjectName from the list.
            ObjectName oName =
               (ObjectName) objNames.remove(serviceName);
            if (oName != null) {
               MBeanServer server = getMBeanServer();
               // Unregister the bean from the server.
               server.unregisterMBean(oName);
            }
         }
      }
      catch (Exception e) {
         log.error("Unable to unregister service.", e);
      }
   }

   // Reload the MBean descriptors from the registry.
   public void reloadMBeans() throws Exception {
      // Unload previously registered MBeans.
      unloadMBeans();
      // Tell the factory to reload new MBeans.
      factory.loadRegistry();
   }

   // Unload the MBeans.
   public void unloadMBeans() throws Exception {
      // Get a handle to all of our registered MBeans.
      Enumeration svcNames = objNames.keys();
      while (svcNames.hasMoreElements()) {
         String svc = (String) svcNames.nextElement();
         // Iterate through the list, unregistering each MBean.
         unregisterObject(svc);
      }
   }
}

Example 10-7 is a sample source code listing of an MBeanFactory class.

Example 10-7. MBeanFactory.java: MBean factory implementation
package com.csp.management;

import java.io.FileNotFoundException;
import java.io.InputStream;

import javax.management.ObjectName;
import javax.management.modelmbean.ModelMBean;

import org.apache.commons.modeler.ManagedBean;
import org.apache.commons.modeler.Registry;

import com.csp.logging.SecureLogger;
import com.csp.management.FileChangeListener;
import com.csp.management.FileMonitor;

/**
 * This class is responsible for creating, loading, and
 * reloading the MBean descriptor registry.
 */
public class MBeanFactory implements FileChangeListener {

   private static SecureLogger log =
      (SecureLogger) SecureLogger.getLogger();
   private static MBeanFactory instance = null;
   private Registry registry = null;
   private String registryFileName =
      "mbeans-descriptors.xml";

   // Private constructor.
   private MBeanFactory() {
      init();
   }

   // Initialization method for loading the MBean descriptor
   // registry and adding a file listener to detect changes.
   private void init() {
      loadRegistry();
      try {
         FileMonitor.getInstance().addFileChangeListener(
            this, registryFileName);
      }
      catch (FileNotFoundException fnfe) {
         log.error("Unable to add listener.");
      }
   }

   // Load the MBean descriptor registry.
   public void loadRegistry() {
      InputStream inputStream = null;
      try {
         inputStream = ClassLoader.getSystemClassLoader().
            getResourceAsStream(registryFileName);
         // Get the registry.
         registry = Registry.getRegistry(null, this);
         // Load the descriptors from the input stream.
         registry.loadDescriptors(inputStream);
      }
      catch (Exception e) {
         log.error("Unable to load file.", e);
      }
   }

   // Returns an MBeanFactory instance.
   public static MBeanFactory getInstance()
         throws Exception {
      if (instance == null) {
         instance = new MBeanFactory();
      }
      return instance;
   }

   // Create a ModelMBean given a service and name.
   public ModelMBean createMBean(Object service,
         String serviceName) throws Exception {
      ModelMBean mbean = null;
      // Create an MBean from the Registry.
      ManagedBean managed =
         registry.findManagedBean(serviceName);
      if (managed != null) {
         mbean = managed.createMBean(service);
      }
      return mbean;
   }

   // Create an ObjectName for a service.
   public ObjectName createObjectName(String domain,
         Object service, String serviceName)
         throws Exception {
      ObjectName oName = null;
      if (service instanceof ManagedObject) {
         ManagedObject svcImpl = (ManagedObject) service;
         // Set the JMX name to the input service name.
         svcImpl.setJMXName(serviceName);
         // Create the ObjectName.
         oName = new ObjectName(domain + ":Name=" +
                     svcImpl.getJMXName() + ",Type=" +
                     svcImpl.getJMXType());
      }
      else {
         oName = new ObjectName(domain + ":service=" +
            serviceName + ",className=" +
            service.getClass().getName());
      }
      return oName;
   }

   public String getRegistryFileName() {
      return this.registryFileName;
   }

   public void setRegistryFileName(String fileName) {
      this.registryFileName = fileName;
   }

   public void fileChanged(String fileName) {
      try {
         loadRegistry();
      }
      catch (Exception e) {
         log.error("Failed to reload registry.");
      }
   }
}

Security Factors and Risks

The following are some of the security factors and risks related to the Dynamic Service Management pattern:

  • Authentication. The Dynamic Service Management pattern allows operations staff to manage and monitor business components. This poses a security concern because unauthorized access to the components from outside the application could allow the application to be subverted. It is therefore necessary to implement a solution that requires proper authentication of users through the management interface.

  • Authorization. It is necessary to control access and enforce authorization of users using the management interface. An access-control model needs to be incorporated into the solution to ensure that users only have the capabilities necessary to their roles. There is probably a need for different levels of access, such as monitor-only capabilities versus management and monitoring capabilities. A robust model needs to be implemented in the management interface in the same way that it needs to be incorporated into the application interface.

  • Confidentiality. Communication via the management protocol needs to be secured to guarantee confidentiality.

  • Auditing. Management of business objects during runtime can pose serious security concerns. It is therefore necessary to audit all such activities so that security personnel can determine what management operations were performed on the application and by whom.

Reality Check

What types of things need to be managed and monitored? What should be managed and monitored is subjective and depends on the circumstances. The Dynamic Service Management pattern provides a means to transparently attach management and monitoring capabilities to business objects without prior consideration or elaboration of those objects. To be effective, however, developers must understand what the approach provides and design their business objects so that the JMX framework can take advantage of them. If business objects expose no member variables and accept only complex objects as method parameters, the framework will have little to manage or monitor.

Related Patterns

Secure Pipe. The Dynamic Service Management pattern makes use of the Secure Pipe pattern to provide confidentiality when communicating with the application via the management protocol.

Obfuscated Transfer Object

Problem

You need a way to protect critical data as it is passed within an application and between tiers.

Transfer Objects [CJP2] provide a mechanism for transporting data elements across tiers and components. This is an efficient means of moving large sets of data without invoking multiple getter or setter methods on remote objects across tiers. You probably use strategies such as Updateable Transfer Objects or Multiple Transfer Objects when implementing the Transfer Object pattern. In many cases you then find yourself passing Transfer Objects across multiple components. This leads to a security concern.

By passing data in Transfer Objects across components, you unnecessarily expose data to components that may not require or should not have access to it. Consider an application that stores credit card information in a user's profile. The application passes the profile using a profile transfer object. This profile transfer object passes through many business and presentation tier components on its way to being stored in the database. Many of those components are not privy to the sensitive nature of the credit card data in the profile transfer object. They may print all of the data in the transfer object for debugging purposes or write it to an audit log that is not supposed to expose sensitive data. You do not want to modify all of those components just to handle that data differently. Instead, you want the transfer object itself to take responsibility for protecting that data.

Forces
  • You want to protect sensitive data passed in Transfer Objects from being captured in console messages, log files, or audit logs.

  • You want the Transfer Object to be responsible for protecting the data in order to reduce code and prevent business components from inadvertently exposing sensitive data.

  • You want to specify which data elements are protected, since not all data is sensitive and some may need to be exposed.

Solution

Use an Obfuscated Transfer Object to protect access to data passed within and between tiers.

The Obfuscated Transfer Object allows developers to define data elements within it that are to be protected. The means of protection can vary between applications or implementations depending on the business requirements. The Obfuscated Transfer Object provides a way to prevent either purposeful or inadvertent unauthorized access to its data.

The producers and consumers of the data can agree upon the sensitive data elements that need to be protected and on their means of access. The Obfuscated Transfer Object will then take the responsibility of protecting that data from any intervening components that it is passed to on its way between producer and consumer. Credit card and other sensitive information can be protected from being accidentally dumped to a log file or audit trail, or worse, such as being captured and stored for malicious purposes.

Structure

Figure 10-11 is the class diagram for the Obfuscated Transfer Object.

Figure 10-11. Obfuscated Transfer Object class diagram


Participants and Responsibilities

Figure 10-12 shows the sequence diagram of the Obfuscated Transfer Object pattern.

Figure 10-12. Obfuscated Transfer Object sequence diagram


Client. The Client wants to send and receive data from a Target component via an intermediary Component. The Client can be any component in any tier.

Component. The Component is any application component in the message flow that is not the intended target of the Client. The Component can be any component in any tier that acts as an intermediary in the message flow between the Client and the Target.

Target. The Target is any object that is the intended recipient of the Client's request. It is responsible for setting the data that needs to be obfuscated.

Obfuscated Transfer Object. The ObfuscatedTransferObject is responsible for protecting access to data within it, as necessary.

Typically, the intermediary Component is not trusted or should not have access to any or all data in the Obfuscated Transfer Object. It then becomes the Obfuscated Transfer Object's responsibility to protect the data. The means of protection is dependent upon the business requirements and the level of trust of the intermediary components. Figure 10-12 takes us through a typical sequence of events for an application employing an ObfuscatedTransferObject.

1. Client creates an ObfuscatedTransferObject, setting the necessary request data.

2. Client serializes the ObfuscatedTransferObject and applies required obfuscation mechanisms.

3. Client sends the serialized ObfuscatedTransferObject to an intermediary Component.

4. The Component invokes toString on the ObfuscatedTransferObject and writes the output to a file. In this case, none of the protected data is listed in that output.

5. The Component sends the ObfuscatedTransferObject to the Target.

6. The Target retrieves the protected (obfuscated) data elements.

7. The Target sets new data that requires protection.

8. The Target logs the ObfuscatedTransferObject; again, the protected data is not output.

9. The Target returns the ObfuscatedTransferObject to the Client.

10. The Client retrieves the newly set data.

Strategies

A variety of strategies can be used to implement the Obfuscated Transfer Object. A simple strategy is to mask various data elements to prevent them from inadvertently being logged or displayed in an audit event. A more elaborate strategy is to encrypt the protected data within the Obfuscated Transfer Object. This entails a more complex implementation, but offers a higher degree of protection.

Masked List Strategy

Sensitive information like credit card numbers, Social Security numbers, and other personal information should not be stored in the system for security purposes. Since intermediary components within a request workflow may be unaware of the existence of such data in the Transfer Object, or do not know what data not to log, a simple Masked List Strategy prevents inadvertent storage or display of this data. Figure 10-13 shows a Masked List Strategy class diagram. Figure 10-14 shows a Masked List Strategy Sequence Diagram.

Figure 10-13. Masked List Strategy class diagram


Figure 10-14. Masked List Strategy sequence diagram


1.

The Client creates an ObfuscatedTransferObject, sets the data, and sends it to the Target component via an intermediary Component.

2.

The Component is any application component in the message flow that is not the intended target of the Client. The Component may log the ObfuscatedTransferObject.

3.

The Target is any object that is the intended recipient of the Client's request. It retrieves the data that needs to be obfuscated.

4.

The ObfuscatedTransferObject does not output data in the masked list. That data is only retrieved when specifically asked for.

In this strategy, the client sets data as name-value (NV) pairs in the Obfuscated Transfer Object. Internally, the Obfuscated Transfer Object maintains two maps: one holding NV pairs that should be obfuscated and another holding NV pairs that do not require obfuscation. In addition to the two maps, the Obfuscated Transfer Object contains a list of NV pair names that should be protected. Data passed in under a name that appears in the masked list is placed in the obfuscated map, which is then protected. In the sequence above, when the Component logs the ObfuscatedTransferObject, the data in the obfuscated map is not logged and is thus protected.
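The two-map bookkeeping described above can be sketched as follows. The class and method names here are hypothetical illustrations, not the book's implementation; the key point is that toString never renders the protected map:

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the Masked List Strategy: pairs whose names
// appear in the masked list go into a protected map that toString()
// never exposes, so inadvertent logging cannot leak them.
public class MaskedTransferObject implements Serializable {

    private final Set<String> maskedNames;                        // names that must be protected
    private final Map<String, Object> openData = new HashMap<>(); // safe to log
    private final Map<String, Object> maskedData = new HashMap<>(); // protected

    public MaskedTransferObject(Set<String> maskedNames) {
        this.maskedNames = maskedNames;
    }

    public void set(String name, Object value) {
        // Route the pair to the protected map if its name is masked.
        if (maskedNames.contains(name)) {
            maskedData.put(name, value);
        } else {
            openData.put(name, value);
        }
    }

    public Object get(String name) {
        // Protected data is returned only when asked for by name.
        Object value = openData.get(name);
        return (value != null) ? value : maskedData.get(name);
    }

    @Override
    public String toString() {
        // Only the unmasked map is ever rendered for logs or audits.
        return "MaskedTransferObject" + openData;
    }
}
```

An intermediary that calls toString to log this object sees only the open map; the target retrieves the sensitive values explicitly by name.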

Encryption Strategy

Using the Encryption Strategy for Obfuscated Transfer Object provides the highest level of protection for the data elements protected within. The data elements are stored in a Data Map, and then the Data Map as a whole is encrypted using a symmetric key. To retrieve the Data Map and the elements within it, the consumer must supply a symmetric key identical to the one used by the producer to seal the Data Map.

The Sun Java 2 Standard Edition (J2SE) runtime provides a SealedObject class that allows developers to easily encrypt objects by passing a Serializable object and a Cipher object to the constructor. The object can then be retrieved by passing in either an identical Cipher or a Key object that can be used to recreate the Cipher. This encapsulates all of the underlying work associated with encrypting and decrypting objects.
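A minimal round trip through SealedObject can be sketched as follows, assuming the default AES provider is available; the class name and roundTrip method are illustrative, not part of the pattern:

```java
import java.util.HashMap;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SealedObject;
import javax.crypto.SecretKey;

// Hypothetical sketch: seal a map of name-value pairs with a symmetric
// key, then recover it by supplying the same key to getObject.
public class SealedObjectDemo {

    public static String roundTrip(String ssn) throws Exception {
        // Producer and consumer must share this symmetric key.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        HashMap<String, String> data = new HashMap<>();
        data.put("ssn", ssn);

        // Seal: encrypt the serialized map with the symmetric key.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        SealedObject sealed = new SealedObject(data, cipher);

        // Unseal: passing the Key lets SealedObject recreate the Cipher.
        @SuppressWarnings("unchecked")
        HashMap<String, String> recovered =
                (HashMap<String, String>) sealed.getObject(key);
        return recovered.get("ssn");
    }
}
```

Until getObject is called with the correct key, an intermediary holding the SealedObject sees only ciphertext.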

The only issue remaining is the management of symmetric keys within the application. This poses a significant challenge because it requires the producers and consumers to share symmetric keys without providing any intermediary components with access to those keys. This may be simple or overwhelmingly complex depending on the architecture of the application and the structure of the component trust model. Use this strategy with caution, because the key-management issues may be harder to overcome than re-architecting the application to eliminate the need for the pattern.

Figure 10-15 is a sequence diagram illustrating the Encryption Strategy for an Obfuscated Transfer Object.

Figure 10-15. Encryption Strategy sequence diagram


1.

The Client creates an ObfuscatedTransferObject, sets the data, and sends it to the Target component via an intermediary Component.

2.

The Component is any application component in the message flow that is not the intended target of the Client. The Component may log the ObfuscatedTransferObject.

3.

The ObfuscatedTransferObject creates a Sealed object by encrypting the given serializable object using an encryption key. That data is only retrieved when the proper decryption key is supplied.

4.

The Target is any object that is the intended recipient of the Client's request. It retrieves the obfuscated data by supplying the key used to decrypt it.

The sequence diagram shown in Figure 10-15 illustrates implementation of the Obfuscated Transfer Object using an Encryption Strategy. The client creates the Obfuscated Transfer Object and adds the data as name-value pairs. The client then seals the data by passing in an encryption key. The intermediate components in the request flow are unable to access the data. The target object, upon receiving the Obfuscated Transfer Object, first unseals it by passing in the corresponding decryption key. It can then access the data as before, through the name-value pair keys.

Consequences

The Obfuscated Transfer Object protects against sniffing attacks and threats arising from log-file capture within the Business tier by ensuring that sensitive data is not passed or logged in the clear. By employing the Obfuscated Transfer Object pattern, the following consequences will apply:

  • Confidentiality: Generic protection of sensitive data passed in Transfer Objects. The Obfuscated Transfer Object provides a means of generically protecting data passed between components and tiers from being improperly accessed, whether inadvertently, such as for logging or auditing, or purposefully, in the case of untrusted intermediary components.

  • Centralized encryption or obfuscation code. The Obfuscated Transfer Object provides a central point for encrypting or obfuscating data that is passed in a Transfer Object. Moving the responsibility for protecting the data to the Transfer Object ensures that such code is then not required across all the components that use the Transfer Object.

  • Increased performance overhead. The code necessary to obfuscate or encrypt has associated memory and processing overhead. For large amounts of data, this overhead may significantly reduce overall performance.

  • Specify which data elements are protected and which are not. By using the Obfuscated Transfer Object, you can specify which data elements to protect and therefore only impact performance as necessary for security, which is better than alternative bulk encryption or obfuscation techniques.

Sample Code

Example 10-8 shows a sample listing of an Obfuscated Transfer Object implemented using an Encryption Strategy.

Example 10-8. Sample obfuscated TO using encryption implementation
package com.csp.business;

import java.io.Serializable;
import java.util.HashMap;
import javax.crypto.Cipher;
import javax.crypto.SealedObject;

public class GenericTO implements Serializable {

   static final long serialVersionUID = -5831612260903682186L;

   private HashMap map;
   private SealedObject sealedMap;

   /**
    * Default constructor that initializes the object.
    */
   public GenericTO() {
      map = new HashMap();
   }

   public void seal(Cipher cipher) throws Exception {
      sealedMap = new SealedObject(map, cipher);
      // Set the map to null so data can't be accessed.
      map = null;
   }

   public void unseal(Cipher cipher) throws Exception {
      map = (HashMap) sealedMap.getObject(cipher);
   }

   public Object getData(Object key) throws Exception {
      return map.get(key);
   }

   public void setData(Object key, Object data)
   throws Exception {
      map.put(key, data);
   }
}

Security Factors and Risks

Confidentiality. The Obfuscated Transfer Object pattern provides a means to ensure varying degrees of confidentiality for data passed within the application, such as between components, across asynchronous message boundaries, and between tiers. This is necessary for applications that have sensitive data that should not be accidentally logged or displayed, or where data is passed through intermediary components that are not trusted and should not have access to that data.

Reality Check

Should we use a Masked List Strategy or an Encryption Strategy? It depends on your requirements and whether you trust your intermediary components not to access the data in the masked list. Using a Masked List Strategy, a component could access the data and dump it to a log if it wished, circumventing the intention of the masked list. By using an Encryption Strategy, the intermediary components cannot gain access to the sensitive data unless they obtain the Cipher used to protect that data. There is significant processing overhead to encrypting and decrypting the data, so you should only use this strategy when necessary and only for the data elements that require it.

Related Patterns

Transfer Object [CJP2]. The Obfuscated Transfer Object is similar to, and may be considered a strategy of, the Core J2EE Patterns Transfer Object pattern. It provides the additional capability of protecting data elements within it from unauthorized access.

Data Transfer HashMap (Middleware). The Obfuscated Transfer Object is similar to the Data Transfer HashMap pattern from the Middleware Company. Like the Data Transfer HashMap, it employs a strategy that makes use of an underlying HashMap for storing and retrieving data elements. In the case of the Obfuscated Transfer Object, that underlying map may be encrypted using a Sealed Object or may be divided into two maps, one containing data that can be dumped to a log or audit table and another containing sensitive data that should not be accessed.

Policy Delegate

Problem

You want to shield clients from discovery and invocation details of security services and to control client interactions by intercepting and administering policy on client requests.

You need an abstraction between the enterprise security infrastructure and its clients, one that hides the intricacies of finding and invoking security services. It is desirable to abstract common framework-specific code related to invocation of those services, thus reducing the coupling between clients and the security framework. As a result of this loose coupling, clients and services can be easily replaced with alternate technologies when appropriate, increasing the lifespan of the application.

Forces
  • You want to reduce the coupling between the security framework and the client of security services offered by the framework and reduce the number of complex security interfaces exposed to the client in order to limit touchpoints that can give way to security holes.

  • You need to manage the life cycle of a client's security context at the server and want to use it across multiple invocations by the same client.

  • You need a way to centralize Business-tier security functions so that security can be enforced across business components without impacting business developers.

Solution

Use Policy Delegate to mediate requests between clients and security services, and to reduce the dependency of client code on implementation specifics of the service framework.

Policy Delegate is a coordinator of Business-tier security services that is akin to the Secure Base Action in the Web tier. The clients use the delegate to locate and mediate back-end security services. A delegate could in turn use a Secure Service Façade that offers a coarse-grained aggregate interface to fine-grained security services or business components and entities. This abstraction also offers a looser coupling and cleaner contract between clients and the secure services, reducing the magnitude of change required in the clients when the implementations of the security services change over time.

To use a delegate, the client need not be aware of the actual location of the service. A Policy Delegate uses a Service Locator to locate distributed security services. The client is unaware of the underlying implementation technology and the communication protocol of the service, which could be RMI, Web services, DCOM, CORBA, or another technology.

While coordinating and mediating requests and responses between clients and the security framework, a delegate could also perform pertinent message translation to accommodate disparate message formats and protocols both expected and supported by the clients and individual services. In the same vein, the delegate could choose to perform error translation to encapsulate service-level security exceptions as user-friendly, application-level error messages.

The Policy Delegate can be a stateless delegate or a stateful delegate. A stateful delegate, identified and looked up by an appropriate ID, can cache the security context, service references, and transient state between multiple invocations by the client. This caching at the server side optimizes and reduces the number of object creations, service lookups, and security computations. The security context could be cached as a Secure Session Object.

The clients can retrieve a security delegate using a Factory pattern [GoF]. This is particularly useful when the application exposes multiple Policy Delegates rather than one aggregate delegate that mediates between multiple services.
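A factory of this kind might be sketched as follows; the registry-based design and all names here are assumptions for illustration, since the book does not prescribe a factory implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical factory sketch: clients retrieve a Policy Delegate by a
// well-known service name rather than constructing one directly, which
// is useful when the application exposes multiple delegates.
public class PolicyDelegateFactory {

    public static final String AUTHENTICATION = "authentication";
    public static final String AUTHORIZATION = "authorization";

    private static final Map<String, Supplier<Object>> registry = new HashMap<>();

    // Register a delegate supplier under a service name.
    public static void register(String name, Supplier<Object> supplier) {
        registry.put(name, supplier);
    }

    // Look up and instantiate the delegate mapped to the requested service.
    public static Object getPolicyDelegate(String name) {
        Supplier<Object> supplier = registry.get(name);
        if (supplier == null) {
            throw new IllegalArgumentException("No delegate for " + name);
        }
        return supplier.get();
    }
}
```

A client then asks the factory for the delegate appropriate to the service it needs, without knowing which concrete delegate class is returned.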

Structure

Figure 10-16 shows a typical Policy Delegate class diagram. The Target in the diagram represents any security service, a Secure Service Façade, or a security-unaware Session Façade. The delegate uses a SecureSessionObject to maintain the transient state associated with a client session.

Figure 10-16. Policy Delegate pattern class diagram


The single PolicyDelegate could maintain a one-to-many relationship with multiple targets, or multiple Policy Delegates could each map exactly to one of the several possible targets. In the latter case, it could make use of a Factory that returns an appropriate delegate, depending on the requested service.

Participants and Responsibilities

Figure 10-17 depicts a scenario where a client uses a Policy Delegate retrieved from a Factory to invoke a security service on a SecureServiceFaçade, located using a Service Locator.

Figure 10-17. Policy Delegate sequence diagram


Client. A Client retrieves a PolicyDelegate through DelegateFactory to invoke a specific service.

PolicyDelegate. The PolicyDelegate uses ServiceLocator [CJP2] to locate the service.

SecureSessionObject. The PolicyDelegate maintains a SecureSessionObject to store transient client security context and service references between consecutive invocations by the same client.

SecureServiceFaçade, Service2. The back-end service could be implemented using any technology, such as a SecureServiceFaçade session bean or as a Web service depicted as Service2.

Strategies

The Policy Delegate pattern could be implemented in a variety of flavors depending on the magnitude of services it mediates and the approach to state management as discussed here.

  • One-to-many/one-to-one Policy Delegate. In a one-to-one Policy Delegate, a delegate takes the responsibility of controlling one specific service, resulting in as many delegates as there are back-end services. This is a more granular approach than a one-to-many Policy Delegate, where a delegate controls multiple services offering a unified aggregate interface to a client. Remote references could be lazily loaded in such a delegate to avoid unnecessary service lookups and object creations.

  • Stateless/stateful Policy Delegate. A stateful Policy Delegate maintains the state on the server side in a SecureSessionObject on behalf of the client. This is useful when clients are thin or are unaware of how security context must be preserved between invocations.

Consequences

The Policy Delegate pattern reduces the coupling between the security framework and the client of security services offered by the framework and thereby reduces the number of complex security interfaces exposed to the client. This has the overall effect of reducing complexity and therefore reducing potential software bugs that could lead to a variety of attacks. It also allows you to cache and manage the life cycle of a client's security context at the server and use it across multiple invocations by the same client, which enhances performance.

The Policy Delegate pattern benefits developers in the following ways:

  • Hides service complexity from client, exposes a cleaner interface. The client only needs to be aware of the input and output messages of the delegate and not any implementation specifics or invocation details of the complex security services.

  • Optimizes performance. By appropriate caching of the security context, the delegate could reduce repetitive computations associated with each individual request, thereby increasing the responsiveness of the system and scalability.

  • Performs message translation and error translation. Security exceptions that are hard to decipher for a nonsecurity-aware client can be translated by the delegate to user-friendly exceptions before passing them to the client.

  • Performs service discovery, failover, and recovery. Discovery and invocation details are abstracted to a central place, avoiding code duplication among clients. The delegate, being aware of the location of each security service, can also perform application-level failover and recovery from catastrophic errors.

Sample Code

Example 10-9 lists the interface of the Policy Delegate that serves as the contract between security framework and clients.

Example 10-9. Policy Delegate interface
package com.csp.business;

import com.csp.*;
import com.csp.interfaces.*;

public interface PolicyDelegateInterface {

  // Alternative 1: Declare service-specific methods
  public boolean authenticate(GenericTO request)
            throws AuthenticationFailureException;

  public boolean authorize(GenericTO request)
            throws AuthorizationFailureException;

  public SAMLMessage assertRequest(GenericTO request)
            throws ApplicationException;

  // ...

  // Alternative 2: Declare a generic method (execute) with
  // a generic transfer object as the input and output
  public GenericTO execute(String svcName, GenericTO input)
            throws ApplicationException;
}

Example 10-10 lists the implementation code of the Policy Delegate. The implementation code is not relevant to the client, which only relies on the Delegate Interface and a reference to the delegate.

Example 10-10. Sample Policy Delegate implementation code
package com.csp.business;

import com.csp.*;
import com.csp.interfaces.*;

public class PolicyDelegate implements PolicyDelegateInterface {

   private AuthenticationEnforcer authenticationEnforcer;
   private AuthorizationEnforcer authorizationEnforcer;
   private SecureSessionManager secureSessionManager;
   private SecureLogger secureLogger;
   private SecureServiceFacade secureServiceFacade;
   private RequestContext rc;

   // Manage the life cycle of the delegate
   public PolicyDelegate(RequestContext rc) {
      this.rc = rc;
      init(rc);
   }

   private void init(RequestContext rc) {
      // Look up and keep references to security
      // services/session facades/session beans...
      try {
         authenticationEnforcer = ServiceLocator.lookup(
               AuthenticationEnforcer.SERVICE_NAME);
         authorizationEnforcer = ServiceLocator.lookup(
               AuthorizationEnforcer.SERVICE_NAME);
         secureSessionManager = ServiceLocator.lookup(
               SecureSessionManager.SERVICE_NAME);
         secureLogger = ServiceLocator.lookup(
               SecureLogger.SERVICE_NAME);
         //...
         secureServiceFacade = ServiceLocator.lookup(
               SecureServiceFacade.SERVICE_NAME);
      } catch (Exception e) {
         throw new ApplicationException(e);
      }
   }

   public void destroy() {
      secureSessionManager.invalidate(rc);
   }

   // Implement the delegate methods

   // Alternative 1: Declare service-specific methods
   public boolean authenticate(GenericTO request)
         throws AuthenticationFailureException {
      try {
         // Return the result of authentication
         return authenticationEnforcer.authenticate(request);
      } catch (SecurityFrameworkException e) {
         throw new AuthenticationFailureException(e);
      }
   }

   // Authorize the request.
   public boolean authorize(GenericTO request)
         throws AuthorizationFailureException {
      try {
         // Check that the request is authenticated
         if (!request.authenticated()) {
            if (!authenticationEnforcer.authenticate(request))
               throw new AuthorizationFailureException(
                  new AuthenticationFailureException());
         }
         // Return the result of authorization
         return authorizationEnforcer.authorize(request);
      } catch (SecurityFrameworkException e) {
         throw new AuthorizationFailureException(e);
      }
   }

   // Alternative 2: Implement a generic method with a generic
   // transfer object as input and output
   public GenericTO execute(String serviceName, GenericTO input)
         throws ApplicationException {
      // Validate the request as per security policy
      if (!input.authenticated()) {
         if (!authenticationEnforcer.authenticate(input))
            throw new AuthenticationFailureException();
      }
      if (!input.authorized()) {
         if (!authorizationEnforcer.authorize(input))
            throw new AuthorizationFailureException();
      }
      // Process the request
      GenericService service = ServiceLocator.lookup(serviceName);
      return service.execute(input);
   }
}

Example 10-11 lists a sample client code that uses a Policy Delegate.

Example 10-11. Client code using Policy Delegate
// ...
try {
   // Get a dynamic proxy from the factory.
   // This proxy will contain a populated GTO and overlay
   // the appropriate interface on top of it.
   PolicyDelegateInterface request =
      PolicyDelegateFactory.getPolicyDelegate(
            PolicyDelegateFactory.AUTHENTICATION_ENFORCER);

   // This proxy is specific to Authentication and has
   // a method to retrieve the underlying GTO instance.

   // Retrieve the BusinessDelegate using the standard
   // technique outlined in the Core J2EE Patterns book
   // [CJP2].
   // BusinessDelegate delegate = ...

   GenericTO results = delegate.execute(
                         request.getGenericTO());

   // ... Do something with the results. You can use the
   // dynamic proxy to apply the appropriate interface.
} catch (ApplicationException e) {
   e.printStackTrace();
}
// ...

Security Factors and Risks

The Policy Delegate simply acts as a central controller of security invocations. It is intended to be a helper class that provides seamless access to security functionality exposed in the Business tier. It eliminates the risks usually associated with business developers attempting to implement or integrate with security services. The fewer the security touchpoints, the less potential for security holes. The Policy Delegate addresses the following security factors:

  • Authentication. The Policy Delegate performs authentication through an Authentication Enforcer.

  • Authorization. The Policy Delegate performs authorization of requests through an Authorization Enforcer.

  • Logging. The Policy Delegate uses a Secure Logger to securely log request events.

  • Validation. The Policy Delegate may validate request data via an Intercepting Validator.

  • Confidentiality. The Policy Delegate relies on the underlying subsystems to provide confidentiality and data integrity.

Reality Check

Is Policy Delegate redundant? If the Web tier is already integrated with back-end security services in an implementation-specific manner without using Policy Delegate but using a Secure Service Façade, adding a Policy Delegate at that stage may not offer any benefit and will only cause rework. A thoughtful, careful design could avoid such scenarios.

Is the Policy Delegate interface too complex? If Policy Delegate usage becomes too complicated and requires too much knowledge of the underlying security framework by clients, it defeats the purpose of abstracting the complex logic in a simple helper as described in this pattern.

Related Patterns

Secure Base Action. Secure Base Action on the Web tier has a similar objective as the Policy Delegate on the Business tier. A Secure Base Action could in turn use a Policy Delegate to access security services.

Business Delegate [CJP2]. A Policy Delegate is similar to the Business Delegate pattern, but leverages other patterns discussed in this book related to security. Policy Delegate additionally makes use of a SecureSessionObject to protect the confidentiality and integrity of a client session.

Secure Service Façade

Problem

You need a secure gateway that mandates and governs security on client requests, exposing a uniform, coarse-grained service interface over fine-grained, loosely coupled business services and mediating client requests to the appropriate services.

Having more access points in the Business tier leads to more opportunities for security holes. Every access point is then required to enforce all security requirements, from authentication and authorization to data validation and auditing. This becomes exacerbated in applications that have existing Business-tier services that are not secured.

Retrofitting security to security-unaware services is often difficult. Clients must not be made aware of the disparities between service implementations in terms of security requirements, message specifications, and other service-specific attributes. Offering a unified interface that couples the otherwise decoupled business services makes the design more comprehensible to clients and reduces the work involved in fulfilling client requests.

Forces
  • You want to off-load security implementations from individual service components and perform them in a centralized fashion so that security developers can focus on security implementation and business developers can focus on business components.

  • You want to impose and administer security rules on client requests that the service implementers are unaware of in order to ensure that authentication, authorization, validation, and auditing are properly performed on all services.

  • You want a framework to manage the life cycle of the security context between interactive service invocations by clients and to propagate the security context to appropriate servers where the services are implemented.

  • You want to reduce the coupling between fine-grained services but expose a unified aggregation of such services to the client through a simple interface that hides the complexities of interaction between individual services while enforcing all of the overall security requirements of each service.

  • You want to minimize the message exchange between the client and the services, storing the intermittent state and context on the server on behalf of the client instead.

Solution

Use a Secure Service Façade to mediate and centralize complex interactions between business components under a secure session.

Use a Secure Service Façade to integrate fine-grained, security-unaware service implementations and offer a unified, security-enabled interface to clients. The Secure Service Façade acts as a gateway where client requests are securely validated and routed to the appropriate service implementations, often maintaining and mediating the security and workflow context between interactive client requests and between fine-grained services that fulfill portions of the client requests.

Structure

Figure 10-18 illustrates a Secure Service Façade class diagram. The Façade is the endpoint exposed to the client and could be implemented as a stateful session bean or a servlet endpoint. It uses the security framework (implemented using other patterns) to perform security-related tasks applicable to the client request. The framework may request the client to present further credentials if the requested service mandates doing so and if those credentials were not found in the initial client request. The Façade then uses the Dynamic Service Management pattern to locate the appropriate service-provider implementations. The request is then forwarded to the individual services either sequentially, in parallel, or in any complex relationship order as specified in the request description.

Figure 10-18. Secure Service Façade class diagram


If the client request represents an aggregation of fine-grained services, the return messages from previous sequential service invocations can be aggregated and delivered to the subsequent service to achieve a sequential workflow-like implementation. If those fine-grained services are independent of each other, then they can be invoked in parallel and the results can be aggregated before delivering to the client, thus achieving parallel processing of the client request.
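The sequential aggregation described above can be sketched as follows, with hypothetical names; each service receives the original request plus the results accumulated by the services before it:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical fine-grained service: consumes the accumulated context
// and returns its own contribution to the result.
interface FineGrainedService {
    Map<String, Object> execute(Map<String, Object> context);
}

// Hypothetical façade sketch: invokes services in sequence, aggregating
// each service's output into the context handed to the next one.
public class SequentialFacade {

    private final List<FineGrainedService> services = new ArrayList<>();

    public void addService(FineGrainedService service) {
        services.add(service);
    }

    public Map<String, Object> process(Map<String, Object> request) {
        // Each service sees the original request plus all prior results.
        Map<String, Object> context = new HashMap<>(request);
        for (FineGrainedService service : services) {
            context.putAll(service.execute(context));
        }
        return context;
    }
}
```

For independent services, the loop could instead dispatch the same request to all services in parallel and merge the results before returning them to the client.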

Participants and Responsibilities

Figure 10-19 depicts a sequence diagram for a typical Secure Service Façade implementation that corresponds to the structure description in the preceding section.

Figure 10-19. Secure Service Façade sequence diagram


  • Client. A client sends a request to perform a specific task, with the appropriate service descriptors, to the Secure Service Façade, optionally incorporating the decision-tree predicates that determine the sequence of services to be invoked.

  • The Secure Service Façade deciphers the client request, verifies authentication, fulfills the request, and returns the results to the client. In doing so, it may use the following components:

    - Security Framework. The façade uses the existing enterprise-wide security framework implemented using other security patterns discussed in this book. Such a framework can be leveraged for authentication, authorization and access control, security assertions, trust management, and so forth. If the request is missing any credentials, the client request could be terminated or the client could be asked to furnish further credentials.

    - Dynamic Service Framework/Service Locator. The façade uses the Dynamic Service Framework or Service Locator to locate the services that are involved in fulfilling the request. The services could reside on the same host or be distributed throughout an enterprise. In either case, the façade ensures that the security context established using the security framework is correctly propagated to any service that expects such security attributes. The façade then establishes the execution logic and invokes each service in the correct order.

The fine-grained business services are not directly exposed to the client. The services themselves maintain loose coupling between each other and the façade. The façade takes the responsibility of unifying the individual services in the context of the client request. The service façade contains no business logic itself and therefore requires no protection.

Strategies

The Secure Service Façade manages the complex relationships between disparate participating business services, plugs in security to request fulfillment, and provides a high-level, coarse-grained abstraction to the client. The nature of such tasks opens up multiple choices for implementation flavors, two of which are briefly discussed now.

  • Façade with static relationships between individual service components. The relationship between participating fine-grained services is permanently static in nature. In such cases, the façade can be represented by an interface that corresponds to the aggregate of the services and can be implemented by a session bean that implements the interface. The session bean life cycle method Create can preprocess the request for security validations.

  • Façade with dynamic, transient relationships between individual service components. When the sequence of service calls to be invoked by the façade is dependent upon the prior invocation history in the execution sequence, the decision predicates can be specified in the request semantics and used in the façade implementations to determine the next service to be invoked. Such an implementation can be highly dynamic in nature, and the decision predicates can incorporate security class and compartment information to enable multilevel security in the façade implementation. A different flavor can use a simple interface in the façade, such as a command pattern implementation, and can mandate that the service descriptors be specified in the request message. This allows new services to be plugged-and-played without requiring changes to the façade interface and is widely used in Web services.
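The command-pattern flavor mentioned in the last bullet might be sketched as follows; the descriptor-keyed registry and all names are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical command: one fine-grained service behind the façade.
interface ServiceCommand {
    String execute(String payload);
}

// Hypothetical command-style façade: services register under the
// descriptor carried in the request message, so new services can be
// plugged in without changing the façade interface.
public class CommandFacade {

    private final Map<String, ServiceCommand> services = new HashMap<>();

    public void register(String descriptor, ServiceCommand command) {
        services.put(descriptor, command);
    }

    public String handle(String descriptor, String payload) {
        ServiceCommand command = services.get(descriptor);
        if (command == null) {
            throw new IllegalArgumentException("Unknown service: " + descriptor);
        }
        // A real façade would apply authentication, authorization, and
        // validation here before dispatching to the service.
        return command.execute(payload);
    }
}
```

Because clients address services only by descriptor, the façade's interface stays stable as services are added or replaced.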

Consequences

The Secure Service Façade pattern protects the Business-tier services and business objects from attacks that circumvent the Web tier or Web Services tier. The Web tier and the Web Services tier are responsible for upfront authentication and access control. An attacker who has penetrated the network perimeter could circumvent these tiers and access the Business tier directly. The Secure Service Façade is responsible for protecting the Business tier by enforcing the security mechanisms established by the Web and Web Services tiers. By employing the Secure Service Façade pattern, developers and clients can benefit in the following ways:

  • Exposes a simplified, unified interface to a client. The Secure Service Façade shields the client from the complex interactions between the participating services by providing a single unified interface for service invocation. This brings the advantages of loose coupling between clients and fine-grained business services, centralized mediation, easier management, and reduces the risks of change management.

  • Off-loads security validations from lightweight services. Participating business services in a façade may be too lightweight to define security policies and incorporate security processing. Secure Service Façade off-loads such responsibility from the business services and offers centralized policy management and administration of security processing tasks, thereby reducing code duplication and processing redundancies.

  • Centralizes policy administration. The centralized nature of the Secure Service Façade eases security policy administration by isolating it to a single location. Such centralization also makes it feasible to retrofit infrastructure security to otherwise security-unaware or existing services.

  • Centralizes transaction management and incorporates security attributes. As with a generic session façade, a Secure Service Façade allows applying distributed transaction management over individual transactions of the participating services. Since security attributes are accessible at the same place, transaction management can incorporate such security attributes, offering multilevel, security-driven transaction management.

  • Facilitates dynamic, rule-based service integration and invocation. As explained in the preceding "Strategies" section, multiple flavors of façade implementations offer a very dynamic and flexible integration of business services. Integration rules can incorporate security and message attributes in order to dynamically determine execution sequence. An external Business Rules Engine can also be plugged into such a dynamic façade.

  • Minimizes message exchange between client and services. Secure Service Façade minimizes message exchange by caching the intermediate state and context on the server rather than on the client.

Sample Code

The sample code that follows illustrates a Stateful Session Bean approach to a Secure Service Façade implementation. Example 10-12 and Example 10-13 show the home and remote interfaces to the Façade Session bean.

Example 10-12. SecureServiceFaçade home interface
package com.csp.business;

import java.rmi.*;
import javax.ejb.*;
import com.csp.*;

public interface SecureServiceFacadeHome extends EJBHome {
   public SecureServiceFacade create(SecurityContext ctx)
      throws RemoteException, CreateException;
}

Example 10-13. SecureServiceFaçade remote interface
package com.csp.business;

import java.rmi.*;
import javax.ejb.*;
import com.csp.*;

public interface SecureServiceFacade extends EJBObject {
   public TransferObject execute(SecureMessage msg)
      throws RemoteException;
}

Example 10-14 lists sample bean implementation code. The important item to notice is that the SecurityContext object is maintained as a state variable in the stateful session bean in order to facilitate propagation of the context to any individual service that expects it. The SecureMessage encapsulates the aggregate service description of the client request and is used to locate the appropriate services and optionally establish a dynamic sequence of participating service executions.

Example 10-14. SecureServiceFaçadeSessionBean.java sample implementation
package com.csp.business;

import java.rmi.*;
import javax.ejb.*;
import javax.naming.*;
import java.util.*;
import com.csp.*;

public class SecureServiceFacadeSessionBean implements SessionBean {

   private SessionContext context;
   private SecurityContext securityContext;

   // Remote references for the individual services
   // can be encapsulated as facade attributes
   // or made part of the message
   private Map services = new HashMap();

   // Create the facade and initialize the security context
   public void ejbCreate(SecurityContext ctx)
        throws CreateException {
      securityContext = ctx;
   }

   // Locate the requested service, cache it for
   // prospective future use and stickiness,
   // and delegate the request to it
   public TransferObject execute(SecureMessage msg)
      throws SecureServiceFacadeException,
         ServiceLocatorException {
      SecureService svc = ServiceLocator.getService(
         msg.getRequestedServiceName());
      services.put(msg.getRequestedServiceName(), svc);
      return svc.execute(msg);
   }

   // ...
   // Other lifecycle methods
   public void ejbActivate() { ... }
   public void ejbPassivate() { ... }
   public void setSessionContext(SessionContext ctx) { ... }
   public void ejbRemove() { ... }
}

Security Factors and Risks

The Secure Service Façade pattern is susceptible to code bloating if too much interaction logic is incorporated. However, this can be minimized by appropriate design of the façade using other common design patterns. As the gateway into the Business tier, the Secure Service Façade serves to limit the touchpoints between the Web and Web Services tiers and the Business tier. This means that there are fewer entry points that need to be secured and therefore fewer opportunities for security holes to be introduced.

The following security factors are addressed by the Secure Service Façade:

  • Authentication. The Secure Service Façade pattern authenticates requests coming into the Business tier. This is often necessary when clients connect directly to the Business tier through a remote interface or in cases where the Web tier cannot be trusted to perform authentication appropriately for the Business tier.

  • Auditing. The Secure Service Façade enables developers to insert auditing at the entry and exit points of the Business tier. This enables them to put an Audit Interceptor pattern, discussed earlier in this chapter, in place and decouple auditing from business logic while ensuring that no requests can be initiated without first being audited.

Reality Check

Does the Secure Service Façade need to incorporate security? The Secure Service Façade uses the existing security framework while aggregating fine-grained services. However, security context validation may not be required if other means of authentication and access control are appropriately enforced on the client request before it reaches the façade.

Does the Secure Service Façade need to perform service aggregation? If the client requests will mostly be fulfilled by a single, fine-grained service component, there is no necessity for aggregation. In such cases, Secure Service Proxy may well suit the purpose.

Does the Secure Service Façade reduce security code duplication? If security context validation is performed by each service component, the validation at the façade level may turn out to be redundant and wasteful. A planned design could reduce such duplication.

Related Patterns

Secure Service Proxy. Secure Service Proxy, implemented as a Web service endpoint, acts as a mediator between the clients and the J2EE components with a one-on-one mapping between proxy methods and remote methods of J2EE components. Secure Service Façade, on the other hand, maintains complex relationships between participating services and exposes an aggregated uniform interface to the client.

Session Façade. The Secure Service Façade and the generic Session Façade [CJP2] offer the same benefits with respect to business object integration and aggregation. However, Secure Service Façade does not require that the participating components are EJBs. The participating services could use any framework and the façade would incorporate the appropriate invocation logic to use those services. In addition, Secure Service Façade emphasizes the security context life cycle management and its propagation to appropriate services.

Secure Session Object

Problem

You need to facilitate distributed access and seamless propagation of security context and client sessions in a platform-independent and location-independent manner.

A multi-user, multi-application distributed system needs a mechanism to allow global accessibility to the security context associated with a client session and secure transmission of the context among the distributed applications, each with its own address space. While many choices are possible, the developer must design a standardized structure and interface to the security context. The security context propagation is essential within the application because it is the sole means of allowing different components within the application to verify that authentication and access control have been properly enforced. Otherwise, each component would need to enforce security and the user would wind up authenticating on each request. The Secure Session Object pattern serves this purpose.

Forces
  • You want to define a data structure for the security context that comprises authentication and authorization credentials so that application components can validate those credentials.

  • You want to define a token that can uniquely identify the security context to be shared between applications to retrieve the context, thereby enabling single sign-on between applications.

  • You want to abstract vendor-specific session management and distribution implementations.

  • You want to securely transmit the security context across virtual machines and address spaces when desired in order to retain the client's credentials outside of the initial request thread.

Solution

Use a Secure Session Object to provide an abstract encapsulation of authentication and authorization credentials that can be passed across boundaries.

You often need to persist session data within a single session or between user sessions that span an indeterminate period of time. In a typical Web application, you could use cookies and URL rewriting to achieve session persistence, but there are security, performance, and network-utilization implications of doing so. Applications that store sensitive data in the session are often compelled to protect such data and prevent potential misuse by malicious code (a Trojan horse) or a user (a hacker). Malicious code could use reflection to retrieve private members of an object. Hackers could sniff the serialized session object while in transit and misuse the data. Developers could unknowingly use debug statements to print sensitive data in log files. Secure Session Object can ensure that sensitive information is not inadvertently exposed.
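One simple defense against the serialization leaks described above can be sketched in plain Java by marking the sensitive member transient, so it is never written to a byte stream a hacker might sniff. The class and field names here are hypothetical, not from the book:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical session data holder: the credential is transient, so it is
// excluded from the serialized form that travels over the wire.
public class SessionData implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String userId;
    private transient String password; // never written to the stream

    public SessionData(String userId, String password) {
        this.userId = userId;
        this.password = password;
    }

    public String getUserId() { return userId; }
    public String getPassword() { return password; }

    // Serialize and deserialize, purely to demonstrate the behavior.
    public static SessionData roundTrip(SessionData in) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(in);
            oos.flush();
            ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            return (SessionData) ois.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```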

The Secure Session Object provides a means of encapsulating authentication and authorization information such as credentials, roles, and privileges, and using them for secure transport. This allows components across tiers or asynchronous messaging systems to verify that the originator of the request is authenticated and authorized for that particular service. It is intended to serve as an abstract mechanism that encapsulates vendor-specific implementations. A Secure Session Object is an ideal way to share and transmit global security information associated with a client.
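As an illustration of such an encapsulation, the following plain-Java sketch (all names assumed; a real implementation would wrap a vendor-specific context) bundles the principal, granted roles, and an expiry so downstream components can validate rather than re-authenticate:

```java
import java.io.Serializable;
import java.util.Set;

// Assumed names throughout: a serializable security context carrying the
// authenticated principal, granted roles, and an expiry timestamp.
public class SecureSessionObject implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String principal;
    private final Set<String> roles;
    private final long expiresAtMillis;

    public SecureSessionObject(String principal, Set<String> roles,
                               long expiresAtMillis) {
        this.principal = principal;
        this.roles = roles;
        this.expiresAtMillis = expiresAtMillis;
    }

    public String getPrincipal() { return principal; }

    // Coarse-grained check: has the session expired?
    public boolean isValid() {
        return System.currentTimeMillis() < expiresAtMillis;
    }

    // Fine-grained check: does a live session hold a given role?
    public boolean hasRole(String role) {
        return isValid() && roles.contains(role);
    }
}
```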

Structure

Figure 10-20 is a class diagram of the Secure Session Object.

Figure 10-20. Secure Session Object class diagram


Participants and Responsibilities

Figure 10-21 contains the sequence diagram and illustrates the interactions of the Secure Session Object.

Figure 10-21. Secure Session Object sequence diagram


Client. The Client sends a request to a Target resource. The Client receives a SecureSessionObject and stores it for submitting in subsequent requests.

SecureSessionObject. SecureSessionObject stores information regarding the client and its session, which can be validated by consumers to establish authentication and authorization of that client.

Target. The Target creates a SecureSessionObject. It then verifies the SecureSessionObject passed in on subsequent requests.

The Secure Session Object is implemented through the following steps:

1. Client accesses a Target resource.

2. Target creates a SecureSessionObject.

3. Target serializes the SecureSessionObject and returns it in the response.

4. Client needs to access Target again and serializes the SecureSessionObject from the last request.

5. Client accesses Target, passing the SecureSessionObject created in response to the previous request.

6. Target receives the request and verifies the SecureSessionObject before completing the request.
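The six steps above can be sketched, independent of any EJB machinery, with the Target issuing an opaque token on first access and verifying it on subsequent requests (class and method names are illustrative):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the steps above with assumed names: the Target hands the
// Client an opaque token on first access (steps 1-3) and verifies it on
// every subsequent request (steps 4-6).
public class Target {
    private final Map<String, String> sessions = new ConcurrentHashMap<>();

    // First access: create a session and return its token to the client.
    public String firstAccess(String clientId) {
        String token = UUID.randomUUID().toString();
        sessions.put(token, clientId);
        return token;
    }

    // Subsequent access: verify the token before completing the request.
    public boolean verifiedAccess(String token) {
        return token != null && sessions.containsKey(token);
    }
}
```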

Strategies

You can use a number of strategies to implement Secure Session Object. The first is the Transfer Object Member strategy, which embeds the Secure Session Object in the Transfer Objects already used to exchange data across tiers. The second is the Interceptor strategy, which is applicable when transferring data across remote endpoints, such as between tiers.

Transfer Object Member Strategy

In the Transfer Object Member strategy, the Secure Session Object is passed as a member of the more generic Transfer Object. This allows the target component to validate the Secure Session Object wherever data is passed using a Transfer Object. Because the Secure Session Object is contained within the Transfer Object, the existing interfaces don't require additional parameters for the Secure Session Object. This keeps the interfaces from becoming brittle or inflexible and allows easy integration of the Secure Session Object into existing applications with established interfaces.

Figure 10-22 is a class diagram of the Secure Session Object pattern implemented using a Transfer Object Member strategy.

Figure 10-22. Transfer Object Member Strategy class diagram


Interceptor Strategy

In the Interceptor Strategy, which is mostly applicable to a distributed client-server model, the client and the server use appropriate interceptors to negotiate and instantiate a centrally managed Secure Session Object. This session object glues the client and server interceptors to enforce session security on the client-server communication. The client and the server interceptors perform the initial handshake to agree upon the security mechanisms for the session object.

The client authenticates to the server and retrieves a reference to the session object via a client interceptor. The reference could be as simple as a token or a remote object reference. After the client has authenticated itself, the server interceptor uses a session object factory to instantiate the Secure Session Object and returns the reference of the object to the client. The client and the server interceptors then exchange messages marshalled and unmarshalled according to the security context maintained in the Secure Session Object.
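A minimal sketch of this marshalling flow follows; Base64 encoding merely stands in for whatever security mechanism the interceptors actually negotiate for the session, and all class names are assumed:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Assumed names throughout; Base64 stands in for the negotiated security
// mechanism. The point is the flow: interceptors marshal and unmarshal
// messages on behalf of the application according to the session context.
public class SessionInterceptors {

    // Client-side interceptor marshals outgoing messages.
    static class ClientInterceptor {
        String marshal(String message) {
            return Base64.getEncoder()
                    .encodeToString(message.getBytes(StandardCharsets.UTF_8));
        }
    }

    // Server-side interceptor unmarshals before handing off to the service.
    static class ServerInterceptor {
        String unmarshal(String wireMessage) {
            return new String(Base64.getDecoder().decode(wireMessage),
                    StandardCharsets.UTF_8);
        }
    }
}
```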

Figure 10-23 is a class diagram of the Secure Session Object pattern implemented using an Interceptor Strategy.

Figure 10-23. Interceptor Strategy class diagram


This strategy offers the ability to update or replace the security implementations in the interceptors independently of one another. Moreover, any change in the Secure Session Object implementation causes changes only in the interceptors instead of the whole application.

Consequences

The Secure Session Object prevents a form of session hijacking that could occur if session context is not propagated and therefore not checked in the Business tier. This happens when the Web tier is distributed from the Business tier. It also applies to message passing over JMS. The ramifications of not using a Secure Session Object are that impersonation attacks can take place from inside the perimeter. By employing the Secure Session Object pattern, developers benefit in the following ways:

  • Controlled access and common interface to sensitive information. The Secure Session Object encapsulates all sensitive information related to session management and communication establishment. It can then restrict access to such information, encrypt with complete autonomy, or even block access to information that is inappropriate to the rest of the application. A common interface serves all components that need access to the rest of the session data and offers an aggregate view of session information.

  • Optimized security processing. Since Secure Session Object can be reused over time, it minimizes repetition of security tasks such as authentication, secure connection establishment, and encryption and decryption of shared, static data.

  • Reduced network utilization and memory consumption. Centralizing management and access to a Secure Session Object via appropriate references and tokens minimizes the amount of session information exchanged between clients and servers. Memory utilization is also optimized by sharing security context between multiple components.

  • Abstract vendor-specific session management implementations. The Secure Session Object pattern provides a generic data structure for storing and retrieving vendor-specific session management information. This reduces the dependency on a particular vendor and promotes code evolution.

Sample Code

Example 10-15 shows sample code for Transfer Object Member strategy.

Example 10-15. SecureSessionTransferObject.java: Transfer Object member strategy implementation
package com.csp.business;

public class SecureSessionTransferObject
     implements java.io.Serializable {

   private SecureSessionObject secureSessionObject;

   public SecureSessionObject getSecureSessionObject() {
      return secureSessionObject;
   }

   public void setSecureSessionObject(
               SecureSessionObject secureSessionObject) {
      this.secureSessionObject = secureSessionObject;
   }

   // Additional TransferObject methods...
}

A developer can implement a SecureSessionTransferObject whenever they want to pass credentials within a Transfer Object.
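To illustrate the target-side check, the following self-contained sketch (with stub classes standing in for the book's actual classes) shows a component rejecting a Transfer Object whose embedded session credential is missing or invalid:

```java
// Self-contained sketch with stub classes (not the book's actual classes):
// a target component validates the embedded credential before it trusts
// any of the Transfer Object's payload.
public class TransferDemo {

    static class SecureSessionObject {
        private final boolean valid;
        SecureSessionObject(boolean valid) { this.valid = valid; }
        boolean isValid() { return valid; }
    }

    static class SecureSessionTransferObject {
        private SecureSessionObject secureSessionObject;
        SecureSessionObject getSecureSessionObject() { return secureSessionObject; }
        void setSecureSessionObject(SecureSessionObject s) { secureSessionObject = s; }
    }

    // Target-side check: reject the transfer if the credential is absent
    // or no longer valid.
    static boolean accept(SecureSessionTransferObject to) {
        SecureSessionObject session = to.getSecureSessionObject();
        return session != null && session.isValid();
    }
}
```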

Security Factors and Risks
  • Authentication. The Secure Session Object enforces authentication of clients requesting Business-tier components. Target components or interceptors for those components can validate the Secure Session Object passed in on request and therefore assure that the invoking client was properly authenticated.

  • Authorization. The Secure Session Object can enforce authorization on Business-tier clients as well. While its mere presence or absence in a request provides only a coarse-grained level of authorization, it can be extended to include and enforce fine-grained authorization.

Reality Check

Is Secure Session Object too bloated? Abstracting all session information into a single composite object may increase the object size, and frequently serializing and de-serializing such an object degrades performance. In such cases, one could revisit the object design or serialization routines to alleviate the performance degradation.

Concurrency implications. Many components associated with the client session could be competing to update and read session data, which could lead to concurrency issues such as long wait times or deadlocks. A careful analysis of the possible scenarios is recommended.

Related Patterns

Transfer Object [CJP2]. A Transfer Object carries data between tiers. In the Transfer Object Member strategy described earlier, the Secure Session Object is embedded as a member of the Transfer Object, so security credentials travel with the exchanged data and can be validated by the receiving component.

Session Façade. The Secure Service Façade and the generic Session Façade [CJP2] offer the same benefits with respect to business object integration and aggregation. However, Secure Service Façade does not require that the participating components be EJBs. The participating components may be plain old Java objects (POJOs) or any other object.




Core Security Patterns: Best Practices and Strategies for J2EE, Web Services, and Identity Management
ISBN: 0131463071
Year: 2005
Pages: 204