20.6. Hibernate Services

Hibernate provides a variety of services to you, whether through native Hibernate libraries or third-party libraries that you can download along with the core framework. For the purposes of this chapter, we'll examine just three: transaction management, caching support, and security. Many others are available, and you can learn about them in Hibernate's excellent online documentation.

20.6.1. Transactions

Most database applications require transactions in order to provide some level of assurance around updates to the tables. Hibernate provides a fully ACID-compliant local transaction manager for use with applications talking to a single database.[2] You can make use of the transaction provider with minimal effort, but it is often useful to be explicit about your use of transactions in your data code. This means making use of Hibernate's Transaction object.

[2] For more information on transactions, see Chapter 16.

When beginning transactional work against a single database, you can explicitly begin a transaction within an open Session:

    Session session = factory.openSession();
    Transaction tx = session.beginTransaction();

Remember that the Session object does not actually grab a JDBC connection from the pool until you make a call that requires data access. The call to beginTransaction forces the Session to retrieve a connection because a transaction is meaningless without a connection as context.

Once a transaction has been established, you can do your data work. Any usage of the Session to access the database between the call to beginTransaction and a call to transaction.commit( ) will be considered part of the local transaction; there is no need to associate each specific data operation with the transaction. When you have finished all the transactional work, call transaction.commit( ). If exceptions are thrown during the course of the method, you must explicitly roll back the transaction, close the Session, and handle your errors. The overall pattern looks like this:

    Session session = null;
    Transaction tx = null;
    try {
        session = factory.openSession();
        tx = session.beginTransaction();

        Professor p = (Professor) session.load(Professor.class, new Long(1));
        UniversityClass uclass = new UniversityClass();
        uclass.setName("English 333");
        p.getClasses().add(uclass);

        Student s = (Student) session.load(Student.class, new Long(4));
        s.setSsn("111-111-1111");

        tx.commit();
    } catch (Exception ex) {
        if (null != tx) tx.rollback();
        // report error, etc.
    } finally {
        if (null != session) session.close();
    }

Local atomic transactions like this are scoped to a single connection. You should always be explicit about committing your transactions; never assume that calling Session.close( ) will result in the appropriate behavior, because whether an open transaction is committed or rolled back at that point is a function of your database and your connection provider.

Because the transaction is scoped to a single connection, we know that it cannot survive a call to Session.close( ). Likewise, transactions do not survive calls to Session.disconnect( ), either. Since disconnect drops the physical connection back into the pool and the connection is the containing context of a transaction, disconnecting the Session has the exact same consequences as closing the session out from under the transaction. Once again, you should explicitly commit or roll back the transaction before calling either disconnect or close.
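For example, a long-lived Session that releases its connection between units of work should commit before letting go of the connection. A minimal sketch, reusing the factory from the earlier examples (the actual data access is elided):

    Session session = factory.openSession();
    Transaction tx = session.beginTransaction();
    // ... transactional data access ...

    // Commit (or roll back) before releasing the connection; the
    // transaction cannot outlive the underlying JDBC connection.
    tx.commit();
    session.disconnect();   // returns the connection to the pool

    // Later, when more work is needed:
    session.reconnect();    // obtains a fresh connection
    Transaction tx2 = session.beginTransaction();
    // ... more transactional work ...
    tx2.commit();
    session.close();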

The explicit transaction strategy is great for local atomic transactions, but useless in the face of a distributed transaction for several reasons:

  • A distributed transaction, by definition, spans multiple databases. A single Session object (wrapping a JDBC connection) connects only to a single data source. The explicit transaction object lives within a Session. Therefore, it cannot be used to talk to more than one database.

  • Distributed, managed transactions do not like it if you attempt to commit them prematurely. The container and the JTA decide when a transaction is to be committed, and calling tx.commit( ) would result in an exception being thrown.

  • Likewise, distributed transactions are not happy when explicitly rolled back. Again, the container and the JTA manage these decisions.

To use the JTA to take part in a distributed, managed transaction, you have to first replace the Hibernate local transaction manager with the JTA. This is accomplished by adding these configuration settings to hibernate.properties:

    hibernate.connection.datasource = java:/your/datasource
    hibernate.transaction.factory_class = org.hibernate.transaction.JTATransactionFactory
    hibernate.transaction.manager_lookup_class = org.hibernate.transaction.<Vendor>TransactionManagerLookup

In the preceding snippet, you would use the transaction lookup manager appropriate to your container by replacing <Vendor> with JBoss, WebLogic, or one of the other manager implementations provided with Hibernate.
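For example, when deploying to JBoss, the last property would point at the JBoss lookup class (a sketch; consult your Hibernate distribution for the lookup class that matches your container):

    hibernate.transaction.manager_lookup_class = org.hibernate.transaction.JBossTransactionManagerLookup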

Once the configuration is set, Hibernate relies on the container to manage transactions. Whenever Hibernate is called, it attempts to associate with the current global JTA transaction in effect. If there isn't one, it creates one for the length of the current call. As long as the container is around, your application can safely ignore all Transaction API calls. The JTA transaction either already exists or is created directly by Hibernate, so you do not have to call Session.beginTransaction( ). The transaction is committed by the JTA and container based entirely upon rules in the deployment descriptor, so you never have to call transaction.commit( ). To force a rollback of the transaction, you simply raise an exception that makes it back to the container. The container will roll back the transaction automatically, so you don't have to call tx.rollback( ). All you have to do is write code that accesses the database.
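Under container-managed transactions, the data-access code reduces to plain Session calls. The following is a sketch only, assuming a CMT session bean whose deployment descriptor marks the method as transactional and a SessionFactory available as factory; the method name and parameters are illustrative:

    // Inside a CMT session bean method; the container has already
    // started (or propagated) the JTA transaction.
    public void renameClass(Long classId, String newName) {
        Session session = factory.openSession();
        try {
            // Hibernate associates with the current JTA transaction; no
            // Transaction object, commit, or rollback calls appear here.
            UniversityClass uclass =
                (UniversityClass) session.load(UniversityClass.class, classId);
            uclass.setName(newName);

            // Push the pending SQL to the database; the actual commit is
            // still performed by the container when the method returns.
            session.flush();
        } catch (HibernateException ex) {
            // A system exception propagating to the container causes it
            // to roll back the transaction.
            throw new EJBException(ex);
        } finally {
            session.close();
        }
    }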

20.6.2. Caching

Beware of premature optimization. Too often, development teams implement performance improvement strategies before ever measuring actual performance, or even deciding what an acceptable performance range looks like. This wouldn't be a big problem if most performance gains didn't come at the expense of other, equally important concerns: simplicity, ease of maintenance, and ease of testing.

Caching is an optimization technique; the idea behind caching is to prevent unnecessary trips to the database. If data is unlikely to change much, or is only read but never written or written but not read, you can save time and processing power by short-circuiting the round-trip before it gets started. Since database applications are almost always performance-bound to the network and I/O costs of hitting the database server, this is an appropriate first stop on the optimization path.

Hibernate provides two main kinds of caching: a Query cache and the more general second-level cache. The Query cache is built into the Hibernate framework, while the second-level cache is implemented in a third-party library, and there are several choices of implementation. Both strategies should be used carefully.

The problem with caching is that the application has to maintain a duplicate, local version of data in the central data store. Reads (or writes) happen to the local cache, which prevents the round-trip and the overhead of invoking the database APIs. The downside is that managing the lifecycle of the cache is complicated; what happens when the database changes? How should the cache be updated? What should happen if changes to the cache can't be pushed back onto the database? The extra layer of indirection is useful, but comes at the price of complexity.

20.6.2.1. The second-level cache

It turns out that Hibernate has been busily caching data for you all along, without your knowledge. The Session itself is a cache of data. The state of each object, during the course of a transaction, is maintained in the Session. This makes sense: when you load an object in a Session, you wouldn't want to make a round-trip back to the database each time you access a property on the object. The Session caches those values for you. When the transaction is committed and the Session is closed, the cache is destroyed.
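A quick illustration of this Session-level cache, reusing the Student class from the transaction example (a sketch; Session.get( ) is used here so that the first call definitely hits the database):

    Session session = factory.openSession();
    Transaction tx = session.beginTransaction();

    // First call issues a SELECT and caches the object in the Session.
    Student first = (Student) session.get(Student.class, new Long(4));

    // Second call with the same identifier is served from the Session
    // cache -- no second SELECT, and it is the very same instance.
    Student second = (Student) session.get(Student.class, new Long(4));
    assert first == second;

    tx.commit();
    session.close();   // the Session-level cache is discarded here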

Sometimes you need a cache that spans the transaction boundary. When accessing data that is either infrequently or never updated, you may find it faster to load the data once, place it in the local cache, and access it from there for future needs. You'll want that data to stick around between Sessions and be accessible from any Session. Because of these requirements, you need something beyond the Session class itself to manage the cache.

The second-level cache is implemented by an external library, EHCache by default. You can replace it with any library that implements the Hibernate interface for cache management. Currently, appropriate libraries are EHCache, OSCache, SwarmCache, the JBoss TreeCache, the Java Caching System or JCS (though it is deprecated now and will disappear altogether in future versions), and even a plain old hashtable, though this is frowned upon for anything but simple testing.

EHCache allows you to use an in-memory or disk-based cache. When you configure it, you can establish your own rules for which objects are added to the cache and when they are removed from it. The in-memory store uses a LinkedHashMap for actual storage if you are using JDK 1.4 or 5.0; otherwise, it uses the Apache Commons LRUMap. The disk storage version uses several well-known locations for the data to be persisted.
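Those rules live in an ehcache.xml file on the classpath. A minimal sketch follows, configuring one cache region per class by fully qualified name; the region name, size limits, and expiration times are illustrative only:

    <ehcache>
        <!-- Where overflow data is written if a region spills to disk -->
        <diskStore path="java.io.tmpdir"/>

        <!-- Fallback settings for any region without an explicit entry -->
        <defaultCache
            maxElementsInMemory="1000"
            eternal="false"
            timeToIdleSeconds="300"
            timeToLiveSeconds="600"
            overflowToDisk="false"/>

        <!-- Region for the TextBook class introduced below -->
        <cache name="com.oreilly.jent.hibernate.TextBook"
            maxElementsInMemory="500"
            eternal="true"
            overflowToDisk="false"/>
    </ehcache>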

To enable EHCache, you have to set the global cache property, normally in the global configuration file, hibernate.properties:

 hibernate.cache.provider_class = net.sf.ehcache.hibernate.Provider 

After that, it's just a matter of configuring which objects, queries, and collections you want to cache. This sounds simple, but remember that second-level caches are not notified by the database when changes are made to the underlying store. The caches need to be expired intelligently if concurrency conflicts are likely. Caches should be reserved for data that is seldom or never written.

To enable the cache for a given persistent class, add the appropriate caching directive to the mapping file. For example, you might imagine that the classes in the various departments use the same textbooks year in and year out (the pace of change in textbooks is glacially slow, after all), and this data could be considered relatively static. The UniversityClass would be extended to include a new textbooks property:

    private Set textbooks;

    // Accessor methods elided

And a new class, TextBook, would be added:

    public class TextBook {
        private String title;
        private String author;
        private String isbn;

        // Accessor methods elided
    }

The mapping for TextBook would mark it as a cacheable class; since the data is unlikely to change much, we could have a generous expiration limit on the cache and prevent lots of round-trips to the database:

    <hibernate-mapping package="com.oreilly.jent.hibernate">
        <class name="TextBook">
            <cache usage="read-only"/>

            <!-- rest of mapping elided -->
        </class>
    </hibernate-mapping>

When specifying the caching strategy, you can use read-only, which means that the underlying data has no chance of being written to while the cache is in effect. Other strategies are read-write, for data that may change out from underneath the cache, and nonstrict-read-write.

The read-write and read-only cache strategies both use synchronized access to the cache, meaning that the cache is thread-safe and cache consumers are guaranteed to get the most recent version of an object available in the cache. nonstrict-read-write is still thread-safe but does not guarantee that each consumer will get the most recent version of any given object. It is the most efficient cache strategy, but it requires more thought and planning from the developer, who must anticipate retrieving stale objects from the cache.
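Note that caching TextBook instances does not automatically cache the textbooks collection held by UniversityClass; the collection mapping needs its own cache element. A sketch, using hypothetical join-table and column names:

    <set name="textbooks" table="CLASS_TEXTBOOK">
        <cache usage="read-only"/>
        <key column="CLASS_ID"/>
        <many-to-many class="TextBook" column="TEXTBOOK_ID"/>
    </set>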

20.6.2.2. The Query cache

The Query cache utilizes the second-level cache. Its job is to retain the results of a frequently run query. The full results are not cached, however. The Query cache maintains two separate physical caches: one holding the identifiers returned by each query, and one holding timestamps of the most recent updates to the tables those queries touch, so that stale results can be discarded. When a query whose results have been cached is executed again, Hibernate retrieves the cached identifiers and looks the actual objects up in the second-level cache. This avoids storing each object twice, but it also means that you cannot use the Query cache without the second-level cache being enabled.
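Using it is a two-step affair: set hibernate.cache.use_query_cache to true in hibernate.properties, and then mark individual queries as cacheable. A sketch, assuming the TextBook mapping above (the query and parameter value are illustrative):

    Session session = factory.openSession();

    Query query = session.createQuery(
        "from TextBook tb where tb.author = :author");
    query.setString("author", "Knuth");

    // Cache the identifiers this query returns; the TextBook
    // instances themselves live in the second-level cache.
    query.setCacheable(true);

    List books = query.list();
    session.close();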

20.6.3. Security

Hibernate 3.0 introduces a new declarative security model for securing access to your persistent classes. Based on the Java Authorization Contract for Containers (JACC) and the Java Authentication and Authorization Service (JAAS),[3] Hibernate now allows you to map user roles to the actions they can perform on persistent classes. You enable the feature by installing a set of event listeners in the Hibernate configuration:

[3] For more information on JACC and JAAS, see Chapter 10.

 <listener type="pre-delete"     /> <listener type="pre-update"     /> <listener type="pre-insert"     /> <listener type="pre-load"     /> 

Remember, once these are installed, you can't add any more listeners for these events. Now all that remains is to configure your access permissions. Permissions are established globally in the configuration file:

 <grant role="depthead" entity-name="Professor" actions="*"/> <grant role="professor" entity-name="Student" actions="read"/> 

In order for all this to work, your application must already support a JAAS login, and the roles you use in the configuration must map to the roles provisioned by the JAAS login module.


