Background


For an extensive discussion of persistence strategies, focusing on O/R mapping strategies and DAO interface design, please refer to Chapter 10 of J2EE without EJB. In this section, we give only a brief overview, outlining the basic concepts and clarifying where the individual tools fit in terms of common O/R mapping concepts.

Basic O/R Mapping

Data entities in database tables are often mapped to persistent Java objects that make up a Domain Model, so that business logic can be implemented to work with these object representations rather than the database tables and fields directly. Object-relational mapping (O/R mapping) is the general term for such strategies: It aims to overcome the so-called Impedance Mismatch between object-oriented applications and relational databases.

Important 

The object-relational Impedance Mismatch is a popular term for the gulf between the relational model, which is based around normalized data in tables and has a well-defined mathematical basis, and the world of object-orientation, which is based on concepts such as classes, inheritance, and polymorphism.

In its simplest form, O/R mapping is about mapping JDBC query results to object representations and in turn mapping those object representations back to JDBC statement parameters (for example, for insert and update statements). Database columns are usually mapped to JavaBean properties or instance fields of domain objects.

This basic level can be achieved through custom JDBC usage, for example with the RowMapper interface of Spring's JDBC framework (see Chapter 5). The common pattern is to delegate the actual mapping to data mappers, such as a set of DAOs, to keep the persistence logic out of the domain model.
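
To make this concrete, here is a minimal sketch of such a data mapper based on Spring's JdbcTemplate and RowMapper. The Product class, the products table, and its column names are assumptions made for this example rather than taken from a real schema.

import java.math.BigDecimal;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

// Hypothetical domain object: a plain JavaBean with no persistence logic of its own.
class Product {
    private int id;
    private String name;
    private BigDecimal price;
    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public BigDecimal getPrice() { return price; }
    public void setPrice(BigDecimal price) { this.price = price; }
}

// Data mapper DAO: all column-to-property mapping is concentrated here,
// keeping the domain object free of persistence concerns.
public class JdbcProductDao {

    private final JdbcTemplate jdbcTemplate;

    public JdbcProductDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public Product getProduct(int productId) {
        return (Product) this.jdbcTemplate.queryForObject(
                "select id, name, price from products where id = ?",
                new Object[] {new Integer(productId)},
                new RowMapper() {
                    public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
                        Product product = new Product();
                        product.setId(rs.getInt("id"));
                        product.setName(rs.getString("name"));
                        product.setPrice(rs.getBigDecimal("price"));
                        return product;
                    }
                });
    }
}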

Beyond such basic table-to-object mapping, data mappers are often required to provide more sophisticated mapping capabilities, such as automatic fetching of associated objects, lazy loading, and caching of persistent objects. Once such requirements come in, it is preferable to adopt an existing O/R mapping solution instead of implementing your own mapper based on JDBC. There are very good tools available, which offer far more sophistication than custom in-house development can sensibly achieve.

iBATIS SQL Maps is a good example of a persistence solution working at the level described in the previous paragraphs. It offers reasonably sophisticated mapping capabilities, including support for associated objects, lazy loading, and caching. It still works at the SQL level: DAOs trigger the execution of so-called mapped statements, defined in an XML file that specifies SQL statements with parameter placeholders and result mappings. The tool never generates SQL statements; rather, it relies on the application developer to specify the SQL for each operation.
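
The DAO side of this arrangement might look roughly as follows, using Spring's SqlMapClientTemplate. The statement ids ("getProduct", "updateProduct") and the Product class from the earlier sketch are hypothetical; the SQL itself would live in the SQL Maps XML file rather than in Java code.

import com.ibatis.sqlmap.client.SqlMapClient;

import org.springframework.orm.ibatis.SqlMapClientTemplate;

// Hypothetical DAO: the SQL lives in the SQL Maps XML file under the mapped
// statement ids "getProduct" and "updateProduct"; the DAO only refers to those ids.
public class SqlMapProductDao {

    private final SqlMapClientTemplate sqlMapClientTemplate;

    public SqlMapProductDao(SqlMapClient sqlMapClient) {
        this.sqlMapClientTemplate = new SqlMapClientTemplate(sqlMapClient);
    }

    public Product getProduct(int productId) {
        // Executes the mapped statement, passing the id as the parameter object;
        // result columns are mapped to Product properties as declared in the XML.
        return (Product) this.sqlMapClientTemplate.queryForObject(
                "getProduct", new Integer(productId));
    }

    public void updateProduct(Product product) {
        // No automatic change detection: storing changes always requires an explicit call.
        this.sqlMapClientTemplate.update("updateProduct", product);
    }
}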

The advantage of the SQL Maps strategy is that the developer is in full control over the SQL, which allows for full customization for a specific target database (by the application developer or a database administrator). The disadvantage is that there is no abstraction from database specifics like auto-increment columns, sequences, select for update, and so on. The mapped statements need to be defined for each target database if they are supposed to leverage such database-dependent features.

Object Query Languages

Full-blown O/R mapping solutions (or what most people refer to when they say "O/R mapping" without further qualification) usually do not work at the SQL level but rather feature their own object query language, which gets translated into SQL at runtime. The mapping information is usually kept in metadata, for example in XML files, defining how each persistent class and its fields map onto database tables and columns.

With such an abstraction level, there is no need to specify separate statements for select, insert, update, and delete. The tool will automatically generate the corresponding SQL from the same centralized mapping information. Database specifics are usually addressed by the tool rather than the application developer, through configuring an appropriate database "dialect." For example, the generation of IDs is usually configured in metadata and automatically translated into auto-increment columns or sequences or whatever identity generation strategy is chosen.

This level is provided by Hibernate, JDO, TopLink, and Apache OJB: sophisticated mapping capabilities with an object query language. Hibernate, JDO, TopLink, and OJB's ODMG API actually go beyond it in that they provide automatic change detection, while the OJB PersistenceBroker API provides only the mapping plus object query level. The advantage of the latter approach is less complexity for the application developer to deal with. Changes are stored only on explicit save, update, or delete calls, as with iBATIS SQL Maps; there is no automatic change detection "magic" working in the background.
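
To make the contrast with the SQL-level approach concrete, the following sketch shows a query and a save operation through the Hibernate 3 API, with no SQL written by hand. The Product class and its mapping metadata are assumed to exist and are not shown.

import java.math.BigDecimal;
import java.util.List;

import org.hibernate.Session;

public class HibernateProductQueries {

    // The query is expressed against the Product class and its properties, not
    // against tables and columns; Hibernate translates it into SQL appropriate
    // for the configured database dialect.
    public List findExpensiveProducts(Session session, BigDecimal minPrice) {
        return session.createQuery(
                "from Product p where p.price > :minPrice order by p.name")
                .setParameter("minPrice", minPrice)
                .list();
    }

    // No insert statement is written by hand: the SQL, including the configured
    // id generation strategy (sequence, identity column, etc.), is derived from
    // the mapping metadata.
    public void saveProduct(Session session, Product product) {
        session.save(product);
    }
}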

The syntax of object query languages varies widely. Hibernate uses HQL, which is an SQL-like textual query language working at the class/field level, while JDO and the OJB PersistenceBroker use different flavors of query APIs with criteria expressions. The TopLink expression builder and Hibernate criteria queries are also API-based approaches. JDO 2.0 introduces the concept of textual querying, as an alternative to the classic JDOQL query API; OJB also has pluggable query facilities. The query language is still an important differentiator, as it is the most important "face to the application developer" aside from the mapping files.
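
As an illustration of the API-based style, the same product query can be expressed through Hibernate's criteria API, one of several such APIs mentioned above; the Product class is again the hypothetical one from the earlier sketches.

import java.math.BigDecimal;
import java.util.List;

import org.hibernate.Session;
import org.hibernate.criterion.Order;
import org.hibernate.criterion.Restrictions;

public class ProductCriteriaQueries {

    // The same query as the HQL sketch above, built up as criteria objects
    // instead of being written as query text.
    public List findExpensiveProducts(Session session, BigDecimal minPrice) {
        return session.createCriteria(Product.class)
                .add(Restrictions.gt("price", minPrice))
                .addOrder(Order.asc("name"))
                .list();
    }
}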

Transparent Persistence

So-called transparent persistence tools do not just allow for mapped objects to be retrieved through object queries; they also keep track of all loaded objects and automatically detect changes that the application made. A flush will then synchronize the object state with the database state: that is, issue corresponding SQL statements to modify the affected database tables accordingly; this usually happens at transaction completion.

Such change detection applies only to changes made within the original transaction that loaded the objects. Once an object has been passed outside that transaction, it needs to be explicitly reattached to the new transaction. In Hibernate, this corresponds to a saveOrUpdate call; in JDO 2.0, to a reattach operation. Compare this behavior to tools such as iBATIS SQL Maps or the OJB PersistenceBroker, which never do automatic change detection and therefore always require explicit store calls, whether within the original transaction or outside of it.
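
The following sketch contrasts the two situations using the plain Hibernate API, with transaction handling shown directly on the Session for brevity; the Product class remains the hypothetical one used above.

import java.math.BigDecimal;

import org.hibernate.Session;
import org.hibernate.Transaction;

public class ProductPriceUpdater {

    // Within the original transaction: the loaded object is tracked by the Session,
    // so the modification is detected automatically and flushed at commit time.
    public void increasePrice(Session session, int productId, BigDecimal increase) {
        Transaction tx = session.beginTransaction();
        Product product = (Product) session.get(Product.class, new Integer(productId));
        product.setPrice(product.getPrice().add(increase));
        // No explicit update call: the flush at commit issues the UPDATE statement.
        tx.commit();
    }

    // Outside the original transaction: a detached object must be explicitly
    // reattached, here via saveOrUpdate, before its changes are stored.
    public void storeDetachedProduct(Session session, Product detachedProduct) {
        Transaction tx = session.beginTransaction();
        session.saveOrUpdate(detachedProduct);
        tx.commit();
    }
}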

Persistence operations often deal only with first-class domain objects; dependent objects are implicitly addressed via cascading updates and deletes (persistence by reachability). With such sophisticated operations, there is no need for explicit save or delete calls on dependent objects, because the transparent persistence tool can handle this automatically and efficiently.
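
As a sketch of what this looks like in practice, assume a hypothetical Order class holding a collection of OrderItem objects, mapped in Hibernate with cascading enabled; the classes and the cascade setting are assumptions for this example and are not shown in full.

import java.util.ArrayList;
import java.util.List;

import org.hibernate.Session;

// Hypothetical aggregate root and dependent object; the Hibernate mapping
// (with cascading enabled on the items collection) is assumed and not shown.
class OrderItem {
}

class Order {
    private List items = new ArrayList();
    public List getItems() { return items; }
}

public class OrderRepository {

    // Assuming the items collection is mapped with cascade="all" (or "save-update"),
    // this single call persists the order together with every reachable OrderItem;
    // no explicit save call per dependent object is required.
    public void storeOrder(Session session, Order order) {
        session.saveOrUpdate(order);
    }
}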

To perform such automatic change detection, the persistence tool needs to have a way to keep track of changes. This can happen either through snapshots made at loading time (as done by Hibernate) or through modifying the persistent classes to make the tool aware of modified fields, as in JDO's traditional byte code modification strategy. This incurs either the memory overhead of snapshots or the additional compilation step for JDO byte code enhancement. It also imposes a somewhat strict lifecycle on persistent objects, as they behave differently within and outside a transaction.

The advantage of "transparent persistence" is that the application developer does not have to track any changes applied within the original transaction, because the tool will automatically detect and store them. The disadvantage is that the persistence tool needs quite complex background machinery to perform such change detection, which the application developer has to be aware of. In particular, such change detection machinery introduces side effects in terms of lifecycle: for example, when the same persistent object needs to participate in multiple transactions.

When to Choose O/R Mapping

O/R mapping can have many benefits, but it is important to remember that not every application fits the O/R mapping paradigm.

The central issues are heavy use of set-based access and aggregate functions, and batch updates of many rows. If an application is mainly concerned with either of these (for example, a reporting application) and does not allow for a significant amount of caching in an object mapper, set-based relational access via Spring JDBC or iBATIS SQL Maps is probably the best choice.
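
For example, such set-oriented operations can be expressed directly through Spring's JdbcTemplate, without materializing any domain objects; the products table and its columns are assumptions for this sketch.

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;

public class ReportingDao {

    private final JdbcTemplate jdbcTemplate;

    public ReportingDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Aggregate query: the database computes the result over the whole set;
    // no domain objects are materialized at all.
    public int countProductsInCategory(String category) {
        return this.jdbcTemplate.queryForInt(
                "select count(*) from products where category = ?",
                new Object[] {category});
    }

    // Set-based update: one statement affects many rows at once, which an
    // object-by-object approach would handle far less efficiently.
    public int applyDiscount(String category, int percent) {
        return this.jdbcTemplate.update(
                "update products set price = price * (100 - ?) / 100 where category = ?",
                new Object[] {new Integer(percent), category});
    }
}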

Because all O/R mapping frameworks have a learning curve and setup cost, it is often best for applications with very simple data access requirements to stick with JDBC-based solutions. Of course, if a team is already proficient with a particular O/R mapping framework, this concern may be less important.

Indicators that O/R mapping is appropriate are:

  • A typical load/edit/store workflow for domain objects: for example, load a product record, edit it, and synchronize the updated state with the database.

  • Objects may be queried for in large sets but are updated and deleted individually.

  • A significant number of objects lend themselves to being cached aggressively (a "read-mostly" scenario, common in web applications).

  • There is a sufficiently natural mapping between domain objects and database tables and fields. This is, of course, not always easy to judge up front. Database views and triggers can sometimes be used to bridge the gap between the OO model and relational schema.

  • There are no unusual requirements in terms of custom SQL optimizations. Good O/R mapping solutions can issue efficient SQL in many cases, as with Hibernate's "dialect" support, but some SQL optimizations can be done only via a wholly relational paradigm.


