Classification


Let's take a closer look at what an O/R Mapper is. We'll discuss this from two different angles. First, we'll look at a set of distinguishing characteristics. Next, we'll look at some of the PoEAA patterns that O/R Mappers implement. We'll reuse those classifications in the next chapter when giving examples of how it all works in a specific O/R Mapper.

To be clear, everything I'm about to discuss applies to custom code as well (except for the mapping style). That also goes for the patterns description. But to be more concrete, we will be thinking in terms of O/R Mappers from now on.

The first classification is about the style of Domain Model that is supported.

Domain Model Style

How much, and in what way, do we have to adapt the Domain Model to work with the O/R Mapper? The three most typical and general aspects are

  • Persistence Ignorant

  • Inheritance

  • Interface implementation

Persistence Ignorant

Persistence Ignorant means that you make no changes at all to the Domain Model in order to make it persistable. Of course, there's a scale here; it's not completely black or white. For example, reflection-based approaches set some requirements, and AOP-based approaches set others.

Inheritance

A common approach historically is to require the Domain Model classes to inherit from a super class provided by the Persistence Framework.

Interface Implementation

Finally, another common approach historically is to require the Domain Model classes to implement one or more interfaces provided from the Persistence Framework.
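
To make the contrast concrete, here is a sketch of the three styles. The examples are in Java rather than the book's C#, but the shape is the same, and the FrameworkEntity base class and IPersistable interface are hypothetical framework types invented for illustration.

```java
// Hypothetical Persistence Framework types, for illustration only.
abstract class FrameworkEntity {          // inheritance style: framework-provided base class
    protected Object persistenceState;
}

interface IPersistable {                  // interface style: framework-provided contract
    Object getPersistenceState();
}

// 1. Persistence Ignorant: a plain class with no framework types at all.
class CustomerPI {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// 2. Inheritance: the Domain Model class must extend the framework base class.
class CustomerInheriting extends FrameworkEntity {
    private String name;
}

// 3. Interface implementation: the class must implement the framework interface.
class CustomerImplementing implements IPersistable {
    private String name;
    public Object getPersistenceState() { return name; }
}
```

Note that only the first class would survive a switch of Persistence Framework unchanged; the other two carry a compile-time dependency on it.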

This was just one aspect showing how you might have to adapt your Domain Model to fit the persistence framework. Loads of other things might have to be done regarding Versioning and Lazy Load, for example, but we'll come back to that later in the chapter.

Mapper Style

Metadata-based mappings are often implemented in two different ways:

  • Code generation

  • Framework/reflective

As the sharp-eyed reader will have noticed, I just mentioned code generation again, this time in the context of O/R Mappers. However, there's one important difference. Now the code generation is done, typically just before compilation, by reading metadata and emitting the mapping code. Of course, code generation of custom code could be done in a similar way; the main difference lies in how ambitious the generated code tries to be about what it supports. This definition isn't razor sharp, but it's a start.

Framework/reflective means that no source code is generated before compile time for the mapping work. Instead, the mapping is dealt with by reading metadata at runtime.
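
The framework/reflective style can be sketched like this: the metadata lives in annotations (attributes, in .NET terms) and is read via reflection at runtime, with no generated source code involved. The Column annotation and ReflectiveMapper are invented for illustration.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical mapping metadata, available for inspection at runtime.
@Retention(RetentionPolicy.RUNTIME)
@interface Column { String name(); }

class Customer {
    @Column(name = "customer_name") String name;
    @Column(name = "customer_city") String city;
}

class ReflectiveMapper {
    // Builds a field-to-column map by inspecting the annotations at runtime,
    // which is the "reading metadata" step a reflective mapper performs.
    static Map<String, String> columnsFor(Class<?> type) {
        Map<String, String> mapping = new LinkedHashMap<>();
        for (Field f : type.getDeclaredFields()) {
            Column c = f.getAnnotation(Column.class);
            if (c != null) mapping.put(f.getName(), c.name());
        }
        return mapping;
    }
}
```

A code-generation mapper would instead read the same metadata before compilation and emit mapping code equivalent to what this lookup does at runtime.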

Note

As you probably suspect, things are even fuzzier. For example, how should we categorize static AOP? I try to keep the description and categorization simple and clean.

It's also important to note that code generation and framework don't have to be mutually exclusive.


Starting Point

When you are working with a certain O/R Mapper, you can often choose one or more starting points from the following list:

  • Database schema

  • Domain Model

  • Mapping information

By starting point I mean what you focus on when you start building your application. Of course, it's beneficial if the O/R Mapper supports several starting points because there's then a greater chance you can use it for the different situations in which you will find yourself.

As you know by now, I prefer to start working from the Domain Model; however, you can't always have that and may have to start from the database schema instead or at least keep a close eye on the database schema. You may also have to start working from the mapping information instead, describing the classes' relationships to tables in a UML-like diagramming tool, for example.

If you prefer starting from the Domain Model, but the persistence framework you want to try out doesn't support that, you can work around the problem by treating the UML editor or database design tool as a way of writing your Domain Model. It's a bit awkward, though, and TDD, for example, won't be as natural to apply.

Mapping Legacy Databases

Even though the starting point might be the database, this doesn't have to imply that it is forced by the persistence framework or that the designer prefers to work that way. It can also be that you have a legacy database that must be used.

Be prepared for it to be much harder to go that route with an O/R Mapper (unless it's specialized for that), especially if you aren't allowed to change the design of the database at all. Often it's at least allowed to add views, columns, and tables to the database even if you aren't allowed to change existing columns. That can help! (Changes that existing apps aren't affected by are pretty likely to be allowed, but watch out because the existing apps might not be the most robust around.)

It's especially troublesome if the legacy database design isn't in great shape, which can happen if the application has been in production for a couple of years.


Another question is how you move on when you are "done" with the first part (such as the Domain Model). Does the chosen O/R Mapper help you create the other two parts to any extent? For example, you have the Domain Model, and now you need the database and the metadata. Can you get those generated automatically? It's not a show-stopper if that isn't supported; you can often build such basic tools on your own pretty easily. But it's an advantage if it has been taken care of for you.

So let's now assume that we have the Domain Model, database, and mapping information in place. Let's discuss what the API will look like.

API Focus

The API focus comes in two alternatives:

  • Relational Tables

  • Domain Model

This is stretching things, because from the start the whole purpose of O/R Mappers was to let developers work with objects instead of tables. Therefore, for typical O/R Mappers, the API focus is always Domain Model. (An example of an API that is more focused on relational tables is that of the Recordset pattern, implemented with DataTable in .NET.)

Another very important aspect of the API, and one that isn't the same for every O/R Mapper, is querying.

Query Language Style

I find it quite hard to categorize this one, but let's give it a try. From the developer's perspective, the querying with the O/R Mapper is generally done in one or more of the following ways:

  • String-based, SQL-ish

  • Query Object-based

I believe it's fitting to explain each of these a little bit more.

String-Based, SQL-ish

By this I mean query languages that look quite similar to SQL, but work with classes in the Domain Model instead of the tables in the database. Some queries, especially advanced ones, are very well expressed in a language like this. On the other hand, the typical drawback is the lack of type safety.

It's important to note, though, that this way of stating queries isn't necessarily string-based at the implementation level, only at the API level.

Query Object-Based

The second typical query language is to use objects to represent queries, following the Query Object pattern [Fowler PoEAA]. Simple queries are often better expressed in this approach, it's possible to have type safety, and the developer doesn't have to know much about querying semantics in order to be efficient. The problem, though, is that complex queries might require lots of code to express, and the code quickly gets hard to read.

Raw SQL

Yet another nice solution to have for rare cases is to be able to express the query in SQL, but to receive the result as Domain Model instances. Of course, that has the problem of making your code database-coupled, but if you can't find another solution, it's great to at least have the option. This is very handy when you need to optimize. And if you need entities as the result and the SQL integration can take care of that for you, it cuts down on the amount of code you have to write.

Of course, we should also be allowed to jump out to SQL without receiving the result as Domain Model instances.
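
To make the first two styles tangible, here is a minimal Query Object [Fowler PoEAA] that renders to SQL-ish text, in Java rather than the book's C#. The API is invented for illustration; real products (NHibernate's HQL and ICriteria, for example) differ in the details.

```java
import java.util.ArrayList;
import java.util.List;

// A bare-bones Query Object: the query is built from method calls on an
// object instead of being written as one opaque string.
class Query {
    private final String entity;
    private final List<String> criteria = new ArrayList<>();

    Query(String entity) { this.entity = entity; }

    // Each call adds one criterion. A real implementation would also
    // parameterize values rather than splice them into the string.
    Query whereEquals(String property, Object value) {
        criteria.add(property + " = '" + value + "'");
        return this;
    }

    String toSql() {
        String where = criteria.isEmpty()
            ? "" : " WHERE " + String.join(" AND ", criteria);
        return "SELECT * FROM " + entity + where;
    }
}
```

The string-based style would instead hand the mapper something like "from Customer c where c.Name = 'Volvo'" as one string the compiler can't check; the Query Object builds the same filter from method calls, which is where the type safety comes from.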

Which Approach?

As I see it, ideally the tool should support several approaches. This is because sometimes string-based expressions are best, while for other situations query object-based expressions are most suitable.

The querying style in itself is only part of the story. Another important aspect is how competent the querying solution is. That is part of the next topic.

Advanced Database Support

Or "How much will the DBA respect you?"

To be flexible enough that you don't have to jump out of the sandbox all the time, the O/R Mapper needs to support some database-close operations that aren't obvious from the Domain Model perspective but are great when the time comes for optimizations.

Some examples are

  • Aggregates

    A very basic feature of SQL is the capability to do things like SELECT SUM(x), MIN(y), MAX(z). When you work with a Domain Model focus, it might be slightly less common that you need those aggregates, but they are still very useful and sometimes necessary.

  • Ordering

    It might prove most efficient to order the result set in the database without having to do the ordering in the Domain Model.

  • Group By

    Something else you can do when you work with SQL is to use GROUP BY. It's especially common for ad-hoc queries and reporting, but is still occasionally useful in Domain Model-focused applications.

  • Scalar queries

    It's not necessarily the case that you always want to fetch complete instances back. Sometimes it's just field values, perhaps some from one class and some from another.
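
To see why this support matters, here are the in-memory counterparts of SUM and GROUP BY, computed over already-fetched Domain Model instances (the Order class is a stand-in for illustration, and the sketch is in Java rather than the book's C#). Without database-close support, every row must first be materialized as an object just to feed calculations like these, which is exactly what pushing the aggregate to the database avoids.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// A stand-in Domain Model class, invented for this example.
class Order {
    final String customer;
    final int total;
    Order(String customer, int total) { this.customer = customer; this.total = total; }
}

class InMemoryAggregates {
    // In-memory equivalent of: SELECT SUM(total) FROM Orders
    static int sumOfTotals(List<Order> orders) {
        return orders.stream().mapToInt(o -> o.total).sum();
    }

    // In-memory equivalent of:
    // SELECT customer, SUM(total) FROM Orders GROUP BY customer
    static Map<String, Integer> totalsByCustomer(List<Order> orders) {
        return orders.stream().collect(
            Collectors.groupingBy(o -> o.customer, Collectors.summingInt(o -> o.total)));
    }
}
```

The database can produce both results in one round trip while returning a handful of scalar values instead of every row.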

It's also very nice if there are other ways to move functionality to the database, functionality that fits best in the database, typically data-intensive operations. What I'm thinking about here is usually the capability to call stored procedures, but it could also include other custom ways of moving functionality, such as user-defined functions.

The main problem with this is that we might lose portability of the application and the database design. But if we use this as an optimization technique, used only when really necessary, the problem is minimized.

The other big problem is that if we jump out of the sandbox, we are on our own, and that's not all: we must also take care of whatever the integration with the sandbox requires. It might be that we have to purge the Identity Map after we have called a stored procedure. Nobody can decide that for us; we have to judge it ourselves.
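
A bare-bones sketch of that situation (in Java, with an IdentityMap class invented for illustration): after a stored procedure has updated rows behind the mapper's back, the cached instances may be stale, so the caller purges the map. Deciding when that purge is necessary is the developer's call, not the tool's.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal Identity Map: at most one in-memory instance per identity.
class IdentityMap {
    private final Map<Object, Object> loaded = new HashMap<>();

    Object get(Object id) { return loaded.get(id); }
    void register(Object id, Object entity) { loaded.put(id, entity); }

    // Called after out-of-band SQL or a stored procedure may have changed
    // data the cached instances were built from.
    void purge() { loaded.clear(); }
}
```

A real product would offer finer-grained eviction than clearing everything, but the principle is the same: the sandbox can't know what your stored procedure touched.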

Note

I think this is a good example of what Joel Spolsky talks about in his article "The Law of Leaky Abstractions" [Spolsky Leaky Abstractions]. Abstractions are great, but you have to know a lot about what's happening behind the scenes as well.

I think that's the way to think about and use O/R Mappers. You shouldn't expect them to hide everything. You have to know what's going on behind the abstraction. See the O/R Mapper as a tool that helps you with a tedious task that you could do manually but in practice don't want to.


Other Functionality

That was a whole range of features, but there's more; for example,

  • What back ends are supported?

    It might be a nice feature for you if the particular O/R Mapper supports several different back ends so that you aren't tied to just one, such as Oracle or SQL Server.

  • Not just an O/R Mapper

    Many products that are O/R Mappers are also much more than that. For instance, they might help you out with data binding (to bind controls in your UI to your objects without any custom code), with business rules, or with bi-directionality management.

  • Advanced caching

    Advanced caching isn't necessarily important to you, but if you do need it, it's nice if it is supported by the chosen product. With advanced caching, I'm not just thinking about the cache in the form of the Identity Map, but also a cache that is used for querying. I mean that when you query for certain customers, for example, that query might execute against the second-level cache instead of touching the database.

  • Advanced support for controlling transactions

    Examples of this feature include how well you can control transactions regarding isolation levels, lessen the deadlock risk by avoiding escalations from read locks to write locks, and whether distributed transactions are supported. Pretty disparate things, but they might be pretty important, no question about that.

    Also interesting in this context is what technique is used for transaction control. Typical options are manual control, interception-based, and hiding the whole thing with a persistence manager.

  • Open source

    Pros and cons of open source versus commercial products are beyond the scope of this book. Still, it's a factor to take into consideration when you're about to choose an O/R Mapper.

  • Version (generation)

    And last on the list, it's important to consider what generation the O/R Mapper is. That might say something about what maturity to expect.
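
The advanced caching point above can be sketched as a query-keyed second-level cache (in Java, with an API invented for illustration): a repeated query is served from the cache instead of touching the database. Real products add invalidation and cache-region policies on top of this.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// A toy second-level query cache: results are keyed by the query text.
class QueryCache {
    private final Map<String, List<Object>> cached = new HashMap<>();
    int databaseHits = 0;  // counts how often we actually "went to the database"

    // Returns the cached result for the query, loading it once on a miss.
    List<Object> fetch(String query, Supplier<List<Object>> database) {
        return cached.computeIfAbsent(query, q -> {
            databaseHits++;
            return database.get();
        });
    }
}
```

The hard part in a real product isn't the lookup shown here but knowing when cached results have become stale, for example after the out-of-band updates discussed earlier in the chapter.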

Keep in mind that "less is sometimes more." The broader a product's scope, the less focused it might be on the details. It doesn't have to be that way, of course; I just want to point out that you can't judge which product is best simply by counting features.

Let's have a look at the classification from a totally different angle.




Applying Domain-Driven Design and Patterns: With Examples in C# and .NET
ISBN: 0321268202
Year: 2006
Pages: 179
Authors: Jimmy Nilsson
