Let's Build the Fake Mechanism

Let's move on in an interface-based way for a while, or at least for a start. I have already touched on a couple of the methods, but let's start from the beginning. A reduced version of the interface of the abstraction layer (which I earlier in the chapter already called IWorkspace) could look like this:

public interface IWorkspace
{
    object GetById(Type typeToGet, object idValue);
    void MakePersistent(object o);
    void PersistAll();
}


So far the whole interface is pretty straightforward. The first method is called GetById() and is used for reconstituting an object from the database. You say what type you expect and the identity value of the object.

The second method, called MakePersistent(), is used for associating new instances with the IWorkspace instance so that they will be persisted at next PersistAll(). Finally, PersistAll() is for persisting what is found in the Unit of Work into the database.

MakePersistent() isn't needed if you have read the instance from the database with GetById(), because then the instance is already associated with the Unit of Work.
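To make the intended usage concrete, here is a sketch of a typical consumer scenario. The `SketchWorkspace` class below is a minimal in-memory stand-in I wrote just for this illustration (it is not the real Fake); it assumes a public `Id` field and uses a static dictionary as its "database":

```csharp
using System;
using System.Collections.Generic;

public interface IWorkspace
{
    object GetById(Type typeToGet, object idValue);
    void MakePersistent(object o);
    void PersistAll();
}

public class Customer
{
    public object Id;
    public string Name;
}

// Minimal in-memory stand-in written for this sketch only;
// it assumes an "Id" field and keeps a static "database".
public class SketchWorkspace : IWorkspace
{
    private readonly List<object> _newInstances = new List<object>();
    private static readonly Dictionary<string, object> _database
        = new Dictionary<string, object>();

    private static string Key(Type t, object id)
    {
        return t.FullName + "#" + id;
    }

    public object GetById(Type typeToGet, object idValue)
    {
        // An instance read this way is already known to the workspace.
        return _database[Key(typeToGet, idValue)];
    }

    public void MakePersistent(object o)
    {
        // New instances must be associated explicitly.
        _newInstances.Add(o);
    }

    public void PersistAll()
    {
        // Flush the Unit of Work to the "database".
        foreach (object o in _newInstances)
        {
            object id = o.GetType().GetField("Id").GetValue(o);
            _database[Key(o.GetType(), id)] = o;
        }
        _newInstances.Clear();
    }
}

public class ApiSketch
{
    public static void Main()
    {
        IWorkspace ws = new SketchWorkspace();

        Customer c = new Customer();
        c.Id = 42;
        c.Name = "Volvo";
        ws.MakePersistent(c);   // new instance: associate it explicitly
        ws.PersistAll();        // persist the Unit of Work

        IWorkspace ws2 = new SketchWorkspace();
        Customer c2 = (Customer)ws2.GetById(typeof(Customer), 42);
        Console.WriteLine(c2.Name);  // Volvo
    }
}
```

Note how the consumer never calls MakePersistent() for the instance it got back from GetById(); that instance is already associated with the workspace.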

So far I think you'll agree that the API is extremely simple, and I think that simplicity is very important for keeping complexity down in this abstraction layer. OK, it's not all that capable yet, so we need to add more.

More Features of the Fake Mechanism

The first thing that springs to mind is that we need to deal with transactions as a very important concept, at least from a correctness standpoint. On the other hand, it's not something that is important to most UI-programmers. (We could swap "UI-programmers" for "UI-code" just as well.) What I mean is that I don't want to put the responsibility for transaction management on the UI-programmer, because it's too much of a distraction for him, and too important to be seen as a distraction. Still, I want an adaptable transaction scope so that there isn't only a predefined set of possible transactions.

So my goals are pretty similar to those of declarative transactions in COM+, but I have chosen a pretty different API. Instead of setting attributes on the classes to describe whether they require transactions or not, I will just say that PersistAll() internally does all its work in an explicit transaction, even though you didn't explicitly ask for it.

I know that on the face of it this feels overly simple to many old-timers. That goes for me as well, because I believe transaction handling is so important that I like to deal with it manually. If the goal is to be able to deal with something like 90% of the situations, however, I think PersistAll() could very well use an explicit transaction, and it's as simple as that.

Again, it sounds way too simplistic, and of course there are problems. One typical problem is logging. Assume that you log to the database server; you don't always want the logging operation to fail if the ordinary work fails. However, that's simple to deal with; you just use a separate workspace instance for the logging. If you want it to use the abstraction layer at all, the logging will probably just be implemented as a Service instead, which probably has nothing to do with the abstraction layer. As a matter of fact, there's a good chance that you will use a third-party product for logging, or perhaps something like log4net [Log4Net]. It's not something that will interfere with or be disturbed by the transaction API of the abstraction layer.

Another problem is that there might well be a need for the GetById() method to live in the same transaction as the upcoming PersistAll(). That won't happen by default, but if you want to force that, you can call the following method before GetById():

void BeginReadTransaction(TransactionIsolationLevel til)


To emphasize this even more, there is also an overload to GetById() to ask for an exclusive lock, but this comes with a warning tag. Make sure you know what you're doing when you use this! For example, there should be no user interaction whatsoever after BeginReadTransaction() or read with exclusive lock and before PersistAll().

But I digress; what is important for the Fake? Because the Fake only targets single-user scenarios, the transactional semantics aren't very important, and for reasons of simplicity those will probably not be dealt with at all. Still, the consumer code can be written with transaction-handling in mind when the Fake is used, of course, so you don't have to change the code when it comes to swapping the Fake for the real infrastructure.

I hear the now very frightened experienced developer exclaim, "Hey, what happened to Rollback()?"

Well, the way I see it, it's not important to have rollback in the API. If PersistAll() is responsible for commit or rollback internally, what will happen then is that when the consumer gets the control back from PersistAll(), all changes or none have been persisted. (The consumer is notified about a rollback by an exception.)

The exception to this is when you have called BeginReadTransaction() and then want to cancel. In that case, you call the following method:

void Clean()


It will roll back the ongoing transaction and will also clean the Unit of Work and the Identity Map. It's a good idea to use Clean() after a failed transaction because there will be no attempt at all in NWorkspace to roll back the changes in Domain Model instances. Sure, it depends upon what the problem with the failed transaction was, but the simple answer is to restart.
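On the consumer side, that "restart on failure" pattern might look like the sketch below. The `FailingWorkspace` class is a stub I wrote only to illustrate the control flow; it is not part of the real API, and the real PersistAll() signals a rollback in the same way, by throwing:

```csharp
using System;

// Stub whose PersistAll() always fails, just to show the pattern.
public class FailingWorkspace
{
    public void PersistAll()
    {
        // All or nothing: a rollback surfaces to the consumer as an exception.
        throw new Exception("Simulated rollback");
    }

    public void Clean()
    {
        // Rolls back any ongoing transaction and clears the
        // Unit of Work and the Identity Map.
        Console.WriteLine("Unit of Work and Identity Map cleared");
    }
}

public class CleanSketch
{
    public static void Main()
    {
        FailingWorkspace ws = new FailingWorkspace();
        try
        {
            ws.PersistAll();
        }
        catch (Exception)
        {
            // No attempt is made to roll back the in-memory Domain Model
            // instances, so the simple answer is to start the scenario over.
            ws.Clean();
        }
    }
}
```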

Some problems can lead to a retry within PersistAll(). In the case of a deadlock, for example, PersistAll() can retry a couple of times before deciding it was a failure. This is yet another thing that simplifies life for the consumer programmer so that she can focus on what is important to her, namely to create a good user experience, not following lots and lots of protocols.
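Internally, that retry could be sketched as follows. Note that `DeadlockException`, the retry count, and the backoff are all my assumptions for illustration; a real implementation would map the database driver's deadlock error to some such exception type:

```csharp
using System;
using System.Threading;

// Hypothetical exception type; a real implementation would translate
// the database driver's deadlock error into something like this.
public class DeadlockException : Exception { }

public class RetryingWorkspace
{
    private int _attemptsSoFar = 0;

    // Simulates work that deadlocks twice before succeeding.
    private void PersistAllWithinTransaction()
    {
        _attemptsSoFar++;
        if (_attemptsSoFar < 3)
            throw new DeadlockException();
    }

    public void PersistAll()
    {
        const int maxRetries = 3;
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                PersistAllWithinTransaction();
                return;  // success
            }
            catch (DeadlockException)
            {
                if (attempt > maxRetries)
                    throw;  // give up; the consumer sees the failure
                Thread.Sleep(50 * attempt);  // brief backoff before retrying
            }
        }
    }
}
```

The point is that the consumer just calls PersistAll() and either it succeeds or it throws; the retry protocol stays hidden.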

Now we have talked a lot about the functionality that is important for NWorkspace, but not for the Fake version of NWorkspace. Let's get back on track and focus for a while on the Fake and its implementation instead.

The Implementation of the Fake

I'm not going to drag you through the details of the Fake implementation; I'm just going to talk conceptually about how it's built, mostly to give you a feeling for the basic idea and how it can be used.

The fake implementation uses two layers of Identity Maps. The first layer is pretty similar to ordinary Identity Maps in persistence frameworks, and it keeps track of all Entities that you have read within the current scenario. The second layer of Identity Maps is for simulating the persistent engine, so here the instances aren't kept on a scenario level, but on a global level (that is, the same set of Identity Maps for all scenarios).

So when you issue a call to GetById(), if the ID is found in the Identity Map for the requested type, there won't be a roundtrip to the database (or in case of the Fake, there won't be a jump to the second layer of Identity Maps). On the other hand, if the ID isn't found in the first layer of Identity Maps, it's fetched from the second layer, copied to the first layer, and then returned to the consumer.

MakePersistent() is pretty simple; the instance is just associated with the first layer of Identity Maps. And when it's time for PersistAll(), all instances in the first layer are copied to the second layer. Simple and clean.
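The two layers could be sketched like this. This is a deliberately simplified version I wrote for illustration; among other things it shares references instead of copying instances between the layers, which the real Fake would not do, and it assumes the `Id` property convention discussed below:

```csharp
using System;
using System.Collections.Generic;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class TwoLayerFake
{
    // First layer: per-scenario Identity Map (the Unit of Work's view).
    private readonly Dictionary<string, object> _scenarioMap
        = new Dictionary<string, object>();

    // Second layer: simulated persistent engine, shared by all
    // scenarios (hence static).
    private static readonly Dictionary<string, object> _globalMap
        = new Dictionary<string, object>();

    private static string Key(Type t, object id)
    {
        return t.FullName + "#" + id;
    }

    public object GetById(Type typeToGet, object idValue)
    {
        string key = Key(typeToGet, idValue);
        object found;
        if (_scenarioMap.TryGetValue(key, out found))
            return found;  // hit in the first layer: no "roundtrip" needed
        // Miss: fetch from the second layer, copy down, and return.
        found = _globalMap[key];
        _scenarioMap[key] = found;
        return found;
    }

    public void MakePersistent(object o)
    {
        object id = o.GetType().GetProperty("Id").GetValue(o, null);
        _scenarioMap[Key(o.GetType(), id)] = o;
    }

    public void PersistAll()
    {
        // Copy everything in the first layer up to the second.
        foreach (KeyValuePair<string, object> pair in _scenarioMap)
            _globalMap[pair.Key] = pair.Value;
    }
}
```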

This describes the basic functionality. Still, it might be interesting to say a bit about what's troublesome, also. One example is that I don't want the Fake to influence the Domain Model in any way at all. If it does, we're back to square one, adding infrastructure-related distractions to the Domain Model, or even worse, Fake-related distractions.

One example of a problem is that I don't know which field(s) hold the identity of a class. In the case of the real infrastructure, it will probably know that from some metadata. I could read that same metadata in the Fake to find out, but then the Fake must know how to deal with (theoretically) several different metadata formats, and I definitely don't like that.

The simplistic solution I've adopted is to assume a property (or field) called Id. If the developer of the Domain Model has used another convention, it could be described to the Fake at instantiation of the FakeWorkspace.
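A reflection-based sketch of that convention could look like this. The `IdReader` helper and its constructor parameter are my inventions for illustration; the real Fake would carry this logic internally:

```csharp
using System;
using System.Reflection;

public class IdReader
{
    private readonly string _idMemberName;

    // Defaults to the "Id" convention; another name can be supplied
    // when the fake workspace is instantiated.
    public IdReader() : this("Id") { }

    public IdReader(string idMemberName)
    {
        _idMemberName = idMemberName;
    }

    public object GetId(object entity)
    {
        Type t = entity.GetType();
        // Try a property first, then fall back to a field.
        PropertyInfo property = t.GetProperty(_idMemberName);
        if (property != null)
            return property.GetValue(entity, null);
        FieldInfo field = t.GetField(_idMemberName);
        if (field != null)
            return field.GetValue(entity);
        throw new InvalidOperationException(
            "No identity member named '" + _idMemberName + "' on " + t.Name);
    }
}

public class Order
{
    public int OrderNo { get; set; }
}

public class IdSketch
{
    public static void Main()
    {
        Order o = new Order();
        o.OrderNo = 7;
        // Overriding the default "Id" convention for this Domain Model.
        IdReader reader = new IdReader("OrderNo");
        Console.WriteLine(reader.GetId(o));  // 7
    }
}
```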

Again, this was more information than you probably wanted right now, but it leads us to the important fact that there are additional things that you can/need to do in the instantiation phase of the Fake compared to the infrastructure implementations of NWorkspace.

To take another example, you can read from file/save to file like this:

//Some early consumer
IWorkspace ws = new
    NWorkspaceFake.FakeWorkspace("c:/temp/x.nworkspace");

//Do stuff...

((NWorkspaceFake.FakeWorkspace)ws).
    PersistToFile("c:/temp/x.nworkspace");


We talked quite a lot about PI and the Fake mechanism in a way that might lead you to believe that you must go for a PI-supporting infrastructure later on if you choose to use something like the Fake mechanism now. This is not the case at all. It's not even true that non-PI-supporting infrastructure makes it harder for you to use TDD. It's traditionally the case, but not a must.

Speaking of TDD, has the Fake affected our unit tests much yet?

Affecting the Unit Tests

Nope, certainly not all tests will be affected. Most tests should be written with classes in the Domain Model in as isolated a way as possible, without a single call to Repositories. For instance, they should be written during development of all the logic that should typically be around in the Domain Model classes. Those tests aren't affected at all.

The unit tests that should deal with Repositories are affected, and in a positive way. It might be argued that these are more about integration testing, but it doesn't have to be that way. Repositories are units, too, and therefore tests on them are unit tests.

And even when you do integration testing with Repositories involved, it's nice to be able to write the tests early and to write them (and the Repositories) in a way so that it is possible to use them when you have infrastructure in place as well.

I think a nice goal is to get all the tests in good shape so that they can run both with the Fake mechanism and the infrastructure. That way you can execute with the Fake mechanism in daily work (for reasons of execution time) and execute with the infrastructure a couple of times a day and at the automatic builds at check in.

You can also work quite a long way without infrastructure in place. You must, of course, also think a bit about persistence, and especially for your first DDD projects, it's important to work iteratively from the beginning. But when you get more experience, delaying the addition of the persistence will give you the shortest development time in total and the cleanest code.

This also gives you the possibility of another refactoring rhythm, with more instant feedback whether you like the result or not. First, you get everything to work with the Fake (which is easier and faster than getting the whole thing, including the database, to the right level), and if you're happy then you proceed to get it all to work with the infrastructure. I believe the big win is that this will encourage you to be keener to do refactorings that would normally just be a pain, especially when you are unsure about the outcome. Now you can give them a try pretty easily.

Of course, trying to avoid code duplication is as important for unit tests as it is for the "real" code. (Well, at least close to as important. There's also a competing desire to "show it all" inline.) Therefore I only want to write the tests once, but be able to execute them both for the Fake and for the real infrastructure when it's in place. (Please note that this only goes for some of the tests, of course. Most of the tests aren't about the Repositories at all, so it's important that you partition your tests with this aspect in mind.)

Structure of the Repository-Related Tests

One way of approaching this is to write a base class for each set of tests that are Repository-affecting. Then I use the Template Method pattern for setting up an IWorkspace the way I want. It could look like this, taking the base class first:

[TestFixture]
public abstract class CustomerRepositoryTestsBase
{
    private IWorkspace _ws;
    private CustomerRepository _repository;

    protected abstract IWorkspace _CreateWorkspace();

    [SetUp]
    public void SetUp()
    {
        _ws = _CreateWorkspace();
        _repository = new CustomerRepository(_ws);
    }

    [TearDown]
    public void TearDown()
    {
        _ws.Clean();
    }
}


Then the subclass, which looks like this:

public class CustomerRepositoryTestsFake :
        CustomerRepositoryTestsBase
{
    protected override IWorkspace _CreateWorkspace()
    {
        return new FakeWorkspace("");
    }
}


OK, that was the plumbing for the tests related to Repositories, but what about the tests themselves? Well, there are several different styles from which to choose, but the one I prefer is to define as much as possible in the base class, while at the same time making it possible to decide in the subclass if a certain test should be implemented or not at the moment. I also want it to be impossible to forget to implement a test in the subclass. With those requirements in place, my favorite style looks like this (first, how a simplified test looks in the base class):

[Test]
public virtual void CanAddCustomer()
{
    Customer c = new Customer();
    c.Name = "Volvo";
    c.Id = 42;
    _repository.Add(c);
    _ws.PersistAll();
    _ws.Clean();

    //Check
    Customer c2 = _repository.GetById(c.Id);
    Assert.AreEqual(c.Name, c2.Name);

    //Clean up
    _repository.Delete(c2);
    _ws.PersistAll();
}


Note that the second level of Identity Maps isn't cleared when new FakeWorkspace("") is executed, because the second-level Identity Maps of the Fake are static and are therefore not affected when the Fake instance is recreated. That's just how it is with a database, of course. Just because you open a new connection doesn't mean the Customers table is cleared.

So it's a good thing that the Fake works in this way, because then I will need to clean up after the tests with the Fake just as I will when I'm using the tests with a real database, if that's the approach I'm choosing for my database testing.

Of course, IWorkspace must have Delete() functionality, which I haven't discussed yet, otherwise it won't be possible to do the cleaning up. As a matter of fact, in all its simplicity the Delete() is quite interesting because it requires an Identity Map of its own for the Fake in the Unit of Work. Instances that have been registered for deletion will be held there until PersistAll(), when the deletion is done permanently.

To support this, IWorkspace will get a method like this:

void Delete(object o);


Unfortunately, it also introduces a new problem. What should happen to the relationships of the deleted object? That's not simple. Again, more metadata is needed to determine how far the delete should cascade; metadata that is around for the real infrastructure, but not useful here. (The convention currently used for the Fake is not to cascade.)
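The deletion bookkeeping can be sketched like this. Again this is a simplified illustration of my own (no cascading, per the Fake's convention, and the `ExistsInStore()` probe exists only for the sake of the sketch):

```csharp
using System;
using System.Collections.Generic;

public class Item
{
    public int Id { get; set; }
}

public class DeletingFake
{
    // First-layer Identity Map (the Unit of Work's view).
    private readonly Dictionary<string, object> _scenarioMap
        = new Dictionary<string, object>();

    // The extra Identity Map holding instances registered for deletion.
    private readonly Dictionary<string, object> _toBeDeleted
        = new Dictionary<string, object>();

    // Second layer: the simulated persistent engine.
    private static readonly Dictionary<string, object> _globalMap
        = new Dictionary<string, object>();

    private static string Key(object o)
    {
        object id = o.GetType().GetProperty("Id").GetValue(o, null);
        return o.GetType().FullName + "#" + id;
    }

    public void MakePersistent(object o)
    {
        _scenarioMap[Key(o)] = o;
    }

    public void Delete(object o)
    {
        string key = Key(o);
        _scenarioMap.Remove(key);
        _toBeDeleted[key] = o;  // held here until PersistAll()
    }

    public void PersistAll()
    {
        foreach (KeyValuePair<string, object> pair in _scenarioMap)
            _globalMap[pair.Key] = pair.Value;
        // Deletions become permanent only now.
        foreach (string key in _toBeDeleted.Keys)
            _globalMap.Remove(key);
        _toBeDeleted.Clear();
    }

    // Probe added only for this sketch.
    public bool ExistsInStore(object o)
    {
        return _globalMap.ContainsKey(Key(o));
    }
}
```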

OK, back to the tests. In the subclass, I typically have one of three choices when it comes to the CanAddCustomer() test. The first alternative is to do nothing, in which case I'm going to run the test as it's defined in the base class. This is hopefully what I want.

The second option should be used if you aren't supporting a specific test for the specific subclass for the time being. Then it looks like this in the subclass:

[Test, Ignore("Not supported yet...")]
public override void CanAddCustomer() {}


This way, during test execution it will be clearly signaled that it's just a temporary ignore.

Finally, if you "never" plan to support the test in the subclass, you can write it like this in the subclass:

[Test]
public override void CanAddCustomer()
{
    Console.WriteLine
        ("CanAddCustomer() isn't supported by Fake.");
}


OK, you can still forget a test if you really try. You'd have to write code like this, skipping the Test attribute:

public override void CanAddCustomer() {}


There's a solution to this, but I find the style I have shown you to be a good balance of amount of code and the risk of "forgetting" tests.

I know, this wasn't YAGNI, because right now we don't have any implementation other than the Fake implementation, but see this just as a quick indicator for what will happen later on.

Note

This style could just as well be used in the case of multiple implementations for each Repository.


For many applications, it might be close to impossible to deal with the whole system in this way, especially later on in the life cycle. For many applications, it's perhaps only useful for early development testing and early demos, but even so, if it helps with this, it's very nice. If it works all the way, it's even nicer.

In real life, the basics (there are always exceptions) are that I'm focusing on writing the core of the tests against the Fake implementation. I also write CRUD tests against the Fake implementation, but in those cases I also inherit to tests for using the database. That way, I test out the mapping details.

That said, no matter if you use something like the ideas of the abstraction layer or not, you will sooner or later run into the problems of database testing. I asked my friend Philip Nelson to write a section about it. Here goes.




Applying Domain-Driven Design and Patterns: With Examples in C# and .NET
ISBN: 0321268202
Year: 2006
Pages: 179
Authors: Jimmy Nilsson
