Mocks and Stubs

I said that we would talk about database testing later in the book. It's actually already time for the first incarnation of that. Here Claudio Perrone will discuss how to apply the technique of stubs and mocks in your testing. In the previous section, I talked about what is called state-based testing. Here Claudio will focus on interaction-based testing.

So as not to get ahead of ourselves, Claudio will use a classic approach for his database-related examples before we really dive into using the Domain Model pattern in the next chapter.

Over to Claudio.

A Typical Unit Test

By Claudio Perrone

A common approach for testing the behavior of an object is to set it up with relevant context information, call one of its methods, and write a few assertions to check the return value or to verify that the method changed the state of the environment as expected.

The following example, which tests the SaveUser() method of the UserBC business component, illustrates this approach:

[Test, Rollback] //Automatic rollback
                 //(using Services w/o components)
public void TestUserBCCanSaveUser()
{
    // Setting up context information (preconditions)
    IUserInfo user = new UserInfo(
        "Claudio", "Perrone", "MyUniqueLogin", "MyPassword");

    // I'm not testing yet, just clarifying initial state
    Assert.IsTrue(user.IsNew);
    Assert.AreEqual(NEW_OBJECT_ID, user.ID);
    Assert.IsFalse(
        IsUserInsertedInDataBase(
            "Claudio", "Perrone", "MyUniqueLogin", "MyPassword"));

    // Preconditions are ok - now exercising method to test
    UserBC bcUser = new UserBC();
    bcUser.SaveUser(user);

    // Verifying post-conditions (state changed as expected)
    Assert.IsFalse(user.IsNew);
    Assert.IsTrue(user.ID != NEW_OBJECT_ID);
    Assert.IsTrue(
        IsUserInsertedInDataBase(
            "Claudio", "Perrone", "MyUniqueLogin", "MyPassword"));
}


The Rollback attribute is not provided with the current version of NUnit (although it is on their roadmap). I used a modified version of one written by Roy Osherove.

Because the UserBC business component takes a business entity and saves it to the database, the test checks its behavior by creating the entity, passing it as a parameter to the SaveUser() method, and verifying that the entity is persisted to the data store.

Declaration of Independence

A potential problem with this testing style is that objects very rarely operate in isolation. Other objects that our object under test depends on often carry out part of the work.

To illustrate this issue with our example, we could implement the UserBC class as follows (for convenience, all exception-handling code is omitted):

public class UserBC
{
    public void SaveUser(IUserInfo user)
    {
        user.Validate();
        UserDao daoUser = new UserDao();
        if (user.IsNew)
            daoUser.Insert(user);
        else
            daoUser.Update(user);
    }
}

In this case, UserBC delegates the user validation to the UserInfo business entity. If one or more business rules are broken, UserInfo will throw a custom exception containing a collection of validation errors. UserBC also delegates the persistence of the business entity to the UserDao data access object that has responsibility for abstracting all database implementation details.

As IUserInfo is an explicit parameter of the SaveUser() method, we can argue that UserBC explicitly declares a dependency on the IUserInfo interface (implemented by the UserInfo class). However, the dependency on UserDao is somewhat hidden inside the implementation of the UserBC class.

To complicate matters further, dependencies are transitive. If UserBC depends on UserInfo and, for example, UserInfo needs a set of BusinessRule objects to validate its content, UserBC depends on BusinessRule. A bug in a BusinessRule object could suddenly break our test and several others at the same time.

Indeed, the complex logic handled by a Domain Model is often implemented through a chain of objects that forward part of the behavior to other collaborating objects until the required result is created. Consequently, unit tests that aim at verifying the behavior of an object with many dependencies might fail if one of the objects in the chain has bugs. As a result, it is sometimes difficult to identify the cause of an error, as the tests are essentially small-scale integration tests, rather than "pure" unit tests.

Working with Difficult Team Members

There is another problem related to collaborators that is very common in this world of distributed applications. How can we test an object's behavior when the method we try to exercise depends on components or conditions that are difficult to recreate? For example, we might have components that

  • Have not yet been implemented

  • Are difficult to set up or test (for example, user interface components or messaging channels)

  • Are too slow (such as data access layer components, service agents, or distributed components)

  • Contain behavior that is difficult to reproduce (such as intermittent network connectivity or concurrency issues)

Replacing Collaborators with Testing Stubs

A viable solution to the previous problems is to replace some or all of the collaborators with testing stubs. A testing stub is a simulation of a real object used for testing purposes. It provides a mechanism to set up expected values to supply to the code being tested.

Let's go back to our example. In this case, we have two close collaborators in the SaveUser() method: UserInfo and UserDao. UserInfo is easy to substitute, as our test simply needs to provide an object that implements the IUserInfo interface and does not throw exceptions when its Validate() method is called (unless, of course, we want to test the behavior of the SaveUser() method when a validation exception occurs).
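Such a substitute can be as simple as a class that stores the canned values and whose Validate() never throws. The sketch below is an illustration only; the exact IUserInfo members are assumed from the surrounding examples:

```csharp
public class UserInfoStub : IUserInfo
{
    private int _id = NEW_OBJECT_ID;
    private bool _isNew = true;
    private string _firstName;

    public UserInfoStub(string firstName, string lastName,
        string login, string password)
    {
        _firstName = firstName;
        // remaining values stored the same way
    }

    public int ID
    {
        get { return _id; }
        set { _id = value; }
    }

    public bool IsNew
    {
        get { return _isNew; }
    }

    public void Validate()
    {
        // Intentionally empty: no broken rules are simulated,
        // so SaveUser() always proceeds past validation
    }

    // . . . other IUserInfo members omitted
}
```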

UserDao is much more difficult to replace because the construction of the object is embedded inside the UserBC class. We really want to substitute this collaborator, however, because its access to the database will slow down the execution of our tests. Additionally, checking values in the database is time consuming and possibly of value only when testing UserDao in isolation or within an integration test. One viable solution is to extract the interface from UserDao and add a constructor to the UserBC class that sets an explicit dependency on IUserDao.
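The extracted interface itself isn't shown in the text; based on the calls UserBC makes, a minimal version would look like this sketch:

```csharp
// Extracted from UserDao: only the operations that UserBC
// actually calls need to be part of the interface
public interface IUserDao
{
    void Insert(IUserInfo user);
    void Update(IUserInfo user);
}
```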

The required code is very simple, with very low probability of introducing bugs. In some cases, you may even consider writing it in addition to the existing constructor(s) for testing purposes only.

public class UserBC
{
    private IUserDao _daoUser;

    // Default constructor
    public UserBC()
    {
        _daoUser = new UserDao();
    }

    // Constructor used by testing code
    public UserBC(IUserDao daoUser)
    {
        _daoUser = daoUser;
    }

    public void SaveUser(IUserInfo user)
    {
        user.Validate();
        if (user.IsNew)
            _daoUser.Insert(user);
        else
            _daoUser.Update(user);
    }
}

Now we can easily create a couple of stubs that implement IUserInfo and IUserDao. Their implementation is trivial, so let's examine UserDaoStub only.

public class UserDaoStub : IUserDao
{
    private IUserInfo _userResult = null;

    // Note: Test will need to set the expected value
    // to be returned when Insert is called
    public IUserInfo UserResult
    {
        get { return _userResult; }
        set { _userResult = value; }
    }

    public void Insert(IUserInfo user)
    {
        // Note: Before calling this method our test will need to
        // set up the UserResult property with the expected values
        user.ID = UserResult.ID;
        user.Name = UserResult.Name;
        // etc
    }

    . . .
}

There are a few important aspects to consider at this point.

First, our initial test was setting a couple of expectations about the state of the IUserInfo object as a result of calling the SaveUser() method. If we modify our test so that it uses a UserDaoStub object, we will also need to set up the expected result by setting the UserResult property before the Insert() method is executed.

A second aspect to consider is that the dependency on the database is now completely removed and our initial test is now much faster.

Because SaveUser() delegates most of its behavior, however, such a test provides virtually no value. Actually, on second thought, I would also add that we are committing the deadly (but unfortunately common) sin of putting assertions on values returned from our stubs. So we are effectively testing our stubs rather than UserBC!
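To make this concrete, the stub-based rewrite of the initial test would look roughly like the sketch below (assuming a trivial UserInfoStub implementation of IUserInfo). Notice how the final assertion merely reads back a value we loaded into the stub ourselves:

```csharp
[Test] // No database, so no [Rollback] needed
public void TestUserBCCanSaveUserWithStubs()
{
    IUserInfo user = new UserInfoStub(
        "Claudio", "Perrone", "MyUniqueLogin", "MyPassword");

    // Tell the stub what its Insert() should write back
    UserDaoStub daoUser = new UserDaoStub();
    IUserInfo expected = new UserInfoStub(
        "Claudio", "Perrone", "MyUniqueLogin", "MyPassword");
    expected.ID = 42; // any value different from NEW_OBJECT_ID
    daoUser.UserResult = expected;

    UserBC bcUser = new UserBC(daoUser);
    bcUser.SaveUser(user);

    // This "passes" only because we supplied 42 above -
    // we are verifying the stub, not UserBC
    Assert.AreEqual(42, user.ID);
}
```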

On the other hand, it's a different story if we want to know how our UserBC class reacts when the data access layer throws an exception such as DalUniqueConstraintException, a situation that could occur if the user login already exists in the database.

The Insert() method in the stub class now becomes

//UserDaoStub
public void Insert(IUserInfo user)
{
    if (ThrowDalUniqueConstraintExceptionOnInsert)
        throw new DalUniqueConstraintException();

    user.ID = UserResult.ID;
    user.Name = UserResult.Name;
    // etc
}

Assume we'd like the UserBC to catch the DalUniqueConstraintException and throw a proper business exception to maintain the abstraction of the layers. The following test illustrates this scenario:

[Test] // Note no database is needed anymore!
[ExpectedException(
    typeof(BusinessException),
    "The provided login already exists.")]
public void TestUserBCSaveUserThrowsBusinessException()
{
    // Setting stubs to remove all dependencies
    IUserInfo user = new UserInfoStub(
        "Claudio", "Perrone", "Login", "MyPassword");
    UserDaoStub daoUser = new UserDaoStub();
    daoUser.ThrowDalUniqueConstraintExceptionOnInsert = true;

    // Executing test - it should throw a business exception
    UserBC bcUser = new UserBC(daoUser);
    bcUser.SaveUser(user);
}

As you might expect, the test is very fast, doesn't require access to the database or other collaborators, and allows us to quickly verify the functionality of the class under test in a condition that is potentially hard to recreate. Simulating other conditions becomes a very simple exercise.
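For completeness, the corresponding change inside UserBC could be sketched as follows (the BusinessException constructor signature shown here is an assumption):

```csharp
public void SaveUser(IUserInfo user)
{
    user.Validate();
    try
    {
        if (user.IsNew)
            _daoUser.Insert(user);
        else
            _daoUser.Update(user);
    }
    catch (DalUniqueConstraintException)
    {
        // Translate the data access exception into a business
        // exception so callers never see DAL-specific types
        throw new BusinessException(
            "The provided login already exists.");
    }
}
```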

This brings us to an important lesson about stubs. Using testing stubs allows us to isolate the code under test and to observe how it reacts to the external conditions simulated by the faked collaborators.

Replacing Collaborators with Mock Objects

A notable variation on stubs is the concept of a mock object. A mock object is a simulation of a real object. It replaces a collaborator and provides expected values to the code under test. In addition, it supplies a mechanism to set up expectations about how it should be used and can provide some self-validation based on those expectations.

To illustrate what this means, let's create a test that focuses on the interaction of UserBC with the collaborating objects. This time we will use a popular .NET mock objects framework called NMock [NMock]. A nice feature of this open source framework is that it uses reflection to generate mocks dynamically from existing classes or interfaces at run time.

[Test]
public void TestUserBCSaveUserInteractsWell()
{
    // (1 - Setup) Create mocks dynamically based on interface
    DynamicMock mockUser =
        new NMock.DynamicMock(typeof(IUserInfo));
    DynamicMock mockUserDao =
        new DynamicMock(typeof(IUserDao));

    // Set up canned values (same as stubs)
    mockUser.SetupResult("FirstName", "Claudio",
        typeof(string));
    . . .
    mockUser.SetupResult("ID", NEW_OBJECT_ID, typeof(int));
    mockUser.SetupResult("IsNew", true, typeof(bool));

    // Generate mock instances (need to cast)
    IUserInfo user = (IUserInfo) mockUser.MockInstance;
    IUserDao daoUser = (IUserDao) mockUserDao.MockInstance;

    // (2 - Expectations) How we expect UserBC to deal with
    //                    mocks
    mockUser.Expect("Validate");
    mockUserDao.Expect("Insert", user);
    mockUserDao.ExpectNoCall("Update", typeof(IUserInfo));

    // (3 - Execute) Executing method under test
    UserBC bcUser = new UserBC(daoUser);
    // Unexpected calls on the mocks will fail here
    // (e.g. calling Validate twice)
    bcUser.SaveUser(user);

    // (4 - Verify) Checks that all expectations have been met
    mockUser.Verify();
    mockUserDao.Verify(); // Note: No need for assertions
}


NMock either produces a class that implements an interface or generates a subclass of a real class. In both cases, we then use polymorphism to replace the real class with an instance of the generated class. Although really powerful, this approach presents some notable limitations. For example, it is not possible to create mocks of sealed classes, and mocked methods must be marked as virtual.

An interesting alternative to NMock is a commercial framework called POCMock [POCMock] from Pretty Objects. In this case, mocks are created by replacing collaborators statically.

As you can see, there is no need for assertions in our test because they are located inside the mock objects and are designed to ensure that the mocks are called as expected by the code under test. For example, calling Validate() twice would immediately throw an exception even before the call to Verify() is made.

This example leads us to the following observation: As mock objects verify whether they are used correctly, they allow us to obtain a finer understanding of how the object under test interacts with the collaborating objects.

Design Implications

When I first learned about mock objects, I thought that it was particularly tricky to come up with an easy mechanism to replace my collaborators. The fundamental problem was that my code was coupled with the concrete implementation of those collaborators rather than their interfaces. The "aha!" moment came when I discovered two key mechanisms called Dependency Injection and Service Locator.

The first principle, Dependency Injection, suggests that a class explicitly declares the interfaces of its collaborators (for example, in the constructor or as parameters in a method) but leaves the responsibility for the creation of their concrete implementation to the container. Because the class is not in control of the creation of its collaborators anymore, this principle is also known as Inversion of Control.

The second principle, Service Locator, means that a class internally locates its concrete collaborators through the dependency on another object (the locator). A simple example of a locator could be a factory object that uses a configuration file to load the required collaborators dynamically.
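As an illustration, a very small locator might look like the following sketch (in practice the concrete type name would come from a configuration file rather than being hard-coded):

```csharp
public class ServiceLocator
{
    // Returns the concrete IUserDao implementation. A real
    // locator would read the type name from configuration and
    // create the instance with Activator.CreateInstance()
    public static IUserDao GetUserDao()
    {
        return new UserDao();
    }
}

// UserBC could then obtain its collaborator like this:
// _daoUser = ServiceLocator.GetUserDao();
```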


There will be much more coverage about Dependency Injection, Inversion of Control, and Service Locator in Chapter 10, "Design Techniques to Embrace."


Mock objects and stubs permit you to further isolate the code that needs to be tested. This is an advantage, but it can also be a limitation, as such tests can occasionally hide integration problems. Although these tests tend to run faster, they are often coupled too closely to the system under test. Consequently, they tend to become obsolete as soon as a better implementation of the system under test is found.

Refactoring activities aimed at introducing mocks tend to decouple objects from particular implementations of their dependencies. Although it is generally possible to create mocks from classes containing virtual methods, using interfaces is usually recommended whenever possible. As a result, it is not rare to find that systems designed to be tested with mock objects contain a significant number of interfaces introduced for testing purposes only.

Further Information

For a more in-depth discussion of the differences between mock objects and stubs (and state-based versus interaction-based testing), see Martin Fowler's "Mocks Aren't Stubs" [Fowler Mocks Aren't Stubs].

Thanks Claudio! We are strengthened with yet another tool; can we resist getting yet one more? Next up is refactoring.

Applying Domain-Driven Design and Patterns: With Examples in C# and .NET
Author: Jimmy Nilsson
ISBN: 0321268202
Year: 2006