Database Testing


By Philip Nelson

At some point when you are working with DDD and all the various flavors of automated testing, you will probably run into a desire to run a test that includes access to a database. When you first encounter this, it seems like no problem at all. Create a test database, point your application at the test database, and start testing.

Since we are focusing on automated testing, let's assume you wrote your first test with JUnit or NUnit. This test just loaded a User domain object from the database and verified that all the properties were set correctly. You can run this test over and over again, and it works every time. Now you write a test to verify that you can update the fields, and because you know you have a particular object in the database, you update that object.

The test works, but you have introduced a problem. The read test no longer works because you have just changed the underlying data, and it no longer matches the expectations of your first test. OK, that's no problem; these tests shouldn't share the same data, so you create a different User for the update test. Now both tests are independent of each other, and this is important: at all times you need to ensure that there are no residual effects between tests. Database-backed tests always have preconditions that you must meet. Somewhere in the process of writing your tests, you must account for the fact that you have to reset the database to meet the preconditions the test expects. More details on that to come.

At some point, you will have written tests for all the basic User object life cycle states, let's say create, read, update, and delete (CRUD). In your application, there may very well be other database-backed operations. For example, the User object may help enforce a policy of maximum failed login attempts. As you test the updating of the failed login attempts and last login time fields, you realize that your update actually isn't working. The update test didn't catch the problem because the database already had the field set to the expected value. As the smart person you are, you figured this out very quickly. However, young Joe down the hall has just spent four hours working on the problem. His coding skills aren't the problem so much as his understanding of how the code connects to the database and of how to isolate the data in this test from all the other test data he now has. He just doesn't notice that the LastUpdateDate field is not reset between tests. After all, the code is designed to hide the database as an implementation detail, right?

At this point, you start to realize that just having separate test data for each test is going to be more complicated than you want it to be. You may or may not have understood as clearly as necessary just how important it was to reset the data between tests, but now you do. Fortunately, your xUnit test framework has just the thing for you. There is a setup and teardown code block that is just made for this sort of thing. But like most things in programming, there is more than one way to do it, and each has its strengths and weaknesses. You must understand the tradeoffs and determine the balance that suits you best.

I would categorize the techniques available to you in four ways:

  • Reset the database before each test.

  • Maintain the state of the database during the run.

  • Reset the data for just the test or set of tests you are running before the run.

  • Separate the testing of the unit from the testing of the call to the database.

Reset the Database Before Each Test

At first glance, this might seem the most desirable, but possibly the most time-consuming, option. The plus to this approach is that the whole system is in a known state at the start of the test. You don't have to worry about strange interactions during test runs because they all start at the same place. The downside is time. After you have gotten past the initial parts of your project, you will find yourself waiting while your test suite runs. If you are doing unit testing and running your tests after each change, this can very quickly become a significant part of your time. There are some options that can help. How they apply to you will depend on many things, from the type of architecture you use to the type of database system you use.

One simple but slow approach is to restore the database from backup before each test. With many database systems, this is not practical because of the amount of time it takes. However, there are some systems where it is possible. If you are programming against a file-based database, for example, you may only need to close your connections and copy a file to get back to the original state. Another possibility is an in-memory database, such as HSQL for Java, that can be restored very quickly. Even if you are using a more standard system like Oracle or SQL Server, if your design for data access is flexible enough, you may be able to switch to an in-memory or file-based database for your tests. This is especially true if you are using an O/R Mapper that takes the responsibility of building the actual SQL calls for you and knows multiple dialects.
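
For a file-based database, the restore can be as simple as copying the clean file back into place during test setup. The following minimal sketch assumes NUnit, SQLite through the System.Data.SQLite provider, and file paths invented for the example; none of these specifics come from the text above.

using System.Data;
using System.Data.SQLite;   // assumed file-based database provider
using System.IO;
using NUnit.Framework;

[TestFixture]
public class UserReadTests
{
    // A pristine copy of the database that no test ever touches.
    private const string CleanDbFile = @"C:\TestData\clean.db";
    // The working copy the tests actually hit.
    private const string WorkDbFile = @"C:\TestData\work.db";

    [SetUp]
    public void RestoreDatabaseFile()
    {
        // Copying the clean file back is all it takes to return to a known state.
        File.Copy(CleanDbFile, WorkDbFile, true);
    }

    private static IDbConnection CreateConnection()
    {
        return new SQLiteConnection("Data Source=" + WorkDbFile);
    }

    [Test]
    public void CanLoadUser()
    {
        using (IDbConnection connection = CreateConnection())
        {
            connection.Open();
            // ... exercise the data access code against the freshly restored file ...
        }
    }
}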

The project DbUnit offers another way to reset a database before the test runs. Essentially, it's a framework for JUnit that allows you to define "data sets" that are loaded onto clean tables before each test run. Ruby on Rails has a similar system that allows you to describe your test class data in the open YAML (YAML Ain't Markup Language) format and apply it during test setup. These tools can work well, but you may start running into problems because these inserts are logged operations, which can make them too slow. Another approach that could work for you is to use bulk load utilities that may not be as extensively logged. Here is how this might work for SQL Server. First, set the recovery model of your test database to Simple Recovery. This eliminates most logging and improves performance. Then, during test design, do the following:

  • Insert data into a test database that will only act as a source of clean data for your tests

  • Call Transact-SQL commands that export the test data to files

  • Write Transact-SQL commands that bulk insert the data from those files

Then, during the test setup method, do the following (a sketch in code follows the list):

  • Truncate all your tables. This is not a logged operation and can be very fast.

  • Issue the bulk copy commands created earlier to load the test data into the test database.
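
Putting those two setup steps into code might look like the following sketch; the table name, data file path, and connection string are assumptions made for the example, not part of the original text.

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class UserPersistenceTests
{
    private const string ConnectionString =
        "Data Source=localhost;Initial Catalog=TestDb;Integrated Security=SSPI";

    [SetUp]
    public void ResetTestData()
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        {
            connection.Open();

            // Truncate is minimally logged and therefore fast, but it will fail on
            // tables referenced by foreign keys; those need DELETE or constraint handling.
            Execute(connection, "TRUNCATE TABLE dbo.Users");

            // Reload the clean data exported earlier by the bulk copy commands.
            Execute(connection,
                @"BULK INSERT dbo.Users FROM 'C:\TestData\Users.dat' WITH (DATAFILETYPE = 'native')");
        }
    }

    private static void Execute(SqlConnection connection, string sql)
    {
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            command.ExecuteNonQuery();
        }
    }
}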

A variation of this technique is to set up your test database and load the initial test data using normal data management techniques. Then, provided your database supports such an operation, detach the underlying data files from the server. Make a copy of these files as these will be the source of your clean test data. Then, write code that allows you to

  • Detach the database from its data files

  • Copy the clean test data over the actual server files

  • Reattach the database using the freshly copied files

In many cases, this operation is very fast and can be run in the test fixture setup or, if the amount of data isn't too large, in the test setup itself.
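
Scripted from test code, the detach/copy/attach cycle might look like the sketch below. It assumes SQL Server, a single data and log file, and paths invented for the example; sp_detach_db only succeeds when no other connections to the test database are open.

using System.Data.SqlClient;
using System.IO;

public static class TestDatabaseRestorer
{
    private const string MasterConnectionString =
        "Data Source=localhost;Initial Catalog=master;Integrated Security=SSPI";

    public static void RestoreCleanFiles()
    {
        using (SqlConnection connection = new SqlConnection(MasterConnectionString))
        {
            connection.Open();

            Execute(connection, "EXEC sp_detach_db 'TestDb'");

            // Overwrite the live files with the pristine copies taken earlier.
            File.Copy(@"C:\TestData\Clean\TestDb.mdf", @"C:\Data\TestDb.mdf", true);
            File.Copy(@"C:\TestData\Clean\TestDb_log.ldf", @"C:\Data\TestDb_log.ldf", true);

            Execute(connection,
                @"EXEC sp_attach_db 'TestDb', 'C:\Data\TestDb.mdf', 'C:\Data\TestDb_log.ldf'");
        }
    }

    private static void Execute(SqlConnection connection, string sql)
    {
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            command.ExecuteNonQuery();
        }
    }
}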

Yet another variation supported by some of the O/R Mapping tools is to build the database schema from mapping information. Then you can use your Domain Model in test setup to populate data as needed for each test suite or possibly for each test fixture.
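
With NHibernate, for instance, the mapping metadata can rebuild the schema for you. A minimal sketch, assuming the mappings are embedded in a domain assembly whose name is invented for the example:

using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

public static class TestSchemaBuilder
{
    public static Configuration BuildSchema()
    {
        // Read the NHibernate configuration and the mapping files embedded in the domain assembly.
        Configuration configuration = new Configuration();
        configuration.Configure();
        configuration.AddAssembly("MyCompany.Domain");   // assumed assembly name

        // Drop and re-create the tables described by the mappings.
        new SchemaExport(configuration).Create(false, true);

        return configuration;
    }
}

The domain model, or a plain session, can then insert whatever rows each test suite or fixture needs.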

Maintain the State of the Database During the Run

You are probably thinking "But that is a data management exercise!", and you are correct. There is another technique that is even simpler to execute. Essentially, what you do is run each test in a transaction and then roll back the transaction at the end of the test. This could be a little challenging because transactions are not often exposed on public interfaces, but again, this all depends on your architecture. For MS .NET environments, Roy Osherove came up with a very simple solution that draws on ADO.NET's support for COM+ transaction enlistment. This tool lets you put a [Rollback] attribute on specific test methods so that each runs in its own COM+ transaction. When the test method finishes, the transaction is rolled back for you automatically, with no code on your part and independent of the teardown functionality of the class.

What's really great about this technique is that your tests can be blissfully unaware of what's happening underneath them. This functionality has been packaged up in a project called XtUnit [XtUnit] and is an extension to the NUnit testing framework. This is by far the simplest approach to keeping your database in pristine shape. It does come with a price, though. Transactions are by their nature logged, which increases the execution time. COM+ transactions use the Distributed Transaction Coordinator, and distributed transactions are slower than local transactions. The combination can be brutal to the execution speed of your test suite.
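
If you prefer not to take on the XtUnit dependency, .NET 2.0's System.Transactions can give a similar effect: open an ambient transaction in setup and dispose of it, without completing it, in teardown, so that any database work that enlisted in it is rolled back. A minimal sketch follows, with the same caveat that the transaction may escalate to the Distributed Transaction Coordinator.

using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class UserUpdateTests
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        // Connections opened by the code under test enlist in this ambient transaction.
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollbackTransaction()
    {
        // Disposing without calling Complete() rolls everything back.
        _scope.Dispose();
    }

    [Test]
    public void UpdatingLastLoginTimeDoesNotLeak()
    {
        // ... exercise the code under test; all of its database work is discarded afterward ...
    }
}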

Naturally, if you are testing transaction semantics themselves, you may have some additional problems with this technique. So depending on the specifics of your project, this can be a really great solution or a solution in need of a replacement as the number of tests grows. Fortunately, you would not necessarily have to rewrite much code should test speed become a problem because the solution is transparent to the test. You will either be able to live with it, or you will have to adopt other techniques as your project grows.

Reset the Data Used by a Test Before the Test

This approach relieves you and your system of having to reset the entire database before the test call. Instead, you issue a set of commands to the database before or after the test, again typically in the setup and/or teardown methods. The good news is that you have much less to do for a single class of tests. The previous techniques can still be used, but now you have the additional option of issuing simple insert, update, or delete statements as well. This is less painful, even though these statements are logged, because far less data is involved. Again, reducing the logging effort with database setup options is still a good idea if your system supports it.
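
In code, that setup often amounts to nothing more than a few statements run before each test; the table and key values below are assumptions made for the example.

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class FailedLoginTests
{
    private const string ConnectionString =
        "Data Source=localhost;Initial Catalog=TestDb;Integrated Security=SSPI";

    [SetUp]
    public void ResetJustTheRowsThisTestUses()
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            using (SqlCommand command = new SqlCommand(
                @"DELETE FROM dbo.Users WHERE UserName = 'update-test-user';
                  INSERT INTO dbo.Users (UserName, FailedLoginAttempts, LastUpdateDate)
                  VALUES ('update-test-user', 0, GETDATE());", connection))
            {
                // Only the rows this test depends on are touched.
                command.ExecuteNonQuery();
            }
        }
    }
}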

There is a downside, though. As soon as you make the leap to assuming you know exactly what data will be affected by your test, you have tied yourself to understanding not only what the code under test does with data, but also what the additional code it calls out to does. That code may change without your noticing, tests may break without actually indicating broken code, and bugs may get filed that are not bugs at all but cost maintenance work just the same.

It is possible to use this technique successfully, but after many years of test writing, I have found this approach to be the most likely to break without good reason. You can minimize this, though. If you mock out the classes your code under test calls out to, that code won't affect the database. This is in spirit much closer to what is meant by a unit test. Mocking is not useful if you are automating system tests where the interactions between real classes are exactly what you want to test. At any rate, doing this requires an architecture decision to allow for easy substitution of external classes by the testing framework. If you are going to do that, you have naturally found yourself working toward the final option I am covering here.

Don't Forget Your Evolving Schema!

You will no doubt find yourself making changes to the database over time. While in the initial stages you may want to just modify the database directly and adjust your code as needed, after you have a released system, it's time to think about how these changes should become part of your process. Working with a test system actually makes this easier.

I prefer the approach of creating and running alter scripts against your database. You could run these scripts immediately after your database reset, but because all that database state is contained in your source controlled development environment, it probably makes sense to develop the scripts and use them to modify your test environment. When your tests pass, you check it all in and then have the scripts automatically set up to run against your QA environment as needed.

It's even better if this happens on your build server because the running of the alter scripts is then tested often before it's applied to your live systems. Of course, that assumes you regularly reset your QA environment to match your live environment. I'm sure there are many variations of this process, but the most important thing is to plan ahead to ensure a reliable release of both code and database changes.

Separate the Testing of the Unit from the Testing of the Call to the Database

Few things in the Agile community's mailing lists and forums generate more discussion than the testing of code that interacts with databases. Among those who are practicing (and still defining) the techniques of TDD, it is a strongly held value that all code can be tested independently as units as it is written. In the spirit of TDD, it wouldn't make sense to write a test for a class and start out your implementation with a direct call to a database. Instead, the call is delegated out to a different object that abstracts the database, which is replaced with a stub or a mock as you write the test. Then you write the code that actually implements the database abstraction and test that it calls the underlying data access objects correctly. Finally, you would write a few tests that verify that the infrastructure connecting real database calls to your code works correctly. As the old saying goes, most problems in coding can be solved with another layer of indirection. As always, the devil's in the details.

First, let's clarify a few definitions. A stub is a replacement piece of code that does just enough to not cause problems for the caller and no more. A stub for a database call, such as a call to a JDBC Statement, would be accomplished by having a bare-bones class that implements the Statement's interface and simply returns a ResultSet when called, possibly ignoring the parameters passed and certainly not executing a database call.

A mock Statement would do all that and also allow the setting of expectations. I'll say more on that in a moment. Like a stub, the test would use the mock command instead of the real command, but when the test was complete, it would "ask" the mock command to "verify" that it was called correctly. The expectations for a mock Statement would be values for any parameters the call needed, the number of times that executeQuery was called, the correct SQL to be passed to the statement, and so on. In other words, you tell the mock what to expect. Then, after the test is complete, you ask it to verify that it received what it expected.
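
To make the distinction concrete, here is a small hand-rolled sketch in C#. The IUserFinder interface is invented for the example; the stub merely keeps the caller happy, while the mock also records what happened so the test can verify it afterward.

using System;
using System.Collections.Generic;

public class User { }   // stands in for the domain's User class

// A narrow, hypothetical abstraction over the database call, used only for illustration.
public interface IUserFinder
{
    User FindByName(string userName);
}

// Stub: does just enough to keep the caller happy and no more.
public class StubUserFinder : IUserFinder
{
    private readonly User _userToReturn;

    public StubUserFinder(User userToReturn)
    {
        _userToReturn = userToReturn;
    }

    public User FindByName(string userName)
    {
        return _userToReturn;   // parameters ignored, no database touched
    }
}

// Mock: also records what happened so the test can verify it afterward.
public class MockUserFinder : IUserFinder
{
    private readonly List<string> _requestedNames = new List<string>();
    public User UserToReturn;

    public User FindByName(string userName)
    {
        _requestedNames.Add(userName);
        return UserToReturn;
    }

    public void VerifyCalledOnceWith(string expectedName)
    {
        if (_requestedNames.Count != 1 || _requestedNames[0] != expectedName)
            throw new ApplicationException("IUserFinder was not called as expected.");
    }
}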

When you think about it, I think you have to agree that a unit test for a User class should not have to concern itself with what the database does. So long as the class interacts with the database abstraction correctly, we can assume the database will do its part correctly, or at least that tests of your data access code will verify that fact for you. You just have to verify that the code correctly passes all the fields to the database access code, and you have done all you need to do. If only it could always be that simple!

To take this to its logical conclusion, you might end up writing mock implementations of many of the classes in your code. You will have to populate the members and properties of those mocks with at least enough data to satisfy the requirements of your tests. That's a lot of code, and an argument could be made that just having to support that much code will make your code harder to maintain. Consider this alternative. There are some newer frameworks, available in a variety of languages, that allow you to create mock objects from the definitions of real classes or interfaces. These dynamically created mocks allow you to set expectations, set expected return values, and do verification without writing much code to mock the class itself. Known collectively as dynamic mocks, these frameworks let you simply pass in the class or interface you want to mock and get back an object that implements it.
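
As an illustration, here is what such a test might look like with one of these frameworks. The syntax is Moq's, chosen only as one example (NMock, Rhino Mocks, and others offer equivalent facilities), and the IUserRepository and UserService types are invented for the sketch.

using Moq;
using NUnit.Framework;

public class User
{
    public int FailedLoginAttempts;
}

public interface IUserRepository
{
    User GetById(int id);
    void Save(User user);
}

public class UserService
{
    private readonly IUserRepository _repository;

    public UserService(IUserRepository repository)
    {
        _repository = repository;
    }

    public void ResetFailedLoginAttempts(int userId)
    {
        User user = _repository.GetById(userId);
        user.FailedLoginAttempts = 0;
        _repository.Save(user);
    }
}

[TestFixture]
public class UserServiceTests
{
    [Test]
    public void ResettingFailedLoginsSavesTheUser()
    {
        User user = new User();

        // No hand-written mock class: the framework generates the implementation.
        Mock<IUserRepository> repository = new Mock<IUserRepository>();
        repository.Setup(r => r.GetById(42)).Returns(user);

        UserService service = new UserService(repository.Object);
        service.ResetFailedLoginAttempts(42);

        Assert.AreEqual(0, user.FailedLoginAttempts);
        // Verification replaces the hand-coded bookkeeping of a manual mock.
        repository.Verify(r => r.Save(user), Times.Once());
    }
}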

There are many other sources of information on how to mock code, but much less on how to effectively mock database access code. Data access code tends to have all these moving parts to deal with: connections, command objects, transactions, results, and parameters. The data access libraries themselves have driven the creation of a wide variety of data access helpers whose aim is to simplify this code. It seems that none of these tools, the helpers, or the underlying data access libraries like JDBC or ADO.NET were written with testing in mind. While many of these tools offer abstractions on the data access, it turns out to be fairly tricky work to mock all those moving parts. There is also the issue of having to test the mapping of all those fields between the data access code and the rest of your classes. So here are some pieces of advice to help you through it.

Test everything you can without actually hitting your database access code. The data access code should do as little as you can get away with. In most cases, this should be CRUD. If you are able to test all the functionality of your classes without hitting the database, you can write database tests that only exercise these simpler CRUD calls. With the aid of helper methods, you may be able to verify that the before and after sets of fields match your expectations by using reflection to compare the objects rather than hand-coding all the property tests. This can be especially helpful if you use an O/R Mapper, such as Hibernate, where the data access is very nicely hidden but the mapping file itself needs to be tested. If all the other functionality of the class is verified without the database hit, you often only need to verify that the class's CRUD methods and the mapping are working correctly.
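
A reflection-based comparison helper along these lines might look like the following minimal sketch, not tied to any particular O/R Mapper; the class and method names are invented for the example.

using System.Reflection;
using NUnit.Framework;

public static class PropertyAssert
{
    // Compares every public readable property of two objects of the same type.
    public static void AllPropertiesAreEqual(object expected, object actual)
    {
        Assert.AreEqual(expected.GetType(), actual.GetType());

        foreach (PropertyInfo property in expected.GetType().GetProperties())
        {
            if (!property.CanRead)
                continue;

            object expectedValue = property.GetValue(expected, null);
            object actualValue = property.GetValue(actual, null);

            Assert.AreEqual(expectedValue, actualValue,
                "Property '" + property.Name + "' differs after the round trip.");
        }
    }
}

A persistence test can then save a fully populated User, reload it, and make a single call to PropertyAssert.AllPropertiesAreEqual(original, reloaded) instead of writing one assertion per field.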

Test that you have called the data access code correctly separately from testing the database code itself. For example, if you have a User class that saves via a UserRepository, or perhaps a Data Access Layer in nTier terminology, all you need to test is that the UserRepository is called correctly by your upper-level classes. Tests for the UserRepository would test the CRUD functionality with the database.

To test certain types of data access, these simple CRUD tests may not be adequate. For example, calls to stored procedures that aren't directly related to class persistence fall in this category. At some point, you may need to test that these procedures are called correctly, or in your test you may need data back from one of these calls for the rest of your test. Here are some general techniques to consider in those cases where you really are going to mock JDBC, ADO.NET, or some other data access library directly.

You must use factories to create the data access objects and program against interfaces rather than concrete types. Frameworks may provide factories for you, as the Microsoft Data Access Application Block does in its more recent versions. However, you also need to be able to use these factories to create mock implementations of the data access classes, something not supported out of the box by many factory frameworks. If you have a factory that can be configured by your test code to provide mocks, you can substitute a mock version of these data access classes. Then you can verify virtually any type of database call. You still may need mock implementations of these classes, though. For ADO.NET, these can be obtained from a project called .NET Mock Objects [MockObjects]. Versions for other environments may also be available. The only factory-based framework I am aware of that works with mocks directly is SnapDAL [SnapDAL], which builds on .NET Mock Objects and was built to fully support mock objects for the ADO.NET generic interfaces.
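
A minimal, hand-rolled version of such a configurable factory might look like the sketch below; the class name and connection string are assumptions for the example. The test fixture swaps in a creation delegate that returns mocks, while production code never notices.

using System.Data;
using System.Data.SqlClient;

// Production code asks this factory for connections instead of newing them up directly.
public static class DbFactory
{
    public delegate IDbConnection ConnectionCreator();

    private static ConnectionCreator _creator = CreateRealConnection;

    public static IDbConnection CreateConnection()
    {
        return _creator();
    }

    // Test code installs its own creator that hands back mock connections.
    public static void SetCreator(ConnectionCreator creator)
    {
        _creator = creator;
    }

    public static void Reset()
    {
        _creator = CreateRealConnection;
    }

    private static IDbConnection CreateRealConnection()
    {
        return new SqlConnection(
            "Data Source=localhost;Initial Catalog=AppDb;Integrated Security=SSPI");
    }
}

A test fixture would call DbFactory.SetCreator in its setup with a method that returns a mock IDbConnection (from .NET Mock Objects, for example) and call Reset in its teardown.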

Whether you can use these frameworks or not depends on many factors, but one way or another, you will need your application to support a factory that can return a real or a mock of your data access classes. When you get to a position where you can create mock instances of your data access classes, you can now use some or all of the following techniques to test:

  • The mock returns result sets, acting as a stub for your code. The result set gets its data from the test code in standard mock object style.

  • The mock can return test data from files, such as XML files or spreadsheets of acceptance test data provided by your business units.

  • The mock can return test data from an alternate database or a more refined query than the real query under test.

  • If you can substitute a data access command, you can "record" the data access and later "play it back" from a mock object in your tests because you would have access to all the parameters and the returned results.

  • Mocks can verify expectations about important things such as the connection being opened and closed, transactions being committed or rolled back, exceptions being generated and caught, data readers being read, and so on.

I hope these ideas can get you started with an appreciation for the many possibilities there are for testing database-connected code. The techniques were presented roughly in order of complexity, with database resetting or the rollback techniques being the easiest to implement. Fully separating the data access code from the rest of your code will always be a good move and will improve your testing success and test speed. At the most fine-grained level, allowing for mock versions of your actual database calls offers great flexibility in how you gather, provide, and organize test data. If you are working with legacy code that uses the lower-level data access libraries of your language, you would probably move toward this sort of approach. If you are starting a new application, you can probably start with the idea of simply resetting your database between tests and enhance your test structure as test speed begins to affect your productivity.

Thanks, Phil! Now we are armed to deal with the problem of database testing no matter what approach we choose.

Note

Neeraj Gupta commented on this by saying, "You can use the Flashback feature of Oracle database to bring the database back to the previous state for the testing."


We have come quite a long way, but there is one very big piece of the preparing-for-infrastructure puzzle that we haven't dealt with at all. How did I solve _GetNumberOfStoredCustomers?

What I'm missing is obviously querying!



