In this chapter, we develop a set of customer tests for retrieving recording information via the Web service we wrote in the previous chapter. The intent of the customer tests is different from that of the programmer tests. Customer tests confirm how the feature is supposed to work as experienced by the end user. Because most customers might not feel comfortable coding their tests in NUnit, we need to provide a tool that makes writing tests as easy as editing a document. The goal is to enable the customer to write a specification as a series of automated tests. The tool that we will use to facilitate the automation of customer tests is an open-source tool called FIT. You can download it from http://fit.c2.com.
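To give a sense of what such a test looks like, a FIT test is simply a table embedded in an ordinary document (typically HTML). The first row names the fixture class that connects the table to the code under test; the fixture and column names below are hypothetical, not taken from this chapter. By FIT convention, column headings ending in parentheses are values the fixture computes, and the others are inputs supplied by the customer:

```html
<!-- A FIT test is a table in an ordinary HTML document.
     The first row names the fixture class FIT will load;
     the fixture name and columns here are hypothetical. -->
<table>
  <tr><td colspan="3">CustomerTests.FindRecordingFixture</td></tr>
  <tr><td>id</td><td>title()</td><td>artist()</td></tr>
  <tr><td>1</td><td>Some Title</td><td>Some Artist</td></tr>
</table>
```

When FIT runs the document, it fills in the computed columns and colors each cell to show whether the actual value matched what the customer wrote.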
This is an age-old problem in software development: many times, we have been on projects in which we thought we were 80 or 90 percent complete, only to have the last 10 or 20 percent take far more time than we expected. One of the main reasons for this problem is ambiguous or contradictory requirements. We don't mean to imply that requirements in general are poorly written, only that when things are written down by one group of people and read by another, there will be different interpretations of what is written. These different interpretations can lead you to believe that you are finished when you are not.
So, how do you drive ambiguity out of this process? Clearly, during the writing of the software, we used programmer tests to ensure that various programming-related assumptions were correct. However, the scope of these tests is such that they could be written perfectly, with 100-percent coverage, and still miss a critical function. So programmer tests are a necessary step, but they are not sufficient to indicate completion. Although we also worked very hard to eliminate code duplication in the implementation, duplication is not something that the customer is primarily concerned with, at least at the start.
Programmer tests and good software structure are useful metrics from the programming perspective because they give the programmers an indication of how well the software implements its intended functionality. It is really up to the customer to indicate whether the software serves its intended purpose, because the customer is primarily concerned with what the software does and how that functionality relates to a business value. The determination of completion is tied to these values. What is lacking in many cases is a means for the customer to indicate with precision whether or not the software works as expected.
What is needed, then, is a mechanism for the customer to write tests that are used to determine completion. These tests are a direct interpretation of the requirements, but they are written by the customer to ensure that the customer's perspective is verified. Therefore, these tests should be in a form that is familiar to the customer, which rules out tests implemented with NUnit because most customers won't want to (or can't) write tests in a programming language.
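To make the table-driven idea concrete, here is a minimal sketch in Python of what a tool like FIT automates. This is not FIT's actual API, and `find_recording` is a hypothetical stand-in for the Web service from the previous chapter; the point is only that the customer supplies rows of inputs and expected outputs, and a small harness checks each row against the real code:

```python
# Sketch of the idea behind customer tests (not FIT itself):
# the customer writes rows of inputs and expected outputs,
# and a harness checks each row against the system under test.

def find_recording(recording_id):
    # Hypothetical stand-in; a real fixture would call the Web service.
    catalog = {1: ("Some Title", "Some Artist")}
    return catalog[recording_id]

# Each row: (input id, expected title, expected artist) -- the kind
# of table a customer could maintain in an ordinary document.
table = [
    (1, "Some Title", "Some Artist"),
]

def run_table(rows):
    """Return (right, wrong) counts, summarizing the table's results."""
    right = wrong = 0
    for recording_id, title, artist in rows:
        if find_recording(recording_id) == (title, artist):
            right += 1
        else:
            wrong += 1
    return right, wrong

print(run_table(table))
```

FIT reports its results by coloring the cells of the customer's table; the right/wrong counts here play the same summarizing role.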
Also, to provide the type of feedback that we get from programmer tests, these customer tests need to be automated, which enhances reliability and improves response time. Rapid response time allows the development team to associate a failure with the code change that caused it. For example, let's say it takes two weeks to run the acceptance tests and get the results. Because it takes two weeks, we will not run them very often; in fact, we will build up a list of changes to make the best use of the testing resources. In addition, development keeps moving while the tests are being run, so the software could be very different by the time the results are returned. If there were problems, the development team must now attempt to fix them: some of the problems may have already been fixed, some may no longer be reproducible, and so on. For the sake of argument, let's say instead that it takes five minutes to run the customer tests and get the results. This time is so short that you would want to run these tests, in addition to the programmer tests, for each build of the software. Given this immediate feedback when a test fails, it is much more likely that the person who made the change can go back and fix it without spending a great deal of time wondering what caused the problem. This rapid turnaround yields very large productivity gains for the team.