1.2 XP Testing

Although their importance is stressed in the testing literature, unit tests have played a subordinate role for most developers. This situation changed (at least slightly) when Extreme Programming (XP) promoted the execution of component tests to a central activity within the XP development cycle. XP [Beck00a, Jeffries00] is a lightweight development process that returns full control over a project's direction, and its changes of direction, to the customer. The actual writing of code moves to the center of the development activity. With this provocative shift of focus, XP scares off the advocates of detailed and sophisticated analysis and design methodologies. On the other hand, it wins over the many software developers who have felt bossed around by inappropriate and bureaucratic development processes.

Those interested in Extreme Programming as an overall complex and its relationship to heavyweight development processes will have to study the relevant literature and Web sites (see Bibliography and List of References, Further Reading). At this point, we will limit our discussion to a few central issues important for testing as described in this book.

Communication, Simplicity, Feedback, and Courage

Communication, simplicity, feedback, and courage are central values of XP and are therefore reflected in every piece of program code: in XP, code should be written so that it communicates all the ideas it contains. This requires particular care when naming classes and methods. Also, short methods with meaningful, expressive names can often replace long program comments and are less prone to becoming inconsistent when the program changes later.
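As a small, invented illustration of this idea (the Invoice and LineItem classes and the discount rule are made up for this sketch, written in modern Java for brevity), a pricing rule that would otherwise hide behind a comment such as "subtract 3 percent for orders with more than 10 items" is expressed through short, named methods instead:

    import java.util.List;

    public class Invoice {

        private final List<LineItem> items;

        public Invoice(List<LineItem> items) {
            this.items = items;
        }

        public double total() {
            // reads like the comment it replaces: sum the items, then discount bulk orders
            return applyBulkDiscount(sumOfLineItems());
        }

        private double sumOfLineItems() {
            double sum = 0.0;
            for (LineItem item : items) {
                sum += item.price() * item.quantity();
            }
            return sum;
        }

        private double applyBulkDiscount(double sum) {
            // the "3 percent off for more than 10 items" rule is now named, not commented
            return items.size() > 10 ? sum * 0.97 : sum;
        }

        // minimal supporting type for this sketch
        public record LineItem(double price, int quantity) {}
    }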

In addition, the program should be only as complex as the current functionality requires. In particular, this means forgoing provisions for any presumed future functionality. XP assumes that, as long as all of its central practices are observed, later changes will cost less than weaving presumed requirements into the design in advance, especially since in every project the majority of such presumptions turn out to be wrong.

Automated tests on several levels (discussed shortly) provide quick feedback on whether or not our code does what it should do. Courage is needed by the team whenever changes have to be made to the system. Extensive tests ensure that the task calls for courage rather than recklessness.

Pair Programming

XP requires that each piece of code destined for production be created jointly by two developers at a single computer. Ideally, one programmer focuses closely on the lines he or she is currently typing, while the other keeps the larger picture in view, and the two swap roles constantly. Pair programming is a kind of continuous review, ensuring fewer errors, more consistency with the coding guidelines, and knowledge sharing across the entire team. The tests ensure that the pair does not lose focus.

While many managers have the feeling that this approach wastes resources, this fear is disproved by studies on the productivity of pair programming [Cockburn00a]. These studies have shown that a slightly smaller output of code is more than compensated for by better design and a clearly lower error rate.

Incremental and Iterative Development

Software development in XP does not occur en bloc, but in small steps. The entire system is created in iterations taking from one to three weeks. Each iteration is aimed at implementing one set of small user stories selected by the customer. The development team decomposes these user stories into tasks, small parts that can be completed by a developer pair within a few days. Again, these tasks are not implemented en bloc, but in small steps. Such a micro step includes both the implementation code and the test proving that the implementation does indeed do what it should do. Note that the implementation is not deemed to exist without this test.
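A micro step of this kind might look like the following sketch (JUnit 4 style; the Account class and its behavior are invented for illustration). The test defines what "done" means for the task, and the implementation is just enough code to make it pass:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // The test half of the micro step: it pins down what the implementation must do.
    public class AccountTest {

        @Test
        public void depositIncreasesBalance() {
            Account account = new Account();
            account.deposit(100);
            assertEquals(100, account.balance());
        }
    }

    // The implementation half: no more code than the test demands.
    class Account {

        private int balance = 0;

        public void deposit(int amount) {
            balance += amount;
        }

        public int balance() {
            return balance;
        }
    }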

Refactoring

Refactoring describes the constant restructuring of code all the way to the simplest design. The word simplest is defined by the following criteria, and their order is important:

  1. All unit tests pass.

  2. The code communicates all of its design concepts.

  3. The code does not contain redundancy (duplicated code or logic).

  4. Under the above rules, the code contains the smallest possible number of classes and methods.

XP demands constant refactoring, in particular after the successful completion of a task. Frequent refactoring is hardly possible without automated unit tests; otherwise, the risk of breaking functioning components at one end while refactoring something at the other would be too great. This fear of unwanted side effects is a major reason why many developers shrink from "cleaning up" apparently functioning components. In the long run, this leads to the unmanageable systems that all of us dread and have come across. It is not surprising, then, that the need for disciplined refactoring to control software evolution has long been recognized [Lehman85].
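To make this concrete, here is a minimal, invented sketch of such a refactoring step (JUnit 4 style; the class, the tax rate, and the fee are made up): a pricing rule that appeared twice is pulled into a single private method, and the existing unit test, untouched by the restructuring, immediately signals whether the behavior has been preserved.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Before the refactoring, the expression price * quantity * (1 + TAX_RATE)
    // appeared verbatim in both public methods; after extracting grossPrice(),
    // the rule lives in exactly one place.
    class PriceCalculator {

        private static final double TAX_RATE = 0.19;      // invented rate
        private static final double RESTOCKING_FEE = 5.0; // invented fee

        public double orderTotal(double price, int quantity) {
            return grossPrice(price, quantity);
        }

        public double refundAmount(double price, int quantity) {
            return grossPrice(price, quantity) - RESTOCKING_FEE;
        }

        private double grossPrice(double price, int quantity) {
            return price * quantity * (1 + TAX_RATE);
        }
    }

    // This test passes before and after the restructuring; a failing test would
    // signal that the refactoring changed the observable behavior.
    public class PriceCalculatorTest {

        @Test
        public void refundIsGrossPriceMinusRestockingFee() {
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(calculator.orderTotal(10.0, 2) - 5.0,
                         calculator.refundAmount(10.0, 2),
                         0.001);
        }
    }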

Martin Fowler [99] describes the most common refactorings, how to recognize when they are needed,[3] how to carry them out step by step, and how unit tests facilitate refactoring. Nowadays, many Java development environments (e.g., Eclipse and IntelliJ) provide built-in support for some basic automated refactorings. However, the correctness of a refactoring, meaning that the program's functional behavior is unchanged, generally cannot be proven for more complex restructurings.

Test Types in XP

Extreme Programming distinguishes two types of software tests: acceptance tests and the previously introduced unit tests. While basically the same techniques and tools can be used for both, they differ in purpose and responsibility.

  • Unit tests secure developers' confidence in their own software and in that of their colleagues. They are created together with the development code and then modified and extended as needed. Unit tests always have to be 100% successful. The word always means that, when integrating new code into the system, all tests created up to that point are executed. If even a single test fails, the error has to be fixed before the integration can continue. This is very important in XP, because continuous integration means that all edited code is incorporated into the overall system several times a day (see also Chapter 14, Section 14.2, Process Types and Testing Strategies).

  • Acceptance tests serve both the customer and the management team as a measure of the progress of the entire project. Acceptance tests are specified by the customer; after all, it is the customer who has to believe in the test results. Acceptance tests normally specify the functionality of the overall system from the user's perspective. It is important to specify the majority of an iteration's test cases before that iteration begins. The percentage of successful tests is determined at least once a day and is available to all interested parties.

    The job of turning a specification into automated, executable test cases is normally taken on by developers. However, it is also conceivable to assign this job to a dedicated test team that advises the customer during the specification phase and implements the tests [Crispin01]. In some cases, acceptance tests can be automated in much the same way as unit tests; commercial test tools can also be a sensible option. Often, however, it is advisable to develop a small framework that allows the customer's specifications, for example in tabular form, to be used directly as control files for the test runs [URL:WakeAT], as sketched below.
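As a rough sketch of such a table-driven approach (not a particular framework; the shipping domain, the Tariff class, and all figures are invented), a parameterized JUnit test can feed each row of a customer-maintained table through the system and compare the result with the expected value:

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    // Each row corresponds to one line of the customer's specification table
    // (inlined here for brevity; it could just as well be read from a file).
    @RunWith(Parameterized.class)
    public class ShippingCostAcceptanceTest {

        @Parameters(name = "{0} kg to {1} costs {2}")
        public static Collection<Object[]> customerTable() {
            return Arrays.asList(new Object[][] {
                // weight, zone,       expected cost
                {  1.0,    "domestic",  4.90 },
                {  5.0,    "domestic",  6.90 },
                {  1.0,    "europe",    9.90 },
            });
        }

        private final double weight;
        private final String zone;
        private final double expectedCost;

        public ShippingCostAcceptanceTest(double weight, String zone, double expectedCost) {
            this.weight = weight;
            this.zone = zone;
            this.expectedCost = expectedCost;
        }

        @Test
        public void costMatchesCustomerTable() {
            assertEquals(expectedCost, Tariff.shippingCost(weight, zone), 0.001);
        }
    }

    // Minimal stand-in for the system under test, invented for this sketch.
    class Tariff {
        static double shippingCost(double weight, String zone) {
            double base = "europe".equals(zone) ? 9.90 : 4.90;
            return weight > 2.0 ? base + 2.00 : base;
        }
    }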

Both unit tests and acceptance tests are required to be fully automated. The higher initial investment, compared to manual tests, pays off after only a few runs. Unit tests are started innumerable times per day, so not automating them would be unthinkable in practice. When automating certain kinds of acceptance tests, on the other hand, such as tests of the user interface, we often run into all sorts of trouble. Before giving up and falling back on manual testing, however, consider that this would not only be more expensive in the long run but also more error-prone, because errors could sneak in during execution and verification. Literature on test automation [Dustin99] may help in difficult cases.
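One simple way to make "run every test" a single, repeatable step is a test suite that bundles all test classes, which a build script or an IDE can then execute before each integration; the following sketch simply lists the invented test classes from the examples above:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Bundles the test classes from the sketches in this section so that the
    // complete set can be executed with one command before each integration.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        AccountTest.class,
        PriceCalculatorTest.class,
        ShippingCostAcceptanceTest.class
    })
    public class AllTests {
    }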

XP or Not XP?

XP belongs to the group of agile software processes.[4] Among other things, this means that as few process steps as possible are prescribed, but as many as necessary.

One thing is most important for the type of unit testing introduced in this book: there is no big, detailed design phase executed up front, no Big Design Up Front (BDUF). The detailed software design, particularly the definition of the interfaces of single classes and their relationships to other classes, is part of coding and test creation. This evolutionary design approach [Fowler00] largely contradicts the design and test methods described in the literature, where models and specifications of all components are prepared "up front" and used to derive test cases. XP's intentional renunciation of a dedicated design phase before implementation is probably the biggest target for its critics. Meanwhile, there is a considerable body of anecdotal evidence that evolutionary design can work [Little01] and some well-founded arguments that a constant focus on design improvement is more important than a thorough initial design [Fowler00]. Robert Martin [02a] even argues that continuous care will correct a poor initial design, whereas a lack of continuous care allows a good design to degrade over time.

In addition, XP includes standard practices that facilitate unit testing (pair programming, incremental development), validate it (acceptance tests), and build on it (refactoring). XP is thus not a prerequisite for unit testing, but it is worthwhile for every developer and project manager to consider whether some aspect of XP could improve their testing efforts and thus the quality of their software. In particular, the combination of unit tests, pair programming, and refactoring recommends itself to that end and integrates well into almost every development process.

[3] Or better, how to smell it. By the way, a sign of code in need of improvement is called a "code smell."

[4] Agile replaced the word lightweight with regard to software processes some time ago (see also [URL:AgileAlliance] and Chapter 14, Section 14.2).



