Four Types of Testing


There are four basic types of tests on an XP project: acceptance tests, unit tests, performance or load tests, and hand testing or visual inspection. The last two types are not usually mentioned, but they are required on most projects. One distinguishing feature of XP projects is well-defined ownership of the different test types. Acceptance tests and visual inspection tests belong to the customer, while unit tests and load tests belong to the developer. Ron Jeffries likes to highlight this ownership by renaming acceptance tests and unit tests as customer tests and programmer tests, respectively (see Chapter 7). We should note, however, that developers often support the customer's acceptance tests. Each type of test has a specific goal, but the ultimate goal of testing is confidence that things are going well and courage to make changes when necessary.

Acceptance Tests

The goal of acceptance tests is to check the functionality of the entire system as specified by the project's customer. At the system level, acceptance tests should not include specific knowledge about the internals of the system and are said to be "black box" tests. These tests touch the system only at predefined APIs or GUIs. To be effective, system-level tests often require large amounts of domain-specific data. The customer's involvement in creating this data and specifying system behavior is critical.

On an XP project, the customer is said to "own" the acceptance tests. This serves to highlight the idea that the customers must take time to ensure the acceptance tests are correct. Customers must specify valid and complete tests for the system that they want. Ron Jeffries has proposed the name "customer tests" as being more descriptive of the way acceptance tests need to be created and maintained. Of course, it is the entire team that is responsible for these tests. To create the acceptance tests, the customers require technical support from the quality assurance (QA) people, who may in turn require support from the developers.

Peculiar to an XP project is the idea that acceptance tests are created before the end of each iteration to test functionality that is created during that iteration. Acceptance testing of the software is never put off till just before production launch; it is an ongoing process. Each user story that is implemented will have one or more acceptance tests associated with it. These tests are used to determine when a user story has been successfully implemented at each step of the project. If the acceptance tests do not pass, the user story is considered unimplemented.
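
To make this concrete, here is a sketch of what one such test might look like for a hypothetical story, "A visitor can register and then log in." The RegistrationService class and its methods are illustrative stand-ins, not taken from any particular system; a real acceptance test would drive the deployed system through its public API.

import junit.framework.TestCase;

public class RegistrationStoryTest extends TestCase {

    // Illustrative stand-in for the system under test; a real
    // acceptance test would exercise the system's public API instead.
    static class RegistrationService {
        private final java.util.Map accounts = new java.util.HashMap();
        void register(String user, String password) {
            accounts.put(user, password);
        }
        boolean login(String user, String password) {
            return password != null && password.equals(accounts.get(user));
        }
    }

    public void testRegisteredUserCanLogIn() {
        RegistrationService service = new RegistrationService();
        service.register("alice", "secret");
        assertTrue(service.login("alice", "secret"));
    }

    public void testUnknownUserCannotLogIn() {
        RegistrationService service = new RegistrationService();
        assertFalse(service.login("bob", "secret"));
    }
}

If either test fails, the story is unimplemented, no matter how much code has been written for it.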

The customer is the final arbiter when the acceptance tests run at less than 100%. This situation can arise because acceptance tests are often highly biased toward unusual examples: a suite may contain a single typical case for every ten cases that test things that occur only occasionally. The customer is allowed to review the failed test cases and either declare success and release the product or hold the product back until specific cases are repaired. During planning, the customer also sets the relative priority of fixing acceptance tests versus developing new functionality.

Keyboard-capture-and-replay test utilities are often employed to automatically press buttons on the GUI and compare the results. This is a quick and easy approach, but it is rarely the best one. It is better to implement a strict Model-View-Controller (MVC) architecture that minimizes the amount of button pressing required. GUIs that are kept simple require little testing. Make GUIs trivially simple by pushing functionality into the control layer, where it can be tested directly. Testing the system by interfacing directly with the control layer makes the acceptance tests more robust in the long term.
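
As a sketch of the idea, assume a hypothetical OrderController in the control layer; the GUI button handler would do nothing but delegate to submitOrder(), so the behavior can be verified directly, without a capture-and-replay tool. The class and method names here are illustrative, not from any particular system.

import junit.framework.TestCase;

public class OrderControllerTest extends TestCase {

    // Stand-in for the control layer that a trivially simple GUI delegates to.
    static class OrderController {
        private int submitted;
        void submitOrder(String item, int quantity) {
            if (quantity <= 0) {
                throw new IllegalArgumentException("quantity must be positive");
            }
            submitted += quantity;
        }
        int submittedCount() {
            return submitted;
        }
    }

    // Drives the controller directly; no buttons are pressed.
    public void testSubmitOrderRecordsQuantity() {
        OrderController controller = new OrderController();
        controller.submitOrder("widget", 3);
        assertEquals(3, controller.submittedCount());
    }

    public void testRejectsNonPositiveQuantity() {
        OrderController controller = new OrderController();
        try {
            controller.submitOrder("widget", 0);
            fail("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            // the controller rejected the bad input, as intended
        }
    }
}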

Overall, the acceptance tests inform us when the customer's functionality is working, regression test functionality that has already been finished, and enable the release to production. That is, acceptance tests document and verify system requirements, guard against bugs creeping into the system, and give us confidence to put our system into production on a regular basis. Without acceptance tests, fixing one bug can often cause another bug, and releasing to production is often postponed or attempted only irregularly. On an XP project, the current system is always in a state that is ready to move from development to production.

Unit Tests

The goal of unit testing is to ensure that a particular module does exactly what the programmer intended it to do. Unit tests are "white box" tests. That is, a unit test can be created with knowledge of the unit's implementation. Unit tests are also allowed to call methods not in the public API. One might even go so far as to make a method public just to allow its use in a test. Isolating small units of functionality for validation is what unit testing is all about.

What is a unit, anyway? Most often, a unit is identified as a class or an object, but this tells us nothing when we are not using an object-oriented language, and it is not entirely accurate even when we are. A unit is essentially one idea, one focal point. Most classes exist to express a single basic idea, which is then implemented by supporting methods; that is a unit. This one-idea rule also applies to nonobject languages: a unit is a specific idea and all the code that supports it. The term "function" is intentionally avoided here because it can be misunderstood; several functions might implement a single idea.

On an XP project, the developer creates the unit tests. Ron Jeffries has proposed that these tests be called programmer tests to reflect this philosophy and responsibility. Many organizations provide separate testers or QA experts to do the unit testing in a "throw it over the wall" style. This works, but within the XP methodology, unit testing is an integral part of the development process and so must be the responsibility of the developers.

Unit tests should always run, and they should always pass at 100%. If all the tests always run and pass, then whenever a test fails, it indicates that a problem has just been introduced by the development pair. Unit tests guard functionality in an environment of collective ownership. One of the biggest fears encountered while contemplating collective ownership is that someone will change code and introduce a bug; this doesn't happen if the unit tests have proper coverage. Unit tests also ensure quick and accurate integration. We can assume that if a unit test fails during integration, the cause of that failure is some incompatibility of code, and that fixing the test means the integration is now correct. Fixing a test is everyone's responsibility, and whoever breaks a test fixes it. The implications of this simple rule are enormous.

Unit tests enable refactoring boldly and mercilessly. Without unit tests, refactoring must be carried out in a robotic style, with transformations being applied and, hopefully, no mistakes introduced. In the presence of good unit test coverage, refactoring can be carried out with confidence. Large refactorings can be tried out to see just how extensive they are. That is, you can try changing something and reviewing the unit test failures to decide whether you wish to continue and complete the refactoring. If you see from the tests that too much will need to be changed, you can back out the change before too much effort has been invested. This enables exploring very bold refactorings without risk and waste.

In the context of XP, unit tests and code are created together in a symbiotic relationship not seen elsewhere. In fact, the preferred method is to create one unit test at a time and then create code to make that one test pass. There is a rhythm to this method: test fails, test passes, test fails, test passes … When unit testing is done this way, it becomes natural to create the test first, and it ensures that all code will have tests. One of the biggest impacts, though, is that all code created is readily testable. In many ways, the tests are more valuable than the system code. Take extra care when changing test code in response to a failure, and never change both test code and system code at the same time. Take small steps during development, allow the tests and code to grow slowly together, and never leave any tests broken.
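
Here is a minimal sketch of that rhythm in JUnit, with an illustrative Money class. The test is written first; it fails (in fact, it does not even compile) until the simplest code that satisfies it is added below it.

import junit.framework.TestCase;

public class MoneyTest extends TestCase {

    // Written first: this test drives the existence and behavior of Money.add().
    public void testAddingTwoAmounts() {
        Money five = new Money(5);
        assertEquals(12, five.add(new Money(7)).amount());
    }
}

// The simplest code that makes the one failing test pass.
class Money {
    private final int amount;

    Money(int amount) {
        this.amount = amount;
    }

    Money add(Money other) {
        return new Money(amount + other.amount);
    }

    int amount() {
        return amount;
    }
}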

Unit tests are created to run in a simple framework such as JUnit. One thing to be very suspicious of is unit-testing tools that automatically create the unit tests. Generally, such tools are useless in an environment where the unit tests are created before the code. And more specifically, these tools do not create the harder tests. The 80-20 rule applies here. Automatic creation of unit tests will create the tests that take only 20% of your time. The tests that require 80% of your time will still need to be created with handcrafted code.

If done well, unit tests can keep you out of the debugger, which is a huge time-saver. Unit tests should be formulated to be independent of each other. Each test should set up the state of the system to be exactly what is required for the test to be valid and should not depend on some side effect from a test run before the current test. This is important for debugging because independent tests point to specific areas of failure, while dependent tests can cause a cascade of failures, thus masking the real problem. By creating tests so that it is obvious where the problem lies, it becomes possible to review test failures quickly to gain insight into a problem and fix that problem without the overhead of single-stepping into the code. With good coverage and independent unit tests, it becomes much easier to locate and repair problems.
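
In JUnit, this independence is usually achieved with a setUp() method that rebuilds the fixture before every test, so no test can lean on state left behind by another. The Inventory class in this sketch is an illustrative stand-in.

import junit.framework.TestCase;

public class InventoryTest extends TestCase {

    // Illustrative stand-in for the unit under test.
    static class Inventory {
        private int count;
        void add(int n) { count += n; }
        void remove(int n) { count -= n; }
        int count() { return count; }
    }

    private Inventory inventory;

    // JUnit calls setUp() before each test method, so every test
    // starts from the same known state: an inventory of 10.
    protected void setUp() {
        inventory = new Inventory();
        inventory.add(10);
    }

    public void testRemoveDecreasesCount() {
        inventory.remove(4);
        assertEquals(6, inventory.count());
    }

    public void testAddIncreasesCount() {
        inventory.add(5);
        assertEquals(15, inventory.count()); // 10 from setUp(), not from another test
    }
}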

Although unit tests are very important, they are not more important than acceptance tests. It is easy for developers to get carried away with creating the unit tests and ignore the customer's acceptance tests. Both sets of automated tests are of equal importance, and both sets of tests require their own specialized framework. The symmetry of Ron Jeffries' new names, customer tests and programmer tests, should not be overlooked.

Performance Tests

The goal of performance testing is to help quantify the performance and capabilities of the software system. Performance tests may also be called load tests, depending on context and the measurements being made. Developers or the QA department may create these tests to verify specific speed or load requirements of the system. Performance tests are a direct incarnation of the feedback value: they measure the system directly and give quantitative results about its performance.

Performance tests are required to determine whether something needs to be optimized. Optimizing without feedback is unreliable; it is easy to optimize in a penny-wise, pound-foolish way. People who claim that something must be coded one and only one way for performance's sake are often shown to be wrong when a measurement is made. Having feedback to quantify improvement is critical to quickly zeroing in on what needs to be optimized.
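
A measurement need not be elaborate. This sketch times a suspect operation (buildReport() is purely illustrative) so that the decision to optimize rests on numbers rather than intuition.

public class ReportTimer {

    // Stand-in for whatever operation is suspected of being slow.
    static String buildReport(int rows) {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < rows; i++) {
            sb.append("row ").append(i).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int rows = 100000;
        buildReport(rows); // warm up so class loading does not distort the timing

        long start = System.currentTimeMillis();
        for (int run = 0; run < 10; run++) {
            buildReport(rows);
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("10 runs of " + rows + " rows: " + elapsed + " ms");
    }
}

Running such a harness before and after a change shows whether a supposed optimization actually helped.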

By-Hand Testing

The goal of "by hand" tests is generally to test GUIs. Usually, such tests are expressed as scripts to be executed by a person clicking on the system's GUI. These tests are created and maintained by the QA department or the customer. By-hand tests are, of course, black-box tests. Because these types of tests are time-consuming to run, by-hand testing must be limited to things that are not likely to break and do not need to be tested often. In other words, by-hand tests should be kept to an absolute minimum.

Visual inspection should be used only where appropriate. Your eyes and brain are highly specialized for pattern recognition, and you must consider this specialization when deciding what can be tested by hand and what cannot. Determining whether a GUI is laid out correctly is the domain of the human eye and not the computer. On the other hand, comparing long columns of numbers is the domain of the computer and should not be taken on by hand. Comparing a spreadsheet full of expected results with actual run numbers by eye is not an adequate form of testing. You should automate these types of tests and use by-hand (or more correctly, "by eye") testing only when appropriate.
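
As a sketch of automating such a comparison, the test below checks a column of computed values against expected results. The monthlyPayment() calculation and the inlined numbers are illustrative, standing in for values that would normally come from the customer's spreadsheet.

import junit.framework.TestCase;

public class ExpectedResultsTest extends TestCase {

    // Illustrative stand-in for the system calculation under test:
    // the standard amortized loan payment formula.
    static double monthlyPayment(double principal, double annualRate, int months) {
        double r = annualRate / 12.0;
        return principal * r / (1.0 - Math.pow(1.0 + r, -months));
    }

    public void testPaymentsMatchExpectedResults() {
        double[][] cases = {
            // principal, annual rate, months, expected payment
            { 10000.0, 0.06, 12, 860.66 },
            { 10000.0, 0.06, 24, 443.21 },
            { 20000.0, 0.08, 36, 626.73 },
        };
        for (int i = 0; i < cases.length; i++) {
            double actual = monthlyPayment(cases[i][0], cases[i][1], (int) cases[i][2]);
            assertEquals("case " + i, cases[i][3], actual, 0.01);
        }
    }
}

The computer checks every row, every time the suite runs; the human eye is saved for the layout questions it is actually good at.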


