3.7. Testing

Before we go any further, it's worth relaying a fundamental fact: testing web applications is hard.

There are two main types of application testing. The first is automated testing, an important portion of which is called regression testing. The second is manual testing, whereby a human uses the application and tries to find bugs. Hopefully at some point you'll have millions of people testing for you, but it can also be helpful to have planned testing, often referred to as Quality Assurance (QA), to avoid exposing bugs to your wider audience.

3.7.1. Regression Testing

Regression testing is designed to avoid regressing your application to a previous buggy state. When you have a bug to fix, you create a test case that currently fails, and then fix the bug so that the test case passes. Whenever you work on the same area of code, you can rerun the test after your changes to be sure that you haven't regressed to the original bug.
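As a sketch of this workflow, consider a hypothetical `make_slug` helper that once produced broken slugs for titles with surrounding whitespace (the function, the bug, and the test are all invented for illustration). The test below failed before the fix and passes after it; rerunning it after any later change to the slug code guards against regressing to the original bug:

```python
import unittest

def make_slug(title):
    # Hypothetical helper: turns a post title into a URL slug.
    # Invented bug: an earlier replace(" ", "-") version turned
    # "  My Post  " into "--my-post--"; strip() and split() fix it.
    return "-".join(title.strip().lower().split())

class SlugRegressionTest(unittest.TestCase):
    def test_surrounding_whitespace(self):
        # This case failed before the fix; rerun it after any change
        # to make_slug to be sure the bug hasn't come back.
        self.assertEqual(make_slug("  My First Post  "), "my-first-post")
```

Run with `python -m unittest` as part of your regular build; each new bug in this area contributes another test method to the class.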

Automated regression testing requires a fairly closed system with defined outputs given a set of inputs. In a typical web application, the inputs and outputs of features as a whole are directed at the presentation and page logic layers. Any tests that rely on certain page logic have to be updated whenever the presentation or page logic layers are changed, even when the change in interaction has nothing to do with the bug itself. In a rapid development environment, the presentation and page logic layers can change so fast that keeping a test suite working can be a full-time job; in fact, you can easily spend more time maintaining a test suite than fixing real bugs or developing new features.

In a well-layered web application, automated testing belongs at the business logic layer, but as with the layers above, rapid changes can mean that you spend more time updating your tests than on regular development. Unless you have a very large development team and several people to dedicate to maintaining a full coverage test suite (that is, one that covers every line of code in your application), you're going to have to pick and choose areas to have test coverage.

When identifying areas for automated test coverage, bear in mind that we're looking for closed systems with defined outputs given a set of inputs. These kinds of closed systems should always occur at the business logic level: data storage functions that need to affect a number of different pieces of data, complex data processing functions and filters, or parsing code. Any unit of code that is complex enough for plenty to go wrong should be tested in an easily defined way.

For instance, if your application includes some kind of complex parsing component, then it's probably a good candidate for an automated test suite. As you fix a bug, capture a set of inputs that demonstrates the issue: one that causes an error before the bug is fixed, but not afterward. As you build up a set of these input/output pairs, you build yourself a test suite. When you make any further modifications to the component, you can run through the expected input/output list and check to see if any of the previously passing tests fail.
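A sketch of such an input/output suite, using an invented emphasis-markup renderer in place of a real (and much larger) parsing component; each fixed bug contributes one pair to the list:

```python
import re

def render_emphasis(text):
    # Hypothetical parsing component: converts *word* markup into
    # HTML <em> tags (invented stand-in for a real parser).
    return re.sub(r"\*([^*\n]+)\*", r"<em>\1</em>", text)

# The test suite is just a list of captured input/output pairs.
CASES = [
    ("plain text", "plain text"),
    ("*hi* there", "<em>hi</em> there"),
    ("a * b", "a * b"),  # lone asterisk: pair added after a bug fix
    ("*a* and *b*", "<em>a</em> and <em>b</em>"),
]

def run_suite():
    # Returns a list of (input, expected, actual) for every failure.
    return [(i, o, render_emphasis(i)) for i, o in CASES
            if render_emphasis(i) != o]

print("failures:", run_suite())  # prints: failures: []
```

After any modification to the component, rerunning `run_suite()` tells you immediately whether a previously passing case has broken.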

3.7.2. Manual Testing

Of more relevance in general to web applications is manual testing (testing by a human). Testing performed by a human is already intrinsic to your application development process, whether by design or not: a developer uses a feature as he builds it. For any given feature, it's a good idea to get as many pairs of eyes on it as possible before release. A good rule of thumb is at least two people: the developer responsible for the feature and someone else who's unfamiliar with it.

As well as in-process testing by the development team, a QA stage can be useful prior to a deployment. Formal test plans are the traditional way of performing manual testing. A formal test plan includes background information about the features being tested, a list of tests to be performed, and a set of use cases or test cases. Developing a formal test plan isn't always a good idea when developing web applications in small teams for a couple of reasons.

The first problem with test plans is that they take time, and time is usually the scarcest commodity when developing software applications. Your development schedule has slipped, time has been spent bug-fixing, you barely have enough time to test, and you want to release right away. A badly written test plan is not much better than none at all, and unless time can be spent correctly formulating a plan, it's not worth doing.

The second problem with test plans is that when testers are given a test plan, they tend to follow it. This doesn't sound so bad, but bear with me here. If your test plan involves adding an item to a cart and clicking the "check out" button, then the tester is going to perform that action. But what happens if a user clicks the "check out" button without adding an item to her cart? You're not going to find out because it's not in your test plan. Your manual-testing coverage is only going to be as good as the coverage in your formal test plan.

So how do you go about testing features rapidly without a formal test plan? It's not rocket science, but there are a few good general guidelines to help get you started and on the road to rapid-testing cycles that become so easy that your developers don't even realize they're performing them.


Identify main functions

The first step of a rapid-testing phase is to identify the main functions to be tested. The granularity of these functions depends on the scope of the product or features being tested, but might include tasks such as "registering a new account" or "posting a comment." Once the main tasks have been identified, give them a rough prioritization and work through them in turn, following the two steps below.


Test ideal paths first

In the above example, I talked about adding a product to a cart and checking out. While the case of checking out with an empty cart needs testing, it's much more important to first test the actions that users are most likely to perform: the ideal use path. By testing the actions you expect users to perform first, you find the most problematic bugs up front. You want to avoid assigning developer time to fixing corner cases when the main common cases are also broken, and you want to identify broken main cases as soon as possible.


Boundary testing

A common strategy when testing software is to test inputs using boundary conditions. In web applications, this usually translates to testing a set of inputs in sequence. First, test known good inputs to check that they are accepted properly. Next, test known bad values that you expect to trigger particular errors. After known bad comes predictably bad input: data that you can expect users to get wrong, such as extra leading or trailing spaces, or a name in an email address field. Finally, test extreme inputs, in which users enter something wildly divergent from what is expected (long strings of garbage, wrong data types, etc.). By testing in this order, you uncover the important bugs quickly and make the most efficient use of your testing time.
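The four tiers might look like this for an invented cart-quantity validator (the function, its limits, and all the sample inputs are hypothetical; the point is the ordering of the tiers):

```python
def validate_quantity(raw):
    # Hypothetical validator for a cart quantity field; returns
    # (ok, value_or_error_message). Invented for illustration.
    s = raw.strip()
    if not s.isdigit():
        return (False, "quantity must be a whole number")
    n = int(s)
    if not 1 <= n <= 99:
        return (False, "quantity must be between 1 and 99")
    return (True, n)

# The four tiers, in the order they should be tested:
known_good      = ["1", "5", "99"]
known_bad       = ["0", "100", "-3"]   # expect specific errors
predictably_bad = [" 5 ", "five"]      # plausible user mistakes
extreme         = ["9" * 1000, "", "<script>alert(1)</script>"]

for raw in known_good:
    assert validate_quantity(raw)[0], raw
for raw in known_bad + extreme:
    assert not validate_quantity(raw)[0], raw
# Predictably bad input gets a per-case decision: clean up or reject.
assert validate_quantity(" 5 ") == (True, 5)  # surrounding spaces cleaned
assert not validate_quantity("five")[0]
```

Working down the tiers in order means the failures most likely to affect real users surface before the exotic ones do.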

For more advice and discussion regarding testing without test plans for web applications, as well as general web application QA discussion, visit Derek Sisson's http://www.philosophe.com.



Building Scalable Web Sites: Building, Scaling, and Optimizing the Next Generation of Web Applications
ISBN: 0596102356
Year: 2006
Author: Cal Henderson