14.1 Activities in the Defined Software Process

A defined development process consists of a structured sequence of activities, with intermediate results and eventually the product.

Activities and Products

Figure 14.1 shows an example outlining general activities and results of a software process. We have intentionally not selected one of the processes discussed in the following sections; instead, we will use a simplified example, a small company developing customer-specific GUI components.

Figure 14.1: Activities, intermediate results, and products in the development process.

The development proceeds as follows: The first step describes the functions of the component, the programming interface, and the user interface. This description is reviewed by the customer before implementation begins. The next step comprises the design of the class model, the tests, and the implementation of the component. The component is tested first internally and later by the customer, who integrates it into their own software. Finally, the finished component is deployed and the project is completed.

We also use this example to explain a few basic terms: A process consists of a set of steps or activities. How an activity is carried out is described in the process, and each activity leads to a defined result. The rectangles in Figure 14.1 are activities, such as "Describe function and interface." The result of an activity is either a product (i.e., something that will be delivered to the project customer) or an internal, intermediate result (e.g., a project document or a software model).

Defined intermediate results allow us to determine the project's progress. These intermediate results are shown by parallelograms in Figure 14.1. Such intermediate results are often standardized documents. One example of such a document is the Specification. The benefit of text documents is that we can print and read them, which makes them very suitable for later review. In RUP [Kruchten99], intermediate results are software models (like UML Design in Figure 14.1) or executable programs (like the component after the implementation phase in Figure 14.1). Executable intermediate results can represent a more reliable proof of a project's progress than paper-based documentation. Of course, this is only true if we are dealing with finished and tested parts of the product and not with GUI prototypes that can be quickly created by a developer with the appropriate tools.

The objective of a software process is to create a high-quality software product. High quality means that the product meets the documented requirements and the (often) undocumented needs and expectations of customers and users. A variety of activities contribute to the creation of a high-quality product.

Construction Activities

Activities like the definition, design, and implementation of requirements serve to build the product's functionality. Errors occur in the course of these activities. The verification and validation steps described further in the sections that follow serve to remove these errors.

The cheapest and best way to achieve high quality is to avoid errors in the constructive steps. There are various means to this end. Document templates help the project team members to document all requirements completely. Sound knowledge of software engineering methods, and even more their routine practice, helps to avoid systematic errors. And finally, the test-first approach contributes to avoiding errors because it improves the design quality.
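
As a minimal sketch of the test-first idea, here is a JUnit 3-style example with a purely hypothetical CurrencyFormatter class: the tests are written first, and the initially missing class is then implemented just far enough to make them pass. The names, behavior, and rounding rule are assumptions made up for this illustration, not code from a real project.

    import java.text.DecimalFormat;
    import java.text.DecimalFormatSymbols;
    import java.util.Locale;

    import junit.framework.TestCase;

    // Test-first: these tests are written before CurrencyFormatter exists.
    // Making them compile and pass forces us to design the public interface
    // from the caller's point of view.
    public class CurrencyFormatterTest extends TestCase {

        public void testFormatsAmountWithTwoDecimalsAndCurrencyCode() {
            CurrencyFormatter formatter = new CurrencyFormatter("EUR");
            assertEquals("1.50 EUR", formatter.format(1.5));
        }

        public void testRoundsToTwoDecimals() {
            CurrencyFormatter formatter = new CurrencyFormatter("EUR");
            assertEquals("0.67 EUR", formatter.format(2.0 / 3.0));
        }

        // The implementation is written afterwards, just enough to make the tests pass.
        private static class CurrencyFormatter {
            private final String currencyCode;
            private final DecimalFormat twoDecimals =
                    new DecimalFormat("0.00", new DecimalFormatSymbols(Locale.US));

            CurrencyFormatter(String currencyCode) {
                this.currencyCode = currencyCode;
            }

            String format(double amount) {
                return twoDecimals.format(amount) + " " + currencyCode;
            }
        }
    }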

Verification

Verification helps to determine whether or not the results of an activity meet the documented requirements. More specifically, we can determine whether a design reflects all documented requirements or whether the product implements all specified functions correctly. Various testing and reviewing techniques are normally used for verification.

A test means that we run the software in a defined way, while watching or recording the results and evaluating the correctness of these results. There are different types of tests:

  • Function tests verify that the software implements a specified function correctly; for instance, whether rows and columns can be added to a table (see the sketch after this list).

  • Benchmark tests measure the performance of a system (defined hardware and software) and compare it with a reference, namely, an existing system or a given value. For example, a benchmark test for a graphic program determines how long it takes to load a 3D scene on a specific computer.

  • Load tests verify whether or not the software works properly and efficiently under various operating conditions (e.g., a different number of parallel users). A variant of the load test is the stress test, which checks for the behavior of the software under extreme conditions, for example, a very large number of parallel users or under very limited computer resources.

  • Robustness tests verify whether or not the software reacts to errors, such as faulty inputs, exceptions, or insufficient memory, without crashing.

  • Installation tests verify whether or not the software can be properly installed under different conditions.
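
To illustrate the first and fourth of these types, the following JUnit 3-style sketch contains a function test and a robustness test for a hypothetical TableComponent, loosely inspired by the GUI-component example above. The class and its methods are invented for this illustration, with a minimal stand-in implementation included so the tests can run.

    import java.util.ArrayList;
    import java.util.List;

    import junit.framework.TestCase;

    public class TableComponentTest extends TestCase {

        // Function test: the specified function "add rows and columns" works.
        public void testAddColumnAndRow() {
            TableComponent table = new TableComponent();
            table.addColumn("Name");
            table.addRow();
            assertEquals(1, table.getColumnCount());
            assertEquals(1, table.getRowCount());
        }

        // Robustness test: faulty input leads to a defined error instead of a crash.
        public void testNegativeColumnIndexIsRejected() {
            TableComponent table = new TableComponent();
            table.addColumn("Name");
            try {
                table.getColumnName(-1);
                fail("expected IllegalArgumentException for a negative index");
            } catch (IllegalArgumentException expected) {
                // the reaction to the faulty input is specified and verified here
            }
        }

        // Minimal stand-in for the component under test, just enough to run the tests.
        private static class TableComponent {
            private final List columns = new ArrayList();
            private int rows = 0;

            void addColumn(String name) { columns.add(name); }
            void addRow() { rows++; }
            int getColumnCount() { return columns.size(); }
            int getRowCount() { return rows; }

            String getColumnName(int index) {
                if (index < 0 || index >= columns.size()) {
                    throw new IllegalArgumentException("illegal column index: " + index);
                }
                return (String) columns.get(index);
            }
        }
    }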

A review means that a group of reviewers examines the result of an activity, such as a documented requirement or an important class definition. Such a group can consist of various members, including corporate managers, customer representatives, users, or developers. If a structured review process is followed (e.g., inspections [Gilb93]), then a review can often find errors more economically than tests. One reason is that a reviewer finds an error at the very place where it was caused, for instance when spotting a wrong variable assignment while reading the source code. In contrast, when a test finds a symptom, the developer has to go through a time-consuming debugging process to find the cause [Humphrey95]. Another positive side effect of reviews is the transfer of knowledge: through their work as reviewers, project members get to know new parts of the project. One reviewer remarked that reviewing the JUnit assertions together with an analyst or customer had helped tremendously to discover missing and wrong test cases.
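
That remark can be illustrated with a small, invented example: when the assertions state the business rule directly, an analyst or customer can spot a wrong expected value or a missing case (say, the behavior exactly at the threshold) while reading the test. The Order class and its discount rule are assumptions made up for this sketch.

    import junit.framework.TestCase;

    // Assumed business rule: orders of 100.00 or more get a 5% discount,
    // smaller orders get none. Each assertion can be checked by an analyst.
    public class DiscountRuleTest extends TestCase {

        public void testNoDiscountBelowOneHundred() {
            assertEquals(99.00, new Order(99.00).total(), 0.001);
        }

        public void testFivePercentDiscountFromOneHundredOn() {
            assertEquals(95.00, new Order(100.00).total(), 0.001);
        }

        // A reviewer might ask: what about 99.99, or a negative amount?
        // Missing cases like these often surface in such a review.

        // Minimal stand-in for the class under test.
        private static class Order {
            private final double amount;

            Order(double amount) { this.amount = amount; }

            double total() {
                return amount >= 100.00 ? amount * 0.95 : amount;
            }
        }
    }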

In addition to various types of tests, XP also includes a very efficient form of reviewing. In pair programming (see Chapter 1, Section 1.2), the two developers continuously and mutually review the code as it is being written. Pair programming reduces the number of errors and distributes knowledge about the code across the project team.

Verification techniques check whether the software process was carried out correctly and whether each phase produced the specified result. If a software product passes verification, the developers have completed their work successfully. In this sense, unit tests are a verification measure: they ensure that the developer has correctly implemented all methods that the component supplies in its public interface.

Validation

Unfortunately, successful verification is no guarantee of the successful use of a software product, because the specification does not necessarily describe the customer's real wishes. For this reason, we need validation as an additional step. Validation serves to determine whether or not the product meets the user's requirements. Validation includes using the software within the target environment, for example in a pilot project or, in the case of standard products, a beta test program. Validation should also start as early as possible.

For example, an early and very effective form of validation consists of agreeing on a common understanding of the requirements between the project team and the customer. The software process models discussed next use techniques such as scenarios, prototypes, and storyboards to this end.

Quality Assurance

One might assume that quality assurance is an issue already covered by the verification and validation activities just mentioned. In the real world, this would be too optimistic a stance. Just because there is a written development process does not mean that it will be observed by the project members. Good intentions of doing the verification and validation activities are often defeated by short deadlines or the general lack of sexiness of such activities in the project routine. When project members do not see the process as being appropriate, for whatever reasons, they will move slowly but surely away from what's on paper toward a chaotic way of working, so that the pretty development process deteriorates to "shelfware."

Quality assurance is used to prevent this from happening and to ensure that both developers and corporate managers see how and at what level of quality the actual software process flows. Figure 14.2 shows the tasks involved in quality assurance.

Figure 14.2: Quality-assurance tasks.

Quality assurance supports the project manager in planning a project, particularly when planning suitable verification and validation measures and when adapting processes to the project requirements.

Quality assurance monitors the process, comparing the actual process with what is documented. In particular, quality assurance checks whether or not activities (e.g., project meetings, document reviews, tests) are actually taking place and whether the intermediate results and the final product resemble what was defined in the process. Although this may sound like surveillance by someone who knows better, a good project involves all team members in the planning phase, so that the process reflects their own realistic approach. Ideally, quality assurance is the team's conscience, reminding everybody of their good intentions. Monitoring a process also includes collecting data about compliance with deadlines, the number of errors found during verification activities, and similar things. This data provides information about how appropriate and efficient the process really is.

Quality assurance involves both the project team and corporate managers. It prepares the collected data for presentation and updates the project team about the efficiency of reviews and (system) tests, number of reported problems to be removed, deviation from the planning milestones, and the like. In addition to pure information, quality assurance also deals with significant deviations from the process. Assume, for example, that quality assurance finds that a certain member of the team does not develop tests for his or her modules. The quality assurance process involves such steps as discussion of the problem with the developer, talking about the motivation for tests (perhaps based on this book), and asking the developer to create the tests. If the developer does not react to encouragement and persuasion, quality assurance has to report the problem to the project manager, because the product's quality is at stake, and the project manager will have to solve the problem together with the team member.

In addition, quality assurance assumes an unpopular policing function. If problems cannot be removed within the project (e.g., when the project manager thinks that there is actually no time to be wasted for testing), then management should be informed. This very unpleasant task is crucial for a company to prevent a faulty product from being delivered due to time shortage, thereby aggravating the customer.

If only for this reporting to line management, it would be beneficial to assign quality assurance to staff outside the regular project hierarchy; in particular, this would ensure that the project manager can't silence them when they come up with bad news. Many of the newer development models (like RUP and XP, discussed in the sections that follow) do not mention quality assurance explicitly as a role. Instead, they allocate these tasks to the project manager, a "coach," or the customer on site. Even a strong standard like CMMI [CMU00] explicitly allows quality assurance to be assumed by all project members jointly, without specifically appointing a quality manager, especially in organizations with an open communications culture. However, CMMI recommends that in such cases quality assurance be implemented by qualified staff according to a well-defined plan rather than just hoping for the best.

Unit tests are relevant for some of the activities introduced above. Creating unit tests by the test-first principle is a constructive quality measure, because it improves our design. Conducting unit tests is also a verification measure. The activities and results of unit tests are part of the development process and thus subject to quality assurance.



