What Is Testing?

The Test discipline of the RUP product acts in many respects as a service provider to the other disciplines. Testing focuses primarily on the evaluation or assessment of quality and is realized through a number of core practices:

  • Finding and documenting defects in software quality

  • Generally advising team members about perceived software quality

  • Validating through concrete demonstration the assumptions made in design and requirement specifications

  • Validating that the software product functions as it was designed to function

  • Validating that the requirements have been implemented appropriately

An interesting but somewhat subtle difference between the Test discipline and the other disciplines in the RUP is that testing is essentially tasked with finding and exposing weaknesses in the software product. To succeed, this effort requires a somewhat negative and destructive, rather than constructive, approach: "How could this software fail?" The challenge is to avoid two extremes: an approach that does not suitably and effectively challenge the software and expose its inherent problems and weaknesses, and an approach so negative that it is unlikely ever to find the quality of the software product acceptable.
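This destructive mindset can be made concrete with a small sketch. The function `parse_age` and its contract below are hypothetical illustrations, not taken from the text; the point is the shape of the tests, which deliberately probe boundaries and invalid inputs rather than only the intended usage:

```python
# Failure-seeking ("negative") tests for a hypothetical parse_age()
# whose assumed contract is: accept integer strings 0-130, raise
# ValueError otherwise.

def parse_age(text: str) -> int:
    """Toy implementation so the checks below can run."""
    age = int(text)              # ValueError for non-numeric input
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

def expect_failure(bad_input: str) -> None:
    """Assert that parse_age rejects an input it should never accept."""
    try:
        parse_age(bad_input)
    except ValueError:
        return                   # the expected failure occurred
    raise AssertionError(f"parse_age accepted bad input: {bad_input!r}")

# Constructive check: the software does what it was designed to do.
assert parse_age("42") == 42
assert parse_age("0") == 0 and parse_age("130") == 130

# Destructive checks: "How could this software fail?"
for bad in ["-1", "131", "forty-two", ""]:
    expect_failure(bad)
```

Note that the constructive check alone would pass against many broken implementations; it is the boundary and invalid-input cases that expose weaknesses.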

Based on information presented in various surveys and essays, software testing is said to account for 30 to 50 percent of total software development costs. It is therefore perhaps surprising that most people believe computer software is not well tested before it is delivered. This contradiction is rooted in a few key issues:

  • Testing is usually done late in the lifecycle, keeping project risks and the number of unknown factors very high for far too long, rather than testing with every iteration, as the RUP advocates.

  • Testability is not considered in the product design (again, contrary to the RUP), which increases the complexity of testing many times over, makes test automation difficult, and in some cases makes certain types of tests impossible.

  • Test planning is done in isolation from the system under test (SUT), before any actual testing, when the least is known about the system under test. In contrast, the RUP advocates detailed test planning by iteration, using the experience of the previous iteration.

Beyond these issues, we also have to acknowledge that testing software is enormously challenging. The different ways a given program can behave are unquantifiable, and the number of potential tests for that program is arguably limited only by the imagination of the tester.

Testing is often done without a guiding methodology, resulting in a wide variance of success from project to project and organization to organization; success becomes primarily a factor of the quality, skills, and experience of the individual tester. Testing also suffers when insufficient use is made of productivity tools that make its laborious aspects manageable. A lot of testing is conducted without tools for effectively managing test assets such as extensive test data, without tools for evaluating detailed test results, and without appropriate support for automated test execution. While the flexibility of use and complexity of software make "complete" testing an impossible goal in all but the most trivial systems, an appropriately chosen methodology and the use of proper supporting tools can improve the productivity and effectiveness of the software testing effort.
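A minimal sketch of what tool-supported, data-driven test execution looks like in miniature (the `discount` function and its cases are hypothetical, invented only for illustration): test data is held as a manageable asset separate from the harness, and a small runner executes every case and collects results for evaluation.

```python
# Sketch of data-driven, automated test execution: test data is kept
# as a separate asset (a list here; in practice a file or database),
# and a harness executes the cases and evaluates the results.

def discount(total: float, is_member: bool) -> float:
    """Toy system under test: members get 10% off orders over 100."""
    if is_member and total > 100:
        return round(total * 0.90, 2)
    return total

# Test data managed separately from execution code, so cases can be
# added or reviewed without touching the harness.
CASES = [
    # (total, is_member, expected)
    (50.0,  False, 50.0),
    (150.0, False, 150.0),
    (150.0, True,  135.0),
    (100.0, True,  100.0),   # boundary: "over 100" excludes exactly 100
]

def run_suite():
    """Execute every case and record actual vs. expected results."""
    results = []
    for total, member, expected in CASES:
        actual = discount(total, member)
        results.append((total, member, expected, actual, actual == expected))
    return results

failures = [r for r in run_suite() if not r[4]]
assert not failures, f"failing cases: {failures}"
```

Even this tiny separation of test data from the harness shows why tooling matters: growing the suite means adding rows, not writing more execution code.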

For "safety-critical" systems where a failure can harm people (such as air-traffic control, missile guidance, or medical delivery systems), high-quality software is essential for the success of the system. For a typical MIS system, the criticality of the system may not be as immediately obvious as in a safety-critical system, but it's likely that a serious defect could cost the business using the software considerable expense in lost revenue or possible legal costs. In this "information age" of increasing demand on the provision of electronically delivered services over media such as the Internet, many MIS systems are now considered "mission-critical"; that is, when software failures occur in these systems, companies cannot fulfill their functions and experience massive losses.

Many projects do not pay much attention to performance testing until very late in the development cycle. For systems that will be in continuous use (24/7), for distributed systems, and for systems that must scale up to large numbers of simultaneous users, it is important to assess performance early and continuously to verify that the expected performance will be met. This can start in the Elaboration phase, when enough of the architecture is in place to begin exercising the system under various load conditions.
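As a hedged illustration of such an early load check (all names and the 500 ms latency budget are assumptions for the sketch, not from the text), one can exercise an operation with many concurrent callers and assert a latency budget, so that regressions surface during Elaboration rather than at the end of the project:

```python
# Sketch of a simple concurrent load check: simulate simultaneous
# users of an operation and verify an assumed latency budget.

import time
from concurrent.futures import ThreadPoolExecutor

def operation_under_test() -> None:
    """Stand-in for a real system call; here just a short sleep."""
    time.sleep(0.01)

def timed_call(_: int) -> float:
    """Invoke the operation once and return its latency in seconds."""
    start = time.perf_counter()
    operation_under_test()
    return time.perf_counter() - start

def load_test(users: int = 50, calls_per_user: int = 4) -> float:
    """Run calls across many concurrent workers; return worst latency."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(timed_call, range(users * calls_per_user)))
    return max(latencies)

worst = load_test()
# Assumed service-level target of 500 ms per call.
assert worst < 0.5, f"latency budget exceeded: {worst:.3f}s"
```

Run with each iteration's build, such a check turns "expected performance will be met" from a hope into a continuously verified assertion.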

A continuous approach to quality, initiated early in the software lifecycle, can significantly lower the cost of completing and maintaining the software. This greatly reduces the risk associated with deploying poor-quality software.



The Rational Unified Process Made Easy: A Practitioner's Guide to the RUP