Assessing Solution Stability


Lead Advocacy Group: Test

Various forms of testing are used to assess stability. As discussed next, common test types include these:

  • Regression testing

  • Functional testing

  • Usability testing

  • System testing

Although the names might differ a bit in various industries, the intent and purpose of each are the same: assess solution stability from various perspectives. Taken together, the output of these tests forms a holistic view of a solution's stability and therefore its readiness.

Regression Testing

When an iterative approach is used, it is possible that new builds disrupt previously completed solution components. As such, regression testing is used to retest what was previously built and successfully tested to make sure it still works. Depending on what is called for in a test plan, regression testing falls somewhere on the scale from complete retesting of all previously completed components to just exploratory testing that spot-checks selected functionality and capabilities.
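For illustration, the following is a minimal regression-test sketch in Python; the calculate_order_total function and its tests are assumed examples, not part of any specific solution. The tests passed against a previous build and are re-run against every new build to confirm the component still behaves as before. At the exploratory end of the scale, only a selected subset of such tests would be re-run.

    # Minimal regression-test sketch (illustrative assumptions only).
    # calculate_order_total stands in for a component completed and
    # tested in an earlier iteration; the tests are re-run on each new
    # build to confirm it still works as it did before.

    import unittest


    def calculate_order_total(items):
        """Hypothetical component from a prior iteration: (qty, unit_price) pairs."""
        return round(sum(qty * price for qty, price in items), 2)


    class OrderTotalRegressionTests(unittest.TestCase):
        def test_total_for_typical_order(self):
            self.assertAlmostEqual(
                calculate_order_total([(2, 9.99), (1, 24.50)]), 44.48, places=2
            )

        def test_total_for_empty_order(self):
            self.assertEqual(calculate_order_total([]), 0)


    if __name__ == "__main__":
        unittest.main()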

Functional Testing

As mentioned previously, functional testing assesses whether a solution behaves as desired and expected and functions according to its documented requirements. This includes evaluating the overall flow of a solution, how easy it is to navigate through a solution, how intuitive it is to enter data, how easy it is to retrieve data, and how clearly a solution presents data. As such, functional testing closely aligns with previously defined user scenarios and activities and is not typically performed until enough solution components have been developed and integrated to facilitate this type of testing.
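As a sketch of how a functional test maps to a documented requirement, the following Python example assumes a hypothetical AccountService component and a user scenario of creating and then retrieving an account; the names and validation rule are illustrative only.

    # Functional-test sketch (illustrative assumptions only). Each test maps
    # to a documented requirement / user scenario, e.g. "a user can create an
    # account and then retrieve it by e-mail address."

    import unittest


    class AccountService:
        """Stand-in for the solution component under test."""

        def __init__(self):
            self._accounts = {}

        def create_account(self, email, name):
            if not email or "@" not in email:
                raise ValueError("a valid e-mail address is required")
            self._accounts[email] = {"email": email, "name": name}
            return self._accounts[email]

        def get_account(self, email):
            return self._accounts.get(email)


    class CreateAccountScenarioTests(unittest.TestCase):
        def test_user_can_create_and_retrieve_account(self):
            service = AccountService()
            service.create_account("pat@example.com", "Pat")
            self.assertEqual(service.get_account("pat@example.com")["name"], "Pat")

        def test_invalid_email_is_rejected(self):
            service = AccountService()
            with self.assertRaises(ValueError):
                service.create_account("not-an-email", "Pat")


    if __name__ == "__main__":
        unittest.main()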

Usability Testing

A solution might be technically compliant with its requirements, but is it usable? Usability testing answers this question. It is a user-perspective set of tests that is similar to functional testing, but instead of assessing how a solution behaves, usability testing concentrates on how users and administrators interact with a solution. Sometimes it measures how intuitive a solution is.

Keep in mind that usability extends to all aspects of a solution and its supporting materials, such as configuration guides. That is, in addition to assessing the flow of the solution itself, supporting materials such as administrator manuals and online help need to be considered, too.

System Testing

Unlike the other types of testing just discussed, system testing is really a category of tests used to assess a solution as a whole and to evaluate how a solution integrates with other solutions already in production and operates within its target environment(s). As such, system testing needs to be performed in production or in a production-like environment. System testing commonly includes the following types of tests:

  • Deployment testing

  • Disaster recovery testing

  • Integration testing

  • Performance testing

  • Capacity testing

Deployment Testing

Deployment testing (sometimes called release testing) evaluates the procedures used to roll out a solution to its final destination (e.g., a production environment). It considers deployment of all of the different solution aspects, such as deployment tools and scripts, and it verifies that deployment documentation is not only accurate but also sufficiently detailed for operations personnel. This testing also gives an operations team an opportunity to identify issues that could prevent successful implementation.
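A sketch of a post-deployment verification ("smoke test") is shown below; the health endpoint, URL, and expected build version are assumptions for illustration and would normally come from the actual deployment documentation.

    # Deployment smoke-test sketch (illustrative assumptions only). After the
    # rollout scripts run, a check like this confirms the deployed solution
    # responds and reports the expected build.

    import json
    import sys
    import urllib.request

    DEPLOYED_URL = "http://staging.example.com/health"   # hypothetical endpoint
    EXPECTED_VERSION = "2.4.1"                            # hypothetical build label


    def check_deployment(url, expected_version, timeout=5):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                payload = json.loads(response.read().decode("utf-8"))
        except Exception as exc:
            return False, f"deployed solution not reachable: {exc}"
        if payload.get("version") != expected_version:
            return False, f"unexpected version: {payload.get('version')}"
        return True, "deployment verified"


    if __name__ == "__main__":
        ok, message = check_deployment(DEPLOYED_URL, EXPECTED_VERSION)
        print(message)
        sys.exit(0 if ok else 1)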

Disaster Recovery Testing

Systems-level disaster recovery (DR) testing of a solution in production is typically very hard to perform because few organizations allow a team to "break" production just to test DR. Nor do many organizations have the funding to build out a full mirror of the production environment just to test DR. This is one of those areas that an organization tests as thoroughly as it can afford. Most likely, DR testing will be limited to subsystem testing, such as downing a clustered server to make sure it properly fails over. It might also include smaller tests, such as pulling a drive from a redundant array of independent disks (RAID) to make sure operations continue.
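For the kind of limited drill described above (downing a clustered server), a simple script can record how long the solution was unreachable during failover. The following Python sketch is illustrative only; the polling endpoint, drill duration, and intervals are assumed values.

    # Failover-drill observation sketch (illustrative assumptions only). The
    # script polls the service while the operations team downs the primary
    # node and reports the longest gap in availability it observed.

    import time
    import urllib.request

    SERVICE_URL = "http://cluster.example.com/health"  # hypothetical endpoint
    DRILL_DURATION_SECONDS = 120
    POLL_INTERVAL_SECONDS = 1.0


    def run_failover_drill():
        longest_gap = 0.0
        outage_started = None
        end_time = time.time() + DRILL_DURATION_SECONDS
        while time.time() < end_time:
            try:
                urllib.request.urlopen(SERVICE_URL, timeout=2)
                if outage_started is not None:
                    longest_gap = max(longest_gap, time.time() - outage_started)
                    outage_started = None
            except Exception:
                if outage_started is None:
                    outage_started = time.time()
            time.sleep(POLL_INTERVAL_SECONDS)
        return longest_gap


    if __name__ == "__main__":
        gap = run_failover_drill()
        print(f"Longest observed outage during drill: {gap:.1f} seconds")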

Integration Testing

Integration testing validates successful assimilation of a solution into its target environment(s). It involves looking at the solution from the environment's perspective, to ensure the introduction of a solution does not impede or degrade existing operations, and from the solution's perspective, to make sure the solution is able to coexist and function within each target environment. This can be very challenging because each environment can contain a range of legacy solutions built from vast collections of new and old technologies. Each interaction with each previously deployed solution should be another integration test case.

A big challenge with integration testing is getting legacy solutions to participate in testing. Older legacy solutions are often harder to integrate with and are frequently closed systems. For example, a critical mainframe-based solution might generate analysis data only on a monthly cycle. Testing with this system will need to work around that cycle, leaving small windows of opportunity for testing.
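The following Python sketch illustrates one way to handle such a constraint: rather than waiting for the live monthly cycle, the new solution's importer is exercised against a captured sample of the legacy extract. The record layout and field names are assumptions for illustration.

    # Integration-test sketch (illustrative assumptions only). Because the
    # legacy extract is produced monthly, the test runs against a captured
    # sample rather than the live system.

    import unittest

    SAMPLE_LEGACY_EXTRACT = (
        "ACCT0001|2006-01-31|1043.22\n"
        "ACCT0002|2006-01-31|88.10\n"
    )


    def parse_legacy_extract(text):
        """New solution's importer for the legacy pipe-delimited monthly extract."""
        records = []
        for line in text.strip().splitlines():
            account, as_of, balance = line.split("|")
            records.append({"account": account, "as_of": as_of, "balance": float(balance)})
        return records


    class LegacyExtractIntegrationTests(unittest.TestCase):
        def test_sample_extract_is_imported_without_loss(self):
            records = parse_legacy_extract(SAMPLE_LEGACY_EXTRACT)
            self.assertEqual(len(records), 2)
            self.assertEqual(records[0]["account"], "ACCT0001")
            self.assertAlmostEqual(records[1]["balance"], 88.10)


    if __name__ == "__main__":
        unittest.main()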

Performance Testing

Often, a solution has time-based performance requirements. Performance testing measures solution performance in production or a production-like environment (e.g., a staging environment). Depending on what is called for in a test plan, performance testing can involve testing individual subsystems (e.g., order processing) up through testing whether all solution components work together in their target environment(s). An example of a performance requirement to test is: a solution must process a "create new account" transaction within 3 seconds under a load of 100 concurrent users and within 5 seconds under a load of 500 concurrent users. Testing this requirement involves exercising a solution as a whole, but it might also involve measuring each component involved in handling the request (e.g., 0.1 seconds allocated to the Web server).
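A minimal Python sketch of testing that requirement might look like the following; create_new_account is a stand-in for the real transaction (for example, an HTTP call to the deployed solution), and the simulated service time is an assumption.

    # Performance-test sketch for the requirement quoted above: "create new
    # account" within 3 seconds under 100 concurrent users and within 5 seconds
    # under 500 concurrent users.

    import time
    from concurrent.futures import ThreadPoolExecutor


    def create_new_account(user_id):
        """Stand-in for the real transaction; replace with a call to the solution."""
        time.sleep(0.05)  # simulated service time (assumption)
        return True


    def measure_latencies(concurrent_users):
        """Run the transaction once per simulated user and record each latency."""
        def timed_call(user_id):
            start = time.perf_counter()
            create_new_account(user_id)
            return time.perf_counter() - start

        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            return list(pool.map(timed_call, range(concurrent_users)))


    if __name__ == "__main__":
        for users, limit_seconds in [(100, 3.0), (500, 5.0)]:
            latencies = measure_latencies(users)
            worst = max(latencies)
            verdict = "PASS" if worst <= limit_seconds else "FAIL"
            print(f"{users} users: slowest {worst:.2f}s (limit {limit_seconds}s) -> {verdict}")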

Capacity Testing

Capacity testing validates capacity and growth planning. It enables a team to understand usability and operational behavior as user and data loads are incrementally increased beyond what is expected in production. Testing involves assessing not only a solution but also the environment(s) in which the solution operates (e.g., assessing network impacts).

Testing usability under load in addition to operational performance is important because sometimes a service might appear as if it is available and servicing users, but from a usability perspective, the service has degraded so much that it is unacceptable to users. Another example is when network capacity is added as part of deploying a solution, but because of network latency, a solution does not provide adequate perceived performance.
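The following Python sketch illustrates stepping load beyond the expected production level while recording both raw availability and perceived usability; process_request, the load steps, and the 2-second acceptability threshold are assumed values for illustration.

    # Capacity-test sketch (illustrative assumptions only). For each load step,
    # record both availability ("did the call succeed?") and usability ("did it
    # respond fast enough to be acceptable?").

    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    ACCEPTABLE_RESPONSE_SECONDS = 2.0  # assumed usability threshold


    def process_request(_):
        """Stand-in for the real call into the solution."""
        time.sleep(random.uniform(0.01, 0.05))
        return True


    def run_load_step(concurrent_users):
        def timed_call(i):
            start = time.perf_counter()
            ok = process_request(i)
            return ok, time.perf_counter() - start

        with ThreadPoolExecutor(max_workers=min(concurrent_users, 200)) as pool:
            results = list(pool.map(timed_call, range(concurrent_users)))
        available = sum(1 for ok, _ in results if ok) / len(results)
        usable = sum(1 for ok, t in results if ok and t <= ACCEPTABLE_RESPONSE_SECONDS) / len(results)
        return available, usable


    if __name__ == "__main__":
        # Step load from an assumed expected production level (200 users) upward.
        for users in (200, 400, 800, 1600):
            available, usable = run_load_step(users)
            print(f"{users} users: available {available:.0%}, usable {usable:.0%}")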



