Overview of MITs

The Most Important Tests (MITs) method was developed as an aid to sizing test efforts based on the risk of failure in the system. While it was designed mainly for top-down system, integration, and function testing, its methods are viable at all levels of testing. The core of the MITs method is a form of statistical testing in which testers use several techniques to identify the areas that need to be tested and to evaluate the risks associated with the various components, features, and functions of the project. These risks are translated into a prioritized ranking that identifies the most important areas for testing to focus on. Using this ranking, testers and management can focus the test effort where it does the most good. The thoroughness of the testing can then be agreed upon in advance and budgeted accordingly.

In the ideal situation, the tester, having completed a thorough analysis, presents a test plan to management and negotiates for the time and resources necessary to conduct a comprehensive test effort. In reality, the test effort is trapped in the space between the end of the development cycle and the project release date. The impact of this constraint can vary depending on the timeliness of the code turnover from development and the flexibility of the release date. In most cases, trade-offs will have to be made in order to fit the test effort into the necessary time frame. The MITs method provides tools to help you make these trade-off decisions.

If you are in an Agile development effort, the design changes daily. You may get new code every day as well. The tests you ran yesterday may be meaningless today. Planning tests is a waste of time, and you don't have any time to waste. The manager wants to know if the effort is on schedule, but half the functions that you had been testing (and had considered complete) are not in the latest release of the code. Your developer has decided that she can't fix the bug that is blocking your other testing until your business partner (the customer) decides on the sequence of the Q&A dialogs. How do you explain all this to your team leader? MITs can help you with this as well.

What MITs Does

In the planning phase, the MITs method provides tools for sizing that allow the test effort to be fitted into a specified time frame. The method allows testers and managers to see the impact of trade-offs in resources and test coverage associated with various time lines and test strategies. The method uses worksheets and enumeration to measure time costs/savings associated with various trade-offs. The MITs tools, such as the worksheets and the test inventory, serve as aids in negotiating resources and time frames for the actual test effort.
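To make the trade-off arithmetic concrete, here is a minimal sketch of the kind of calculation such a worksheet captures. The field names, factors, and numbers are hypothetical stand-ins for illustration, not the actual MITs worksheet layout.

```python
# Hypothetical sizing arithmetic of the kind a test sizing worksheet captures.
# All names and numbers are invented for illustration.

tests_identified = 400           # total tests enumerated in the inventory
tests_selected = 240             # the "most important" subset after ranking
hours_per_test = 0.75            # average design + run + log time per test
rerun_factor = 1.5               # allowance for retesting fixes
testers = 3
test_hours_per_week = 30         # productive test hours per tester per week

total_hours = tests_selected * hours_per_test * rerun_factor
weeks = total_hours / (testers * test_hours_per_week)
coverage = tests_selected / tests_identified

print(f"Effort: {total_hours:.0f} test hours, about {weeks:.1f} weeks")
print(f"Planned coverage: {coverage:.0%} of the identified tests")

# The trade-off view: to fit a 2-week window, solve for the testers required.
target_weeks = 2
testers_needed = total_hours / (target_weeks * test_hours_per_week)
print(f"To finish in {target_weeks} weeks: {testers_needed:.1f} testers")
```

Changing any one input (coverage, staffing, or calendar time) immediately shows what must give elsewhere, which is exactly the negotiation the worksheet is meant to support.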

During the testing phase, MITs tools facilitate tracking testing progress and determining the logical end of the test effort. The method uses S-curves for estimation, test tracking, and status reporting. S-curves show the status of the testing and the system at a glance. The curves show the rate of progress and the magnitude of the open issues in the system. The graphs also show the probable end of the test effort and indicate clearly when the test set has exhausted its error-finding ability.
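As an illustration, here is a minimal sketch (with invented data, plotted with matplotlib) of the two cumulative counts that typically fuel these graphs. The flattening tail of the bugs-found curve is the visual cue that the test set has exhausted its error-finding ability.

```python
# Minimal S-curve sketch: cumulative tests run and bugs found, per week.
# The data points are invented for illustration.
import matplotlib.pyplot as plt

weeks = range(1, 11)
tests_run = [10, 35, 90, 170, 250, 310, 350, 372, 380, 383]    # cumulative
bugs_found = [2, 9, 25, 48, 70, 84, 91, 94, 95, 95]            # cumulative

fig, ax = plt.subplots()
ax.plot(weeks, tests_run, marker="o", label="Tests run (cumulative)")
ax.plot(weeks, bugs_found, marker="s", label="Bugs found (cumulative)")
ax.set_xlabel("Week")
ax.set_ylabel("Cumulative count")
ax.set_title("Test progress S-curves")
ax.legend()
plt.show()
```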

The MITs method also measures the performance of the test effort. A performance metric based on the percentage of errors found during the test cycle is used to evaluate the effectiveness of the test coverage, so that test methods, assumptions, and inventories can be adjusted and improved for future efforts.
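A minimal sketch of one common formulation of such a metric, with invented numbers: the share of all known defects that the test cycle caught before they reached the field.

```python
# Hypothetical performance calculation: the percentage of all known defects
# that were found during the test cycle rather than reported from the field.

bugs_found_in_test = 95
bugs_found_in_field = 12    # reported after release, over some fixed window

performance = bugs_found_in_test / (bugs_found_in_test + bugs_found_in_field)
print(f"The test effort found {performance:.0%} of the known defects")
# A low percentage flags gaps in the coverage, assumptions, or inventory.
```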

How MITs Works

The process works by answering the following questions:

1. What do we think we know about this project?

2. How big is the test effort?

3. If we can't test everything, what should we test?

4. How long will the effort take?

5. How much will it cost? (How much can we get?)

6. How do I identify the tests to run?

7. Are we on schedule? Have we tested enough?

8. How successful was the test effort? Was the test coverage adequate? Was the test effort adequate?

Answers

1. We find out by stating (publishing) the test inventory. The inventory contains the list of all the requirements and specifications that we know about, and it includes our assumptions. In a RAD effort, we often start with only our assumptions, because there may not be any formal requirements or specifications. You can start by writing down what you think the system is supposed to do. In projects with formal specifications, there are still assumptions; for example, that testing the system on these three operating systems will be adequate. If we do not publish our assumptions, we are deprived of a valuable opportunity to have incorrect assumptions corrected.

2. How many tests are there? We find out by enumerating everything there is to test. This is not a count of the things we plan to test; it is a count of all the tests that can be identified. This begins the expansion of the test inventory.

3. The most important things, of course! We use ranking criteria to prioritize the tests, and then we use MITs risk analysis to determine the most important test set from the inventory. (A sketch of this kind of ranking follows this list.)

4. Once the test set has been identified, fill out the MITs sizing worksheet to size and estimate the effort. The completed worksheet forms the basis for the test agreement.

5. Negotiate with management for the resources required to conduct the test effort. Using the worksheet, you can calculate how many tests, testers, machines, and so on will be required to fit the test effort into the desired time line. Use the worksheet to understand and explain resource and test coverage trade-offs in order to meet a scheduled delivery date.

6. Use the MITs analysis and the test inventory to pick the most important areas first, and then perform path and data analysis to determine the most important tests to run in each area. Once you have determined the most important tests for each inventory item, recheck your inventory and the sizing worksheet to make sure your schedule is still viable. Renegotiate if necessary. Start running tests, and develop new tests as necessary. Add your new tests to the inventory.

7. Use S-curves to track test progress and help determine the end of the test effort.

8. Use the performance metric to answer these questions and to improve future test efforts. The historical record of what was accomplished last time is the best starting point for improvement this time. If the effort was conducted in a methodical, reproducible way, the chances of duplicating and improving on it are good.
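As a concrete (and entirely hypothetical) illustration of the ranking idea in step 3: score each inventory item against a few criteria, sort by the resulting rank, and draw a cut line. The criteria, weights, and formula below are invented; the MITs method defines its own ranking criteria, which each project tailors.

```python
# Hypothetical risk ranking over a tiny test inventory. Criteria, scores,
# and the ranking formula are invented for illustration.

inventory = {
    # item: (failure impact 1-5, usage frequency 1-5, change volatility 1-5)
    "login":            (5, 5, 2),
    "checkout":         (5, 4, 4),
    "profile settings": (3, 3, 2),
    "report export":    (2, 2, 1),
}

def rank(scores):
    impact, usage, volatility = scores
    return impact * usage + volatility    # simple illustrative formula

for item, scores in sorted(inventory.items(), key=lambda kv: rank(kv[1]),
                           reverse=True):
    print(f"{rank(scores):>3}  {item}")

# Items above an agreed cut line form the most important test set; items
# below it are documented as not tested rather than silently dropped.
```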

As I said before, in the ideal scenario you do all of these things, because all of these steps are necessary if you plan to do the very best test effort possible. The next thing to recognize is that the real scenario is rarely "ideal." The good news is that this method is flexible, even agile. Any steps you perform will add to the value of the test effort; if you don't do them all, there is no penalty or detriment to your effort. Next, the steps are listed in the order that will give you the best return on your investment. This order and the relative importance of the steps differ for different types of development projects.

Different environments have different needs, and these needs mandate different priorities in the test approach. I am going to point out some of the differences and then present different orderings of the steps to complement each type of effort. Finally, I will give specific examples of three different development efforts that were all part of the same systems development effort. The MITs method was used in varying amounts in each of these development test efforts, and it was also used in the system integration test effort that successfully integrated these individual systems.

How to Succeed with MITs

A couple of factors will influence which methods and metrics are the right ones for you to start with and which ones will be the most useful to you. In fact, you most probably use some of these methods already. The first factor is ease of implementation: some of these methods and metrics are much easier than others to implement and to show a good return on investment. Another factor is the development method being used in the project you are approaching.

Plan-driven (heavyweight) development efforts use the same MITs methods as Agile (lightweight) development efforts, but their goals and expectations are different, so the priorities placed on the individual MITs steps are very different. I will go into this in more detail in the next section. I mention it here because, over the years, I have collected lots of feedback from students on these methods. These students come from both heavyweight and lightweight efforts. I find it interesting that testers from both types of efforts agree on the usefulness and ease of implementation of the MITs methods.

Methods That Are Most Useful and Easiest to Implement

The following lists show the methods that have been identified as most useful. They are listed according to respondents' perceptions of their ease of implementation.

EASIEST TO IMPLEMENT

  • Bug tracking and bug-tracking metrics

  • The test inventory and test coverage metrics

  • Planning, path analysis, and data analysis

  • MITs ranking and ranking criteria (risk analysis)

  • The test estimation worksheet

  • Test performance metrics

MORE DIFFICULT TO IMPLEMENT

  • S-curves

  • Test rerun automation

  • Automated test plan generation

Most companies already have well-established bug tracking tools and metrics. Some have developed very sophisticated intranet tracking systems that carry all the way through testing to system support and customer support.

Most test efforts rely heavily on their bug tracking metrics. For the most part, the bug metrics in use are fundamental metrics, with a few derived metrics like mean time between failures and bugs found per hour. MITs uses techniques that allow you to perform analysis based on several types of measurements taken together. Several examples of how to use these techniques to get a superior view of the state of a software system are provided in this book.
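For instance, the derived metrics mentioned above are simple ratios over the fundamental counts that a bug-tracking system already records. A sketch with invented numbers:

```python
# Deriving two common metrics from fundamental bug-tracking counts.
# All values are invented for illustration.

test_hours = 160.0           # total hours spent running tests
bugs_found = 48              # bugs logged during those hours
operating_hours = 520.0      # system time observed under test
failures_observed = 13       # failures seen in that operating time

bugs_per_hour = bugs_found / test_hours
mtbf = operating_hours / failures_observed   # mean time between failures

print(f"Bugs found per test hour: {bugs_per_hour:.2f}")
print(f"MTBF: {mtbf:.1f} hours")
```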

The one tool that I have seen come into its own over the last 10 years is the test inventory. Today, a test inventory is considered a requirement in most test efforts, even if it is a continually evolving one. Ten years ago, almost no one was using an inventory. Still, there is a lot more that the test inventory can do for you as a working tool, as you will see in the next two chapters.

If you are already using the test inventory, you will see some examples of how to get more value from it and how your inventory can help you make the step up to path, data, and risk analysis. Once you are performing risk analysis (even without doing any path and data analysis), you can use the test sizing worksheet, a tool that will change your life.

Test performance metrics and S-curves are closely related and can be implemented at the same time if the team has the graphing tool required to produce S-curves.[1] Ironically, the Agile groups I have worked with have been the ones to see the value in S-curves and take the time and energy to implement them. This is due to the Agile method's need to make quick design changes during the development process.

Agile managers will pay a lot for accurate information about the real status of a project from day to day. On the heavyweight front, Boeing is the only company that I know of that uses S-curves regularly, and it has been using them for years.

The S-curve is one of the best project-tracking imaging tools extant. These graphs provide critical progress information at a glance. Agile efforts usually have collaboration technologies in place that make it easier for them to get team members to report the test numbers that fuel S-curves. So they find it easier to institute this powerful tool than the plan-driven efforts that must go through a more complex and difficult documentation process to accomplish the same reporting.

Test rerun automation is one of the most difficult toolsets to get a positive return from, and yet just about everyone has tried it. The thing to remember about automated test rerun tools is that you only get a payback if the test is rerun, a lot.

Agile efforts are dynamic. The product is continuously evolving, so a static test has a short life span. Capture replay is of little use to an Agile tester. In heavyweight projects, the time required to create and maintain these tests is often the issue: even though the tests might be rerun a lot over a long period of time, management is often hesitant to invest in the creation of tests unless they are quite certain to be replayed and the investment recaptured.
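The payback condition is easy to estimate. Under hypothetical per-test costs, automation wins only once the rerun count passes the point where the saved manual effort covers the scripting and maintenance investment:

```python
# Hypothetical break-even estimate for automating one test's reruns.

cost_to_automate = 6.0        # hours to script the test once
maintenance_per_rerun = 0.2   # hours to keep the script working, per cycle
manual_cost_per_run = 0.5     # hours to run the same test by hand

# Automated cost after n reruns: cost_to_automate + n * maintenance_per_rerun
# Manual cost after n runs:      n * manual_cost_per_run
# Break-even where the two totals are equal:
breakeven = cost_to_automate / (manual_cost_per_run - maintenance_per_rerun)
print(f"Automation pays back after about {breakeven:.0f} reruns")   # ~20
```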

[1] The graphing tool that comes with Microsoft Office, Microsoft Graph, is sufficient to do this.


