Introduction

I live in a software development world where product development is not an orderly, consistent march toward a tangible goal. "The Project Plan" usually consists of a laundry list of functions dropped off by somebody from marketing. Management embellishes "The Plan" with start and end dates of highly questionable origin that are totally unreachable. The design and implementation of the product are clandestinely guarded by developers. The product routinely arrives in test virtually unannounced and several weeks late. The tester has not finished the test plan because no one is quite sure what the thing does. The only sure thing is that the product must ship on time, next week.

That is software development: chaotic and harried. This book is dedicated to the proposition that this development system is primitive and enormously wasteful. This book presents several methods that provide better ways to perform the business of understanding, controlling, and delivering the right product to the market on time. These methods, taken singly or in groups, provide large cost savings and better-quality products for software developers.

I am a practitioner. I work where the rubber meets the road. I am often present when the user puts their hands on the product for the first time. I deal with real solutions to real problems. I also deal with the frustration of both the customers (who are losing money because the product is failing in some way) and the front-line support people. Front-line support is typically caught in the middle between development groups, who have other priorities, and the customer, who needs the system fixed "right now."

I work with the developer, whose job is to write good code. Developers do not have time to fill out all those forms quality assurance wants, or to compose an operations document that the test and support groups need. I work with the testers, who really don't know what's going on back there in the system. They keep breaking it, but they can't reproduce the problems for development. And I work with the document writers, who can't understand why the entire user interface changed just two weeks before the end of the test cycle.

My role is to prevent failures and enhance productivity through automation and process optimization. I work primarily on applications running in large networks. These systems are huge and contain a variety of components that need to be tested. Typically, there are object-oriented modules, graphical user interfaces (GUIs), and browser-based interfaces. These applications typically interact with databases, communications networks, specialized servers, and embedded code driving specialized hardware, and all of these components need to be tested. The methods in this book are distilled from experiences, both failures and successes, with projects that have touched all of these areas.

This is also a work about "how to solve problems," so it is rich with commentary on human factors. Systems are designed, written, integrated, tested, deployed, and supported by human beings, for human beings. We cannot ignore the fact that human factors play a major role in virtually all system failures.

What This Book Is About

This book is a software tester's guide to managing the software test effort. This is not a formula book of test techniques, though some powerful test techniques are presented. This book is about defensible test methods. It offers methods and metrics that improve the test effort, whether or not formal test techniques are used. It is about how to use metrics in the test effort. There is no incentive to take measurements if you don't know how to use the results to help your case, or if those results might be turned against you. This book shows how to use measurement to discover, to communicate those discoveries to others, and to make improvements.

Some time back I was presenting an overview of these methods at a conference. Part of the presentation was a case study. In the case study, after these methods were applied, a test inventory was built, and a risk analysis was performed for the system, the optimal test coverage, given the time and resources allowed, was determined to be 67 percent of the entire test inventory.

During the question-and-answer session that followed my presentation, a very distinguished and tall fellow practitioner (he stands well over six feet) said, "Excuse me for mentioning this, but it strikes me that you are a very small person. I was wondering where you find the courage to tell your managing director that you only plan to test 67 percent of the system?"

My answer: "It is true that I am only 5'6", but I am big on the truth. If management wants to give me enough time and resources to test every item on the inventory, I will be happy to do so. But if they want me to make do with less than that, I am not going to soft-sell the fact that they will get less than 100 percent test coverage. If there isn't time or resources to test everything, then I want to be sure that the tests conducted are the most important tests."

I am also going to tell management how good that selection of tests was, how many bugs the test effort found, how serious they were and how much it cost to find them, and if possible, how much was saved because we found and removed them. I will measure the performance of the test effort and be able to show at any time whether we are on schedule or not, if the error densities are too high, or if the bug-fix rate is too low. If we cannot stay on schedule, I can give management the high-quality information it needs to do what it does best, specifically, manage the situation.
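The measurements described above can start as a handful of simple ratios. A minimal sketch, with hypothetical function names and illustrative numbers (not taken from the book):

```python
# Illustrative test-effort metrics; all names and figures are hypothetical.

def coverage_pct(tests_planned: int, inventory_size: int) -> float:
    """Percentage of the test inventory covered by the planned tests."""
    return 100.0 * tests_planned / inventory_size

def bug_fix_rate(bugs_fixed: int, bugs_found: int) -> float:
    """Fraction of reported bugs that development has fixed."""
    return bugs_fixed / bugs_found if bugs_found else 1.0

def cost_per_bug(total_test_cost: float, bugs_found: int) -> float:
    """Average cost of finding one bug during the test effort."""
    return total_test_cost / bugs_found

# Example: testing 134 of 200 inventory items gives the 67 percent
# coverage figure quoted in the case study above.
print(coverage_pct(134, 200))      # 67.0
print(bug_fix_rate(45, 60))        # 0.75
print(cost_per_bug(30000.0, 60))   # 500.0
```

Numbers like these are what let a tester show, at any point in the effort, whether the schedule, error densities, and fix rates are on track.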



Software Testing Fundamentals: Methods and Metrics
ISBN: 047143020X
Year: 2005
Pages: 132
