Reliability modeling is an attempt to summarize a complex reality in precise statistical terms. Because the physical process being modeled (the software failure phenomenon) can hardly be expected to be so precise, unambiguous statements of the assumptions are necessary in the development of a model. In applications, the models perform better when the underlying assumptions are met and worse when they are not. In other words, the more reasonable the assumptions, the better a model will be. From the preceding summary of several reliability growth models, we can see that earlier models tend to have more restrictive assumptions, whereas more recent models are able to deal with more realistic ones. For instance, the J-M model's five assumptions are:

1. There are N unknown software faults at the start of testing.
2. Failures occur randomly; the times between failures are independent of each other.
3. All faults contribute equally to causing a failure during testing.
4. Fix time is negligible.
5. The fix for each failure is perfect; no new faults are introduced during correction.
Together these assumptions are difficult to meet in practical development environments. Although assumption 1 does not seem to pose problems, all the others place limitations on the model. The Littlewood models, with the concept of error size, overcame the restriction imposed by assumption 3. The Goel-Okumoto imperfect debugging model is an attempt to improve on assumptions 4 and 5.
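To make the equal-contribution restriction of assumption 3 concrete, the following sketch computes the failure rate and expected interfailure time under the standard J-M hazard form, in which the failure rate before the ith failure is phi × (N − i + 1). The values of N and phi are purely illustrative, not estimates from real data.

```python
# Sketch of the J-M hazard: every remaining fault contributes the same
# amount phi to the failure rate, so the rate drops by exactly phi after
# each fix. N and phi below are assumed, illustrative values.

N = 50      # assumed number of faults present at the start of testing
phi = 0.02  # assumed contribution of each remaining fault to the failure rate

for i in range(1, 6):
    hazard = phi * (N - i + 1)   # failure rate while waiting for the i-th failure
    expected_gap = 1.0 / hazard  # expected time between the (i-1)th and i-th failures
    print(f"failure {i}: hazard = {hazard:.2f}, expected interfailure time = {expected_gap:.2f}")
```

Because the rate falls in identical steps no matter which fault is removed, the model cannot reflect the fact that some faults are "larger" (more likely to be exercised) than others; this is the restriction the Littlewood error-size concept relaxes.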
Assumption 2 is used in all time between failures models. It requires that successive failure times be independent of each other. This assumption could be met if successive test cases were chosen randomly, but the test process is not likely to be random; testing, especially functional testing, is not based on independent test cases. If a critical fault is discovered in a code segment, the tester may intensify the testing of associated code paths and look for other faults, and such activity may mean a shorter time to the next failure. Strict adherence to this assumption is therefore not likely. Care should be taken, however, to ensure some degree of independence in the data points when using the time between failures models.
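As a rough screen for such dependence, one can look at the lag-1 serial correlation of the interfailure times before fitting a model. The sketch below uses made-up values; note that a strongly positive value can also reflect the reliability-growth trend itself rather than dependence introduced by the test process, so this is a hint, not a formal test.

```python
# Sketch: lag-1 autocorrelation of interfailure times as a rough check on
# assumption 2 (independence of successive failure times). The data below
# are made-up illustrative values, not real test results.

import statistics

interfailure_times = [12.0, 15.5, 9.0, 21.0, 18.5, 30.0, 26.0, 41.5, 38.0, 55.0]

mean = statistics.mean(interfailure_times)
num = sum((a - mean) * (b - mean)
          for a, b in zip(interfailure_times, interfailure_times[1:]))
den = sum((x - mean) ** 2 for x in interfailure_times)
lag1 = num / den

print(f"lag-1 autocorrelation: {lag1:.2f}")
# Values far from zero suggest the data points are not independent; part of
# that can simply be the growth trend, so detrending before the check (or
# using a formal test) is advisable in practice.
```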
The preceding assumptions pertain to the time between failures models, whose assumptions in general tend to be more restrictive. Furthermore, time between failures data are more costly to gather and require a higher degree of precision.
The basic assumptions of the fault count models are as follows (Goel, 1985):

1. Testing intervals are independent of each other.
2. The testing during each interval is reasonably homogeneous.
3. The numbers of defects detected during nonoverlapping intervals are independent of each other.
As discussed earlier, the assumption of a homogeneous testing effort is the key to the fault count models. If this assumption is not met, some form of normalization or statistical adjustment should be applied. The other two assumptions are quite reasonable, especially if the model is calendar-time based with wide enough intervals (e.g., weeks).
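One simple form of such normalization, sketched below, is to divide the defect count in each interval by the testing effort expended in that interval before fitting the model. All the figures are made up for illustration, and person-hours is only one possible effort measure.

```python
# Sketch: normalizing weekly defect counts by testing effort when the
# homogeneous-testing assumption of the fault count models is not met.
# All figures are assumed, illustrative values.

weekly_defects = [34, 41, 28, 15, 9, 6]          # defects found per calendar week
weekly_test_hours = [120, 160, 110, 60, 55, 50]  # testing effort (person-hours) per week

# Defects per unit of testing effort; the fault count model can then be fit
# to these normalized rates (or to counts rescaled to a common effort level)
# instead of the raw weekly counts.
defect_rates = [d / h for d, h in zip(weekly_defects, weekly_test_hours)]

for week, rate in enumerate(defect_rates, start=1):
    print(f"week {week}: {rate:.3f} defects per test hour")
```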
For both classes of models, the most important underlying assumption is that of effective testing. If the test process is not well planned and the test cases are poorly designed, both the input data and the model projections will be overly optimistic. If the models are used for comparisons across products, additional indicators of test effectiveness or coverage should be included in the interpretation of the results.