According to Paulk and colleagues (1995), the CMM-based assessment approach uses a six-step cycle. The first step is to select a team; the team members should be professionals knowledgeable in software engineering and management. In the second step, representatives of the site to be appraised complete the standard process maturity questionnaire. Third, the assessment team analyzes the questionnaire responses and identifies areas that warrant further exploration according to the CMM key process areas. Fourth, the assessment team conducts a site visit to gain an understanding of the software process followed by the site. Fifth, at the end of the site visit, the assessment team produces a list of findings that identifies the strengths and weaknesses of the organization's software process. Finally, the assessment team prepares a key process area (KPA) profile analysis and presents the results to the appropriate audience.
The SEI also developed and published the CMM-Based Appraisal for Internal Process Improvement (CBA IPI) (Dunaway and Masters, 1996). The data collected for a CBA IPI are based on the key process areas of the CMM as well as non-CMM issues. For an assessment to be considered a CBA IPI, it must meet minimum requirements concerning (1) the assessment team, (2) the assessment plan, (3) data collection, (4) data validation, (5) the rating, and (6) the reporting of assessment results. For example, the assessment team must be led by an authorized SEI Lead Assessor. The team must consist of between 4 and 10 members. At least one team member must be from the organization being assessed, and all team members must complete the SEI's Introduction to the CMM course (or its equivalent) and the SEI's CBA IPI team training course. Team members must also meet certain selection guidelines. With regard to data collection, the CBA IPI relies on four methods: the standard maturity questionnaire, individual and group interviews, document reviews, and feedback from the review of the draft findings with the assessment participants.
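The team-composition requirements just listed are concrete enough to express as a simple check. The sketch below is illustrative only: the class and function names are our own invention, not part of any SEI tooling, and the fields model only the requirements quoted above.

```python
from dataclasses import dataclass

@dataclass
class TeamMember:
    name: str
    from_assessed_org: bool        # employed by the organization being assessed
    completed_cmm_intro: bool      # SEI Introduction to the CMM (or equivalent)
    completed_team_training: bool  # SEI CBA IPI team training course

def meets_cba_ipi_team_rules(members: list[TeamMember],
                             led_by_authorized_lead_assessor: bool) -> bool:
    """Check the minimum CBA IPI team-composition requirements:
    an authorized SEI Lead Assessor leads the team, the team has
    4 to 10 members, at least one member comes from the assessed
    organization, and every member has completed both courses."""
    return (
        led_by_authorized_lead_assessor
        and 4 <= len(members) <= 10
        and any(m.from_assessed_org for m in members)
        and all(m.completed_cmm_intro and m.completed_team_training
                for m in members)
    )
```

A team of three, a team without an internal member, or a team lacking an authorized Lead Assessor would all fail this check, mirroring the minimum requirements in the text.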
The Standard CMMI Assessment Method for Process Improvement (SCAMPI) was developed to satisfy the CMMI model requirements (Software Engineering Institute, 2000). It is also based on the CBA IPI. Both the CBA IPI and the SCAMPI consist of three phases: plan and preparation, conducting the assessment onsite, and reporting results. The activities for the plan and preparation phase include:
The activities for the onsite assessment phase include:
The activities of the reporting results phase include:
Although the descriptions of the CBA IPI and SCAMPI assessment cycles are more elaborate, their resemblance to the assessment approach outlined by Paulk and colleagues (1995) remains obvious.
The SPR assessment process involves similar steps (Jones, 1994). The initial step is an assessment kickoff session (1), followed by project data collection (2), and then individual project analysis (3). A parallel track is to conduct management interviews (4). The two tracks then merge for benchmark comparison, aggregate analysis, and interpretation (5). The final phase is measurement report and improvement opportunities (6). Data collection and interviews are based on the structured SPR assessment questionnaire. The SPR assessment approach uses multiple models and does not assume the same process steps and activities for all types of software.
Table 16.1. Zahran's Generic Phases and Main Activities of Software Process Assessment

| Phase | Sub-phase | Main activities |
|---|---|---|
| Preassessment | Preplanning | Understanding of business context and justification, objectives, and constraints; securing sponsorship and commitment |
| Assessment | Planning | Selection of assessment approach; selection of improvement road map; definition of assessment boundaries; selection of assessment team; launching the assessment; training the assessment team; planning fact gathering, fact analysis, and reporting activities |
| | Fact gathering | Selecting a fact-gathering approach (e.g., questionnaire, interviews, and group discussion); defining the target interviewees; distributing and collecting questionnaire responses; conducting the interviews |
| | Fact analysis | Analysis of questionnaire responses; analysis of facts gathered in the interviews; analysis of the evidence gathered; collective analysis of the data gathered; calibration of the findings against the road map; identifying strengths, weaknesses, and areas of improvement |
| | Reporting | Documenting the findings (strengths and weaknesses); documenting the recommendations |
| Postassessment | Action plan for process improvement | Implementing the process improvement actions; managing and monitoring the process improvement plan |

From *Software Process Improvement: Practical Guidelines for Business Success*, by Sami Zahran (Table 8.3, p. 161). 1998 Addison-Wesley Longman. Reprinted by permission of Pearson Education, Inc.
While each assessment approach has its unique characteristics, a common schema applies to all. Zahran (1997) developed a generic cycle of process assessment that includes four phases: planning, fact finding, fact analysis, and reporting. In addition to the assessment cycle per se, Zahran's generic cycle includes a preassessment (preplanning) phase and a postassessment (process improvement plan) phase. The main activities of the phases are shown in Table 16.1.
The generic phases and the main activities within each phase serve as a useful overall framework for assessment projects. Zahran also successfully mapped the current process assessment approaches into this framework, including the CMM, the Trillium model, the BOOTSTRAP methodology, and the ISO/IEC 15504 draft standard for software process assessment. In the sections that follow, when we discuss our method for software project assessments, we refer to the main activities in this framework as appropriate.
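Zahran's generic cycle also lends itself to a simple data model. The sketch below records the phases and sub-phases of Table 16.1 so that, for example, an assessment project could iterate over them in execution order; the dictionary layout and function name are our own convenience, not a schema prescribed by Zahran.

```python
from collections import OrderedDict

# Phases and sub-phases of Zahran's generic assessment cycle (Table 16.1).
GENERIC_CYCLE = OrderedDict([
    ("Preassessment", ["Preplanning"]),
    ("Assessment", ["Planning", "Fact gathering",
                    "Fact analysis", "Reporting"]),
    ("Postassessment", ["Action plan for process improvement"]),
])

def sub_phases_in_order():
    """Yield (phase, sub-phase) pairs in execution order."""
    for phase, subs in GENERIC_CYCLE.items():
        for sub in subs:
            yield phase, sub
```

Iterating over `sub_phases_in_order()` visits all six sub-phases, from preplanning through the postassessment action plan, which makes it easy to attach status tracking or checklists to each step.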