A Proposed Software Project Assessment Method

We propose a software project assessment method as shown in Figure 16.1, which is based on our project assessment experience over many years. We will discuss each phase in some detail, but first there are several characteristics of this method that may differ from other assessment approaches:

Figure 16.1. A Proposed Software Project Assessment Method


  • It is project based.
  • There are two phases of facts gathering, and the complete project review phase precedes other methods of data collection. Because the focus is on the project, it is important to understand the complete project history and end-to-end processes from the project team's perspective before imposing a questionnaire on the team.
  • This method does not rely on a standard questionnaire. There may be a questionnaire in place from previous assessments, or some repository of pertinent questions maintained by the process group of the organization. There may be no questionnaire in place and an initial set of questions needs to be developed in the preparation phase. In either case, customization of the questionnaire after a complete project review is crucial so that each and every question is relevant.
  • Observations, analysis, and possible recommendations are part of an ongoing process beginning at the start of the assessment project. With each additional phase and input, the ongoing analysis and observations are being refuted, confirmed, or refined. This is an iterative process.
  • The direct input by the project team/development team with regard to strengths and weaknesses and recommendations for improvement is important, as reflected in steps 3 through 6, although the final assessment still rests on the assessment team's shoulders.

16.4.1 Preparation Phase

The preparation phase includes all planning and preparation. For an assessment team external to the organization whose project is to be assessed, a good understanding of the business context and justification, objectives, and commitment is important. Since most project assessments are done by personnel within the organization, or from a separate division of the company, this is normally not needed. In this phase, a request for basic project data should be made. General information on the type of software, size, functions, field performance, skills and experience of the development team, organizational structure, development process, and language used is important for the assessment team to start formulating some frames of reference for the assessment. This data doesn't have to be specific and precise and should be readily available from the project team. If there is no questionnaire from a previous similar project, the assessment team should start developing a battery of questions based on the basic project data. These questions can be revised and finalized when facts gathering phase 1 is completed.

For overall planning, we recommend the assessment be run as a project with all applicable project management practices. It is important to put in place a project plan that covers all key phases and activities of the assessment. For internal assessments, a very important practice in the preparation phase is to obtain a project charter from the sponsor and commitment from the management team of the project being assessed. The project charter establishes the scope of the assessment and the authority of the assessment team. It should be one page or shorter, and it is probably best drafted by the assessment leader and then signed and communicated by the sponsoring executive.

Another easily neglected activity in the preparation phase is a project closeout plan. Good project management calls for planning for the closeout on day 1 of the project. A closeout plan in this case may include the kind of reports or presentations that will be delivered by the assessment team, the audience, and the format.

16.4.2 Facts Gathering Phase 1

The first phase of facts gathering involves a detailed review of all aspects of the project from the project team's perspective. The format of this phase may be a series of descriptions or presentations by the project team. In the assessment team's request for information, at least the following areas should be covered:

  • Project description and basic project data (size, functions, schedule, key dates and milestones)
  • Project and development team information (team size, skills, and experience)
  • Project progress, development timeline, and project deliverables
  • End-to-end development process from requirements to testing to product ship
  • Sizing and schedule development, staffing
  • Development environment and library system
  • Tools and specific methodologies
  • Project outcome or current project status
  • Use of metrics, quantitative data, and indicators
  • Project management practices
  • Any aspects of the project that the project team deems important

The assessment team's role in this phase is to gather as much information as possible and gain a good understanding of the project. Therefore, the members should be in a listening mode and should not ask questions that may mislead the project team. Establishing the whats and hows of the project is the top priority, and sometimes it is necessary to get into the whys via probing techniques.

For example, the project may have implemented a joint test phase between the development group and the independent test team to improve the test effectiveness of the project and to make sure that the project meets the entry criteria of the system verification test (SVT) phase. This is a "what" of the actual project practices. The joint test phase was implemented at the end of the functional verification test (FVT) and before SVT start, during the SVT acceptance test activities. Independent test team members and developers were paired for major component areas. The testing targeted possible gaps between the FVT and the SVT plans. The testing environment was a network of test systems maintained by the independent group for SVT. To increase the chances for latent defects to surface, the test systems were stressed by running a set of performance workloads in the background. This test phase was implemented because meeting SVT entrance criteria on time had been a problem in the past, and because of the formal hand-off between FVT and SVT, it was felt that there was room for improvement with regard to the communication between developers and independent testers. The project team thinks that this joint test practice contributed significantly to the success of the project because a number of additional defects were found before SVT (as supported by metrics), SVT entrance criteria were met on time, the testers and developers learned from each other and improved their communications as a result, and the test added only minimal time to the testing schedule so it did not negatively affect the project completion date. These are the "hows" and "whys" of the actual implementation. During the project review, the project team may describe this practice only briefly. It is up to the assessment team to ask the right questions to get the details with regard to the hows and whys.

At the end of a project review, critical success factors or major reasons for failure should be discussed. These factors may also include sociological factors of software development, which are important (Curtis et al., 2001; DeMarco and Lister, 1999; Jones, 1994, 2000). For the entire review process, the assessment team's detailed note-taking is important. If the assessment team consists of more than one person and the project review lasts more than one day, discussions and exchanges of thoughts among the assessment team members are always a good practice.

16.4.3 Questionnaire Customization and Finalization

Now that the assessment team has gained a good understanding of the project, the next step is to customize and finalize the questionnaire for formal data collection. The assumption here is that a questionnaire is in place. It may be from a previous assessment project, developed over time from the assessment team's experience, from a repository of questions maintained by the software engineering process group (SEPG) of the organization, or from a prior customization of a standard questionnaire of a software process assessment method. If this is not the case, then initial questionnaire construction should be a major activity in the preparation phase, as previously mentioned.

Note that in peer reviews (versus an assessment that is chartered by an executive sponsor), a formal questionnaire is not always used.

There are several important considerations in the construction and finalization of a questionnaire. First, if the questionnaire is a customized version of a standard questionnaire from one of the formal process assessment methods (e.g., CMM, SPR, or ISO software process assessment guidelines), it must be able to elicit more specific information. The standard questionnaires related to process maturity assessment are usually at a higher level than is desirable at the project level. For example, the following are the first three questions of the Peer Reviews key process area (KPA) in the CMM maturity questionnaire (Zubrow et al., 1994).

  1. Are peer reviews planned? (Yes, No, Does Not Apply, Don't Know)
  2. Are actions associated with defects that are identified during peer reviews tracked until they are resolved?
  3. Does the project follow a written organizational policy for performing peer reviews?

The following two questions related to peer design reviews were used in some project assessments we conducted:

  1. What is the most common form of design reviews for this project?

    • Formal review meeting with moderators, reviewers, and defect tracking; issue resolution and rework completion are part of the completion criteria
    • Formal review but issue resolution is up to the owner
    • Informal review by experts of related areas
    • Codeveloper (codesigner) informal review
    • Other ..... please specify
  2. To what extent were design reviews of the project conducted? (Please mark the appropriate cell in each row of the table.)

                             | All Design Work | All Major Pieces | Selected Items Based on          | Design Reviews Were | Not Done
                             | Done Rigorously | of Design Items  | Criteria (e.g., Error Recovery)  | Occasionally Done   |
     Original design         |                 |                  |                                  |                     |
     Design changes/rework   |                 |                  |                                  |                     |

The differences between the two sets of questions are obvious: One focuses on process maturity and organizational policy and the other focuses on specific project practices and degree of execution.

Second, a major objective of a project assessment is to identify gaps and therefore opportunities for improvement. To elicit input from the project team, the vignette-question approach can be used in questionnaire design with regard to the importance of activities in the development process. Specifically, the vignette questions pair a question on the project's state of practice for a specific activity with a question on the project team's assessment of the importance of that activity. The following three questions provide an example of this approach; a brief sketch of how such paired responses might be recorded follows the example.

  1. Are there entry/exit criteria used for the independent system verification test phase?

    If yes, (a) please provide a brief description.

    (b) How are the criteria used and enforced?

  2. Per your experience and assessment, how important is this practice (entry/exit criteria for SVT) to the success of the project?

    • Very important
    • Important
    • Somewhat important
    • Not sure
  3. If your assessment in question 2 is "very important" or "important" and your project's actual practice did not match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture)? Please explain.
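
If questionnaire responses are kept in machine-readable form, the vignette pairing can be captured as a simple record so that the practice answer and the importance rating always travel together. The following sketch is illustrative only; the field names, rating scale, and helper method are our own assumptions, not part of the assessment method.

    from dataclasses import dataclass
    from typing import Optional

    # Rating scale assumed from the example importance question above.
    IMPORTANCE_LEVELS = ["Very important", "Important", "Somewhat important", "Not sure"]

    @dataclass
    class VignetteResponse:
        """One vignette pair: state of practice plus importance rating."""
        activity: str                            # e.g., "Entry/exit criteria for SVT"
        practiced: bool                          # answer to the state-of-practice question
        practice_description: str = ""           # brief description, if practiced
        importance: str = "Not sure"             # one of IMPORTANCE_LEVELS
        disparity_reason: Optional[str] = None   # answer to question 3, if applicable

        def needs_followup(self) -> bool:
            """Flag items rated Important or Very important but not practiced."""
            return not self.practiced and self.importance in ("Very important", "Important")

    # Hypothetical example: a practice the team rates highly but did not follow.
    resp = VignetteResponse(
        activity="Entry/exit criteria for SVT",
        practiced=False,
        importance="Very important",
        disparity_reason="Schedule pressure; criteria drafted but never enforced",
    )
    print(resp.needs_followup())   # True -> candidate improvement opportunity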

Third, it is wise to ask for the team's direct input on strengths and weaknesses of the project's practices in each major area of the development process (e.g., design, code, test, and project management). The following are two common questions we used in every questionnaire at the end of each section of questions. Caution: These questions should not be asked before a description of the overall project practices is completed (by the project team) and understood (by the assessment team); otherwise the project team will be led prematurely into a self-evaluative mode. In this assessment method, there are two phases of facts gathering, and asking these questions in the second phase is part of the design of the method.

  1. Is there any practice(s) by your project with regard to testing that you consider to be a strength and that should be considered for implementation by other projects? If so, please describe and explain.
  2. If you were to do this project all over again, what would you do differently with regard to testing and why?

The Appendix in this book shows a questionnaire that we have used as a base for customization for many software project assessments.

16.4.4 Facts Gathering Phase 2

In this phase, the questionnaire is administered to the project team, including development managers, the project manager, and technical leads. The respondents complete the questionnaire separately. Not all sections of the questionnaire apply to all respondents. The responses are then analyzed by the assessment team and validated via a session with the project team. Conflicts among the respondents' answers, and between information from the project review and the questionnaire responses, should be discussed and resolved. The assessment team can also take up any topic for further probing. In the second half of the session, a brainstorming session on strengths and weaknesses, what was done right, what was done wrong, and what the project team would have done differently is highly recommended.
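
As an illustration of the cross-checking done in this phase, the sketch below compares answers to the same question across respondents and flags disagreements for the validation session. The respondent roles, question keys, and answer values are hypothetical; the point is only that conflicts can be identified mechanically and then resolved through discussion.

    from collections import defaultdict

    # Hypothetical questionnaire responses: respondent -> {question key: answer}.
    responses = {
        "project manager":  {"design_reviews": "Occasionally", "entry_exit_criteria": "Yes"},
        "development lead": {"design_reviews": "Major pieces", "entry_exit_criteria": "Yes"},
        "test lead":        {"design_reviews": "Occasionally", "entry_exit_criteria": "No"},
    }

    def find_conflicts(responses):
        """Return the questions for which respondents gave differing answers."""
        answers_by_question = defaultdict(set)
        for respondent, answers in responses.items():
            for question, answer in answers.items():
                answers_by_question[question].add(answer)
        return {q: a for q, a in answers_by_question.items() if len(a) > 1}

    # Items to raise in the validation session with the project team.
    for question, answers in find_conflicts(responses).items():
        print(f"Discuss and resolve: {question} -> {sorted(answers)}")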

16.4.5 Possible Improvement Opportunities and Recommendations

As Figure 16.1 depicts, this phase runs parallel with other phases, starting from the preparation phase and ending with the final assessment report phase. Assessing the project's strengths and weaknesses and providing recommendations for improvement constitute the purpose of a project assessment. Because the quality of the observations and recommendations is critical, this activity should not be done mechanically. To accomplish this important task, the assessment team can draw on three sources of information:

  1. Findings from the literature. For example, Jones (1994, 1995, 2000) provides an excellent summary of the risks and pitfalls of software projects, and benchmarks and best practices by type of software, based on SPR's experiences in software assessment. Another example is the assessment results from the CMM-based assessments published by the SEI. Familiarity with the frameworks and findings in the assessment literature enhances the breadth of the assessment team's framework for developing recommendations. Of course, you cannot use findings from the literature for recommendations unless they are pertinent to the project being assessed. Those findings are valuable references, however, for maintaining the big-picture view while combing through the tremendous amount of project-specific information. Based on Jones's findings (2002), the most significant factors associated with success and failure are the following:

    Successful projects

    • Effective project planning
    • Effective project cost estimating
    • Effective project measurements
    • Effective project milestone tracking
    • Effective project quality control
    • Effective project change management
    • Effective development processes
    • Effective communications
    • Capable project managers
    • Capable technical personnel
    • Significant use of specialists
    • Substantial volume of reusable materials

    Failing projects

    • Inadequate project planning
    • Inadequate cost estimating
    • Inadequate measurements
    • Inadequate milestone tracking
    • Inadequate quality control
    • Ineffective change control
    • Ineffective development processes
    • Ineffective communications
    • Ineffective project managers
    • Inexperienced technical personnel
    • Generalists rather than specialists
    • Little or no reuse of technical material
  2. Experience: The assessment team's experience and findings from previous project assessments are essential. What works and what doesn't, for what types of projects, under what kind of environment, organization, and culture? This experience-based knowledge is extremely valuable. This factor is especially important for internal project assessments. When sufficient findings from various projects in one organization are accumulated, patterns of success and failure may emerge. This is related to the concept of the experience factory discussed by Basili (1995).
  3. Direct input from the project team. As discussed earlier in the chapter, we recommend placing direct questions in the questionnaire on strengths and weaknesses and on what the team would do differently. Direct input from the team provides an insider's view and at the same time implies feasibility of the suggested improvement opportunities. Many consultants and assessors know well that in-depth observations and good recommendations often come from the project team itself. Of course, the assessment team must evaluate the input and decide whether, and what part of, the project team's input will become their recommendations.

Jones's findings highlight the importance of good project management and project management methods such as estimating, measurements, and tracking and control. Sizing and schedule development without the support of metrics and experience from previous projects can lead to overcommitment and therefore project failure. This can happen even in well-established development organizations.

When developing improvement recommendations, feasibility of implementation in the organization's environment should be considered. In this regard, the assessment team should think like a project manager, a software development manager, and a team leader. At the same time, recommendations for strategic improvements that may pose challenges to the team should not be overlooked. In this regard, the assessment team should think like a strategist for the organization. In other words, there will be two sets of recommendations, and the assessment team will wear two hats when developing them. In either case, recommendations should be based on facts and analysis and should never be coverage-checklist type items.

As an example, the following segment of recommendations is extracted from a project assessment report that we were involved with. This segment addresses the intersite communication of a cross-site development project.

It was not apparent whether there were trust issues between the two development teams. However, given that the two teams had never worked together on the same project and the different environments at the two sites, there is bound to be at least some level of unfamiliarity, if not a lack of trust. The following techniques could be employed to address these issues.

  • Nothing can replace face-to-face interaction. If the budget allows it, make time for the Leadership Team (managers, technical leads, project leads) to get together for face-to-face interaction. A quarterly "Leadership Summit" can be useful to review the results of the last 90 days and establish goals for the next 90 days. The social interaction during and after these meetings is as important as the technical content of meetings.
  • A close alternative to travel and face-to-face meetings is the use of video conferencing. Find a way to make video conference equipment available to your teams and encourage its use. While it is quite easy to be distracted with mail or other duties while on a teleconference call, it is much more difficult to get away with it on a video conference. Seeing one another as points are raised in a meeting allows for a measure of nonverbal communication to take place. In addition, the cameras can be focused on the white boards to hold chalk talks or discuss design issues.
  • A simple thing that can be done to enhance cross-site communications is to place a picture board at each site with pictures of all team members.
  • Stress the importance of a single, cross-site team in all area communications and make sure the Leadership Team has completely bought into and is promoting this concept. Ensure processes, decisions, and communications are based on technical merit without unwarranted division by site. One simple example is that the use of site-specific distribution lists may promote communications divided between the sites. We recommend the leadership team work to abolish use of these lists and establish distribution lists based on the needs and tasks of the project.

16.4.6 Team Discussions of Assessment Results and Recommendations

In this phase, the assessment team discusses its findings and draft recommendations with the project team and obtains its feedback before the final report is completed. The two teams may not be in total agreement, and it is important that the assessment team make such a declaration before the session takes place. Nonetheless, this phase is important because it serves as a validation mechanism and increases the buy-in of the project team.

16.4.7 Assessment Report

The format of the report may vary (a summary presentation, a final written report, or both) but a formal meeting to present the report is highly recommended. With regard to content, at least the following topics should be covered:

  • Project information and basic project data
  • The assessment approach
  • Brief descriptions and observations of the project's practices (development process, project management, etc.)
  • Strengths and weaknesses, and if appropriate, gap analysis
  • Critical success factors or major project pitfalls
  • What the project team would do differently (improvements from the project team's perspective)
  • Recommendations

Tables 16.2 through 16.4 show report topics for three real assessed projects so you can relate the outcomes to the points discussed earlier (for example, those on questionnaire construction). All three assessed projects were based on similar versions of the questionnaire in the Appendix. Project X is the software that supports the service processor of a computer system, Project Y is the microcode that supports a new hardware processor of a server, and Project Z is the software system that supports a family of disk storage subsystem products. The subsystem integrates hundreds of disk drives through a storage controller that provides redundant arrays of inexpensive disks (RAID), disk caching, device emulation, and host attachment functions.

Table 16.2 shows the basic project data for the three projects. Table 16.3 summarizes the state of practice of key development and project management activities throughout the development cycle, and the project team's self-assessment of the importance of the specific practices. For most cells, both the state of practice and the importance assessment are shown. A gap exists where the importance assessment is Important or Very Important but the state of practice is No, Seldom, Occasionally, or Not Done.

The most gaps were identified for Project X, in the areas of requirements reviews, specifications, design documents in place to guide implementation, design reviews, effective sizing and bottom-up schedule development, major checkpoint reviews, staging and code drop plans, and in-process metrics.

For Project Y, gaps were in the areas of design documents, design reviews, effective sizing and bottom-up schedule development, and in-process metrics. For Project Z, the gaps were in project management areas such as sizing and bottom-up schedule development as they related to the planning process and project assumptions. Development/unit test was also identified as a key gap as related to the Cleanroom software process used for the project. The Cleanroom software process focuses on specifications, design and design verification, and mathematical proof of program correctness. For testing, it focuses on statistical testing as it relates to customers' operations profiles (Mills et al., 1987). However, the process does not focus on program debug and does not include a development unit test phase (i.e., it goes from coding directly to independent test). Furthermore, questions on scalability (e.g., large and complex projects with many interdependencies) and the feasibility of implementing customer operations profiles have been raised by critics of the process. For Project Z, the use of this development process was a top-down decision, and faulty productivity and quality assumptions related to this process were used in schedule development.
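
The gap rule just described is mechanical enough to automate once the vignette responses are tabulated. The following sketch applies it to a few Project X rows in the style of Table 16.3; the data values are taken from the table (slightly simplified), but the function and data structure are our own illustration, not part of the published method.

    # Gap rule from the text: importance is Important or Very important,
    # but the state of practice is No, Seldom, Occasionally, or Not done.
    HIGH_IMPORTANCE = {"Important", "Very important"}
    WEAK_PRACTICE = {"No", "Seldom", "Occasionally", "Not done"}

    # A few Project X rows in the style of Table 16.3: (state of practice, importance).
    project_x = {
        "Requirements reviews": ("Seldom", "Very important"),
        "Design reviews": ("Not done", "Very important"),
        "Entry/exit criteria for independent test": ("Yes", "Very important"),
        "In-process metrics": ("No", "Very important"),
    }

    def find_gaps(practices):
        """Return activities where stated importance and actual practice diverge."""
        return [
            activity
            for activity, (practice, importance) in practices.items()
            if importance in HIGH_IMPORTANCE and practice in WEAK_PRACTICE
        ]

    print(find_gaps(project_x))
    # ['Requirements reviews', 'Design reviews', 'In-process metrics']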

Table 16.4 shows the project team's improvement plan as a result of the iterative emphasis of the assessment method. Note that the project team's own improvement ideas or plan are separate from the final recommendations by the assessment team, although the latter can include or reference the former.

Table 16.2. Basic Project Data for Projects X, Y, and Z

                                        | Project X                | Project Y                                | Project Z
Size (KLOC)                             |                          |                                          |
  • Total                               | 228                      | 4000                                     | 1625
  • New and changed                     | 78                       | 100                                      | 690
Team size                               | 10.5                     | 35                                       | 225
Development cycle time (mo)             |                          |                                          |
  • Design to GA                        | 30                       | 18                                       | 38
  • Design to development test complete | 23                       | 17                                       | 29
Team experience                         | Inexperienced; 70% <2 yr | Very experienced; 70% >5 yr, some >15 yr | Very experienced; 80% >5 yr
Cross-site development                  | Y                        | Y                                        | Y
Cross-product brand development         | Y                        | N                                        | Y
Development environment/library         | CMVC                     | PDL, Team Connect                        | AIX CMVC DEV2000
Project complexity (self-rated, 10 pts) | 7                        | 8                                        | 10

Table 16.3. State of Practice, Importance Assessment, and Gap Analysis for Projects X, Y, and Z

(Each cell shows the state of practice; the importance assessment by the project team, where applicable.)

Project Activity                                                  | Project X                           | Project Y                    | Project Z
Requirements reviews                                              | Seldom; Very important              | Always; Very important       | Always; Very important
Develop specifications                                            | Seldom; Very important              | Usually; Very important      | Always; Very important
Design documents in place                                         | No; Very important                  | No; Very important           | Yes; Very important
Design reviews                                                    | Not done; Very important            | Occasionally; Very important | Major pieces; Very important
Coding standard/guidelines                                        | No                                  | Yes                          | Yes
Unit test                                                         | Yes (ad hoc)                        | Yes                          | No
Simulation test/environment                                       | No; Very important                  | Yes; Very important          | No; Important
Process to address code integration quality and driver stability | Yes; Very important                 | Yes; Very important          | Yes; Very important
Driver build interval                                             | Weekly to biweekly                  | Biweekly with fix support    | 1-3 days
Entry/exit criteria for independent test                          | Yes; Very important                 | Yes; Very important          | Yes; Important
Change control process for fix integration                        | Yes; Very important                 | Yes; Very important          | Yes; Important
Microcode project manager in place                                | Yes (midway through project)        | No                           | No
Role of effective project management                              | Very important                      | Important                    | Important
Effective sizing and bottom-up schedule development               | No; Very important                  | No; Important                | No; Very important
Staging and code drop plans                                       | No; Very important                  | Yes; Very important          | Yes; Important
Major checkpoint reviews                                          | No; Very important                  | No; Somewhat important       | Yes; Very important
In-process metrics                                                | No (started midway); Very important | No; Very important           | Yes; Somewhat important

Table 16.4. Project Teams' Improvement Plans Resulting from Assessment of Projects X, Y, and Z

Project X

  Requirements and specifications: Freeze external requirements by a specific date. Create an overall specifications document before heading into the design phase for the components. Force requirements and specifications ownership early in the development cycle.

  Design, code, and reviews: Eliminate most or all shortcuts in design and code to get to and through bring-up. The code that goes into bring-up is the basis for shipping to customers, and many times it is not the base the team wanted to be working from. Establish project-specific development milestones and work to those instead of only the higher-level system milestones.

  Code integration and driver build: Increase focus on unit test for code integration quality; document unit test plans.

  Test: Establish entry/exit criteria for test and then adhere to them.

  Project management (planning, schedule, dependency management, metrics): Staff a project manager from the beginning. Map dependency management to a specific site, and minimize cross-site dependencies.

  Tools and methodologies: Use a more industry-standard toolset. Deploy the mobile toolset recently made available on ThinkPad. Establish a skills workgroup to address the skills and education of the team.

Project Y

  Requirements and specifications: Focus more on design flexibility and considerations that can deal with changing requirements.

  Design, code, and reviews: Conduct a more detailed design review for changes to the system structure.

  Test: Improve communications between test groups; make sure there is sufficient understanding among the test groups in different locations.

  Project management (planning, schedule, dependency management, metrics): Implement microcode project manager(s) to coordinate deliverables in combination with hardware deliverables.

  Tools and methodologies: Force the parallel development of test enhancements for regression test whenever new functions are developed.

Project Z

  Requirements and specifications: Link requirements and specifications with the schedule, and review schedule assumptions.

  Design, code, and reviews: Document what types of white box testing need to be done to verify design points.

  Project management (planning, schedule, dependency management, metrics): Maintain a strong focus on project management, scheduling, staffing, and commitments. Periodically review schedule assumptions and assess the impact as assumptions become invalid.

16.4.8 Summary

Table 16.6 summarizes the essential points discussed under each phase of the proposed software project assessment method.

Table 16.6. Essential Activities and Considerations by Phase of a Proposed Software Project Assessment Method

(1) Preparation

  - As appropriate, gain an understanding of the business context and justification, objectives, and constraints.
  - Establish the assessment project plan.
  - Establish the project charter and secure commitment.
  - Request basic project data.
  - Develop an initial set of questions based on the information available thus far, or use an existing questionnaire.
  - Establish the assessment project closeout plan.

(2) Facts gathering phase 1

  - Conduct a detailed project review from the project team's perspective.
  - Focus on whats and hows; at times on whys via probing.
  - Formulate ideas at the end of the project review.

(3) Possible improvement opportunities and recommendations (runs parallel with the other phases)

  - Review findings and improvement frameworks in the literature.

(4) Questionnaire customization

  - Customize to the project being assessed.
  - Use the vignette-question approach for gap analysis.
  - Define strengths and weaknesses.
  - Gather the team's improvement ideas.
  - Design the questionnaire to include questions on improvements from the project team.

(5) Facts gathering phase 2

  - Administer the questionnaire to project personnel.
  - Validate responses.
  - Triangulate across respondents and with information gathered from phase 1.
  - Brainstorm project strengths and weaknesses; ask what the project team would have done differently.
  - Formulate the whole list of recommendations: actions for immediate improvements and for strategic directions.

(6) Team discussions and feedback

  - Review assessment results and draft recommendations with the project team.
  - Finalize recommendations.

(7) Reporting and closeout

  - Complete the final report, including recommendations.
  - Meet with the assessment's executive sponsor and the management of the assessed project.
