Any quantitative control of a project depends critically on the measurements made during the project. To perform measurements during project execution, you must plan carefully what to measure, when to measure it, and how to measure it. Hence, measurement planning is a key element in project planning. This section discusses the way standard measurements are made in projects at Infosys. Project managers may add to these measurements if their projects require it.
To help a project manager monitor the effort, each employee records in a weekly activity report (WAR) system the effort spent on various tasks. This online system, developed in-house, stores all submitted WARs in a centralized database. Each person submits his WAR each week. On submission, the report goes to the individual's supervisor for approval. Once it is approved, the WAR submission is final and cannot be changed. Everyone submits a WAR, including the CEO, and if a WAR is not submitted within a given time period, leave is deducted.
A WAR entry consists of a sequence of records, one for each week. Each record is a list of items, with each item containing the following fields (a data-structure sketch follows the list):
Program code
Module code
Activity code
Activity description
Hours for Monday through Sunday
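The record structure just listed can be captured as a small data type. The following is a minimal sketch, assuming a Python representation; the class and field names are illustrative and are not taken from the actual WAR system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WarItem:
    """One line item in a weekly activity report (WAR)."""
    program_code: str    # program against which the effort is booked
    module_code: str     # module against which the effort is booked
    activity_code: str   # e.g. "PCD" for coding and self unit testing (Table 7.1)
    description: str     # free-text description of the activity
    hours: List[float]   # hours for Monday through Sunday (seven entries)

@dataclass
class WarRecord:
    """One week's WAR for one person: a list of items."""
    employee: str
    week_ending: str     # e.g. an ISO date string
    items: List[WarItem] = field(default_factory=list)

    def total_hours(self) -> float:
        return sum(sum(item.hours) for item in self.items)
```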
The activity code characterizes the type of activity. The program code and module code permit separation of effort data with respect to modules or programs, a consideration that is important for component-level monitoring. To support analysis and project comparisons, it is important to standardize the activities against which effort is reported. Having a standardized set of activity codes helps to achieve this goal. Table 7.1 shows the activity codes used in Infosys projects. (These are different from the ones given in my earlier book because the codes were changed with the introduction of a new Web-based WAR system.)
In the activity codes, a separate code for rework effort is provided for many phases. This classification helps in computing the cost of quality. With this level of refinement, you can carry out a phase-wise analysis or a subphase-wise analysis of the effort data. The program code and module code, which are specified by the project, can be used to record effort data for different units in the project, thereby facilitating unit-wise analysis.
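Because rework has its own activity codes (the codes ending in RW in Table 7.1), the rework component of the cost of quality can be computed directly from WAR data. Below is a rough sketch of such an aggregation; the set of rework codes and the grouping logic are assumptions for illustration, not the actual Infosys analysis tooling.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

# Rework activity codes taken from Table 7.1 (assumed here to be the full set).
REWORK_CODES = {"PACRW", "PCDRW", "PDDRW", "PERW", "PHDRW",
                "PITRW", "PRSRW", "PSTRW", "PUTRW"}

def effort_by_code(war_items: Iterable[Tuple[str, float]]) -> Dict[str, float]:
    """Aggregate (activity_code, hours) pairs into total hours per activity code."""
    totals: Dict[str, float] = defaultdict(float)
    for code, hours in war_items:
        totals[code] += hours
    return dict(totals)

def rework_percentage(war_items: Iterable[Tuple[str, float]]) -> float:
    """Rework effort as a percentage of total reported effort."""
    totals = effort_by_code(war_items)
    total = sum(totals.values())
    rework = sum(h for code, h in totals.items() if code in REWORK_CODES)
    return 100.0 * rework / total if total else 0.0
```

For example, `rework_percentage([("PCD", 30), ("PCDRW", 5)])` reports roughly 14 percent rework.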
To facilitate project-level analysis of planned versus actual effort spent, the WAR system is connected to the Microsoft Project (MSP) depiction of the project. Project staff can begin submitting WARs for a project only after the MSP for the project has been submitted (once the MSP is submitted, the system knows which people are supposed to be working on the project). Planned activities are defined as those listed in the MSP and assigned to an authorized person in the project. Unplanned activities are all other project activities.
When entering the WAR for a week, the user works with a screen that is divided into two sections: planned activities and unplanned activities. All activities that are assigned in the MSP to a particular person for this week show up in her planned activities section for that project. The user cannot add or modify activities that show up in this section. She can enter only the hours spent each day for the different activities provided. To log the time spent on activities not listed in the planned section, the user can enter a code, its description, and the hours spent each day on these activities in the unplanned section for the project.
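Conceptually, the split into the two sections is a filter over the MSP assignments for that person and week. The sketch below uses hypothetical structures (a mapping of assignments keyed by person and week); it illustrates the idea rather than the actual WAR/MSP integration.

```python
from typing import Dict, List, Set, Tuple

# (person, week) -> set of activity codes assigned to that person in the MSP
Assignments = Dict[Tuple[str, str], Set[str]]

WarEntry = Tuple[str, float]  # (activity_code, hours)

def split_war_entries(person: str, week: str, entries: List[WarEntry],
                      assignments: Assignments) -> Tuple[List[WarEntry], List[WarEntry]]:
    """Split a week's entries into the planned and unplanned sections."""
    planned_codes = assignments.get((person, week), set())
    planned = [(code, h) for code, h in entries if code in planned_codes]
    unplanned = [(code, h) for code, h in entries if code not in planned_codes]
    return planned, unplanned
```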
In an Infosys project, defect detection and removal proceed as follows. A defect is found and recorded by a submitter. The defect is then in the state "submitted." Next, the project manager assigns the job of fixing the defect to someone, usually the author of the document or code in which the defect was found. This person does the debugging and fixes the reported defect, and the defect then enters the "fixed" state. A defect that is fixed is still not closed. Another person, typically the submitter, verifies that the defect has been fixed. After this verification, the defect can be marked "closed." In other words, the general life cycle of a defect has three states: submitted, fixed, and closed (see Figure 7.2). A defect that is not closed is also called open.
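The life cycle in Figure 7.2 amounts to a small state machine. Here is a minimal sketch of the states and the transitions implied by the description above; it is inferred from the text, not taken from the actual defect control system.

```python
from enum import Enum

class DefectState(Enum):
    SUBMITTED = "submitted"   # defect found and recorded by the submitter
    FIXED = "fixed"           # owner has debugged and fixed the defect
    CLOSED = "closed"         # fix verified, typically by the submitter

# Transitions implied by the text: submitted -> fixed -> closed.
ALLOWED_TRANSITIONS = {
    DefectState.SUBMITTED: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.CLOSED},
    DefectState.CLOSED: set(),
}

def is_open(state: DefectState) -> bool:
    """A defect that is not closed is considered open."""
    return state is not DefectState.CLOSED
```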
Table 7.1. Activity Codes for Effort

Activity Code | Description |
--- | --- |
PAC | Acceptance |
PACRW | Rework after acceptance testing |
PCAL | Project catch-all |
PCD | Coding and self unit testing |
PCDRV | Code walkthrough/review |
PCDRW | Rework after code walkthrough |
PCM | Configuration management |
PCOMM | Communication |
PCSPT | Customer support activities |
PDBA | Database administration activities |
PDD | Detailed design |
PDDRV | Detailed design review |
PDDRW | Rework after detailed design review |
PDOC | Documentation |
PERV | Review of models and drawings |
PERW | Rework of models and drawings |
PEXEC | Execution of modeling and drafting |
PHD | High-level design |
PHDRV | High-level design reviews |
PHDRW | Rework after high-level design review |
PIA | Impact analysis |
PINS | Installation/customer training |
PIT | Integration testing |
PITRW | Rework after integration testing |
PPI | Project initiation |
PPMCL | Project closure activities |
PPMPT | Project planning and tracking |
PRES | Research on technical problems |
PRS | Requirement specification activities |
PRSRV | Review of requirements specifications |
PRSRW | Rework after requirements review |
PSP | Strategic planning activities |
PST | System testing |
PSTRW | Rework after system testing |
PTRE | Project-specific trainee activities |
PUT | Independent unit testing |
PUTRW | Rework after independent unit testing |
PWTR | Waiting for resources |
PWY | Effort during warranty |
A defect control system (DCS) is used in projects for logging and tracking defects. The system permits various types of analysis. Table 7.2 shows the information that is recorded for each defect logged in to the system.
Determining the defect injection stage requires analysis of the defect. Whereas the defect detection stages consist of the review and testing activities, the defect injection stages are those that produce work products, such as design and coding. Based on the nature of the defect, a judgment can be made about when it might have been introduced. Unlike the defect detection stage, which is known with certainty, the defect injection stage is more ambiguous; it is estimated from the nature of the defect and other related information. Using the stage-injected and stage-detected information, you can compute defect removal efficiencies, percentage distributions, and other metrics.
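As one illustration, the sketch below estimates the removal efficiency of a detection stage from (stage injected, stage detected) pairs. The formula, defects detected in the stage divided by defects present when the stage ran, is a common formulation and is used here as an assumption; the stage list is likewise illustrative.

```python
from typing import Iterable, List, Tuple

# Ordered life-cycle stages (illustrative; a project defines its own list).
STAGES: List[str] = ["requirements", "design", "coding", "unit testing",
                     "integration testing", "system testing", "acceptance testing"]

def removal_efficiency(defects: Iterable[Tuple[str, str]], stage: str) -> float:
    """Removal efficiency of `stage`, given (stage_injected, stage_detected) pairs.

    Efficiency = defects detected in the stage / defects present during the stage
    (injected at or before it and not removed by an earlier stage).
    """
    idx = STAGES.index(stage)
    detected_here = present = 0
    for injected, detected in defects:
        inj, det = STAGES.index(injected), STAGES.index(detected)
        if inj <= idx <= det:          # defect was present during this stage
            present += 1
            if det == idx:             # and was removed by it
                detected_here += 1
    return 100.0 * detected_here / present if present else 0.0
```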
Sometimes it is desirable to understand the nature of defects without reference to stages, but rather in terms of the defect category. Such a classification can help you to understand the distribution of defects across categories. For this reason, the type of defect is also recorded. Table 7.3 shows the types of defects possible, along with some examples. A project can also define its own type classification.
Table 7.2. Recording Defect Data

Data | Description | Mandatory/Optional
--- | --- | ---
Project code | Code of the project for which defects are captured | M |
Description | Description of the defect | M |
Module code | Code of the module in which the defect was found | O
Program name | Name of program in which the defect was found | O |
Stage detected | Stage in which the defect was detected | M |
Stage injected | Stage at which the defect was injected/origin of defect | M |
Type | Classification of the defect | M |
Severity | Severity of the defect | M |
Review type | Type of review | O |
Status | Current status of the defect | M |
Submitter | Name of the person who detected the defect | M |
Owner | Name of the person who owns the defect | M |
Submit date | Date on which the defect was submitted to the owner | M |
Close date | Date on which the submitted defect was closed | M |
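A record with the fields of Table 7.2 might be represented as follows. This is a sketch of the data being captured, with assumed field names and types, not the schema of the DCS itself.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DefectRecord:
    """One defect logged in the defect control system (fields as in Table 7.2)."""
    project_code: str                   # mandatory
    description: str                    # mandatory
    stage_detected: str                 # mandatory
    stage_injected: str                 # mandatory
    defect_type: str                    # mandatory; Table 7.3 classification
    severity: str                       # mandatory; Table 7.4 classification
    status: str                         # mandatory; submitted, fixed, or closed
    submitter: str                      # mandatory
    owner: str                          # mandatory
    submit_date: date                   # mandatory
    close_date: Optional[date] = None   # recorded when the defect is closed
    module_code: Optional[str] = None   # optional
    program_name: Optional[str] = None  # optional
    review_type: Optional[str] = None   # optional
```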
Finally, the severity of the defect is recorded. This information is important for the project manager. For example, if a defect is severe, you will likely schedule it so that it gets fixed soon. Also, you might decide that minor or unimportant defects need not be fixed for an urgent delivery. Table 7.4 shows the classification used at Infosys.
From this information, various analyses are possible. For example, you can break down the defects with respect to type, severity, or module; plot trends of open and closed defects with respect to modules, severity, or total defects; determine the weekly defect injection rate; determine defect removal efficiency; determine defect injection rates in different phases, and so on. In Chapter 11 you will see some uses of this data for monitoring the quality dimension and for preventing defects. That chapter also describes an example of the defect data entered in the case study.
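Two of these analyses, the breakdown of defects by a chosen field and the weekly trend of defects submitted and closed, are sketched below. The sketch assumes defects are available as dictionaries keyed by the Table 7.2 field names, with dates as Python date objects; it is illustrative, not the reporting code of the DCS.

```python
from collections import Counter
from datetime import date
from typing import Dict, Iterable, Tuple

def _iso_week(day: date) -> str:
    year, week, _ = day.isocalendar()
    return f"{year}-W{week:02d}"

def breakdown_by(defects: Iterable[dict], field: str) -> Counter:
    """Count defects by any recorded field, e.g. 'severity', 'type', or 'module_code'."""
    return Counter(d.get(field, "unknown") for d in defects)

def weekly_trend(defects: Iterable[dict]) -> Dict[str, Tuple[int, int]]:
    """For each ISO week, the number of defects submitted and the number closed."""
    submitted, closed = Counter(), Counter()
    for d in defects:
        submitted[_iso_week(d["submit_date"])] += 1
        if d.get("close_date"):
            closed[_iso_week(d["close_date"])] += 1
    return {w: (submitted[w], closed[w]) for w in sorted(set(submitted) | set(closed))}
```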
Table 7.3. Defect Types

Defect Type | Example
--- | ---
Logic | Insufficient or incorrect logic; errors in the algorithms used; wrong conditions, test cases, or design documents
Standards | Problems with coding/documentation standards such as indentation, alignment, layout, modularity, comments, hard-coding, and misspelling |
Redundant code | Same piece of code used in many programs or in the same program |
User interface | Specified function keys not working; improper menu navigation |
Performance | Poor processing speed; system crash because of file size; memory problems |
Reusability | Inability to reuse the code |
Design issue | Specific design-related matters |
Memory management defects | Defects such as core dump, array overflow, illegal function call, system hangs, or memory overflow |
Document defects | Defects found while reviewing documents such as the project plan, configuration management plan, or specifications |
Consistency | Failure to update or delete records in the same order throughout the system
Traceability | Lack of traceability of program source to specifications |
Portability | Code not independent of the platform |
Table 7.4. Defect Severity

Severity Type | Explanation for Categorization
--- | ---
Critical | Defect may be very critical in terms of affecting the schedule, or it may be a showstopper, that is, one that stops the user from using the system further.
Major | The same type of defect has occurred in many programs or modules and needs to be corrected everywhere (for example, coding standards are not followed in any of the programs). Alternatively, the defect stops the user from proceeding in the normal way, but a workaround exists.
Minor | This defect is isolated or does not stop the user from proceeding, but it causes inconvenience. |
Cosmetic | A defect that in no way affects the performance of the software product, for example, esthetic issues or grammatical errors in messages.
Measuring schedule is straightforward because you use calendar time. The detailed activities and the schedule are usually captured in the MSP schedule, so the estimated dates and duration of tasks are given in the MSP. Knowing the actual dates, you can easily determine the actual duration of a task.
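A trivial sketch of that arithmetic, assuming the planned and actual dates are available as Python date objects:

```python
from datetime import date

def actual_duration_days(actual_start: date, actual_end: date) -> int:
    """Actual calendar duration of a task, in days, counting both end points."""
    return (actual_end - actual_start).days + 1

def schedule_slippage_days(planned_end: date, actual_end: date) -> int:
    """Positive if the task finished later than planned, negative if earlier."""
    return (actual_end - planned_end).days
```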
If the bottom-up estimation technique is used, size is estimated in terms of the number of programs of different complexities. Although this metric is useful for estimation, it does not permit a standard definition of productivity that can be meaningfully compared across projects. The same problem arises if lines of code (LOC) are used as a size measure; productivity differs with the programming language. To normalize and employ a uniform size measure for the purposes of creating a baseline and comparing performance, function points are used as the size measure.
The size of delivered software is usually measured in terms of LOC through the use of regular editors and line counters. This count is made when the project is completed and ready for delivery. From the size measure in LOC, as discussed before, size in function points is computed using published conversion tables [12].
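To make the normalization concrete, the sketch below converts a LOC count into function points with a per-language conversion factor and then computes productivity in function points per person-month. The factors shown are placeholders for illustration only, not the values in the published conversion tables cited above.

```python
# Illustrative LOC-per-FP factors; real values come from the published
# conversion tables and vary by language and by table edition.
LOC_PER_FP = {
    "COBOL": 105,   # placeholder
    "C": 130,       # placeholder
    "Java": 55,     # placeholder
}

def size_in_fp(loc: int, language: str) -> float:
    """Convert delivered lines of code to function points."""
    return loc / LOC_PER_FP[language]

def productivity_fp_per_pm(loc: int, language: str, person_months: float) -> float:
    """Language-normalized productivity: function points per person-month."""
    return size_in_fp(loc, language) / person_months
```

With these placeholder factors, a 25,000-LOC Java system built with 20 person-months of effort would come to about 455 FP, or roughly 23 FP per person-month.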