13.2 Building a Measurement Process

The initial focus in the development of a measurement system should be on the process itself. There are some key requisites for the measurement process to succeed, including:

  • The measurement activity should be transparent.

  • The actual measurement tools should be pluggable into this process.

  • The measurement data should reside in a common repository or database.

  • Information from the database should be restricted to a need-to-know basis.

  • Software measurement will have to earn a position in the software development process.

Any measurement process that demands extra resources allocated to the measurement function will not succeed. Everyone in the software development process is painfully aware that their resources are stretched to the limit with an already impossible task and impossible deadlines. Any additional workload imposed by the measurement activity will be sufficient reason to slip delivery deadlines or eliminate the measurement activity itself. At the outset, there is no perceived benefit for software measurement by the troops on the frontlines of software development. No activity that demands resources without immediate payoff will last for long in this environment. Unfortunately, there will be a significant latency between the start of the measurement process and the first significant return on this measurement investment. Measurement is completely unnecessary and unwanted in the world of the software craftsman. It will have to be bootlegged in its incipient stages. If the measurement activity is embedded in the day-to-day operations of the software development processes, it can be made invisible.

13.2.1 Building an Initial Measurement System

Every system and every process has a beginning. Perhaps the best place for a software engineering measurement system to begin is by measuring source code. This will be less disruptive than just about anything else that can be measured. We will prime the measurement pump by building the first software measurement tool for our software. The primitive software metrics discussed in Chapter 5 would be a very good start for this process; indeed, the size metrics discussed in that chapter will be an adequate start. It should take less than 100 staff hours for one or two competent programmers to build such a tool. An alternative to building a tool would be to acquire one from the public domain of open source software. The downside of this strategy is that the tool must be certified to meet the standard for the appropriate enumeration of operators, operands, statements, and so on. It is quite likely that each tool will have its own standard. These tools must be rebuilt to conform to the evolving measurement standard suggested in Chapter 5.

The initial measurement tool can be regarded as a cipher. Its real value is to hold the place for a more competent tool that will be added later. For the moment, the sole objective is to get the measurement activity going. It is far better to collect data on a very few valid and standard metrics than to collect a lot of metrics of highly questionable value. Our basic measurement tool will collect its metric data at the source code module level. The tool will be integrated into the source code control system, such as RCS or SCCS, that is used for a project. Every time a source code module is checked into RCS, our metric tool will be invoked. The module metric data will be captured at the point of the save. There are many source code control systems in use today other than RCS. The important point is that the code be measured for every change. The main objective of this level of integration between the measurement tool and the configuration control system is that the code measurements be as current as the source code. There is no latency of measurement: the current measurement data reflects the current state of the source code base.
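
As a concrete illustration, the check-in trigger might invoke the metric tool and spool a measurement record each time a module is saved. The following is a minimal sketch; the tool name (measure_module), the hook's calling convention, and the spool file are all hypothetical assumptions, not a prescribed implementation.

    #!/usr/bin/env python3
    import json
    import subprocess
    import sys
    from datetime import datetime, timezone

    def measure(path):
        # Invoke the (hypothetical) metric tool, which is assumed to
        # print a JSON object of metric name/value pairs for the module.
        out = subprocess.run(["measure_module", path],
                             capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    if __name__ == "__main__":
        # Called by the check-in trigger as:
        #   checkin_hook.py <module path> <revision> <developer> <CR/PTR number>
        path, revision, developer, reason = sys.argv[1:5]
        now = datetime.now(timezone.utc)
        record = {
            "module_name": path,
            "revision": revision,
            "developer": developer,
            "date": now.date().isoformat(),
            "time": now.time().isoformat("seconds"),
            "reason": reason,          # CR or PTR number
            "metrics": measure(path),  # vector of source code metrics
        }
        # Spool the measurement event; a later process loads it into
        # the measurement database.
        with open("measurement_spool.jsonl", "a") as spool:
            spool.write(json.dumps(record) + "\n")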

Let us now turn our attention to the data that is available at the point that each module is measured. First of all, we have the name and revision number of the module from RCS. Second, we have a vector of metrics data for the module. Third, we know who made the changes. Fourth, we can know the date and time of the change. Finally, we can know why the changes were made. If the changes were made in response to a program trouble report (PTR), we can capture the trouble report number at this point. If the change was made in response to a particular change request (CR) document, we can capture the change request document number. All changes to code after the initial check into RCS will be made for exactly one of these two reasons. We can summarize the data available at each measurement point as shown in Exhibit 2. This data is bound together for each measurement event. It will be placed into our measurement database.

Exhibit 2: Module Version Data

  • Module_Name

  • Date

  • Time

  • Developer_Name

  • Revision_Number

  • CR or PTR_Number

  • Vector of Source_Code_Metrics

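One plausible rendering of this record in the measurement database is sketched below; the table and column names are illustrative assumptions only.

    import sqlite3

    con = sqlite3.connect("measurement.db")
    con.executescript("""
    CREATE TABLE IF NOT EXISTS module_version (
        module_name TEXT NOT NULL,
        revision    TEXT NOT NULL,
        developer   TEXT NOT NULL,
        date        TEXT NOT NULL,
        time        TEXT NOT NULL,
        reason      TEXT,   -- CR or PTR number
        metrics     TEXT,   -- vector of source code metrics (JSON)
        PRIMARY KEY (module_name, revision)
    );
    """)
    con.commit()
    con.close()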

The next step in the measurement process development will be to take our simple measurement database and put in place processes that will convert the data stored there to information. The first thing we might want to do is to get a handle on the rate of change and code churn in the database. To do this we will design a process to capture build information in the database. Each build can be characterized by the information shown in Exhibit 3. We will now implement processes to ensure that this data is added to the measurement database. If we know nothing more than the date and time of each build, we can easily reconstruct which modules actually went to the build from the code repository.

Exhibit 3: Build List

  • Build_Number

  • Build_Date

  • Time

  • Build_List of:

      • Module_Name

      • Revision_Number

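The reconstruction mentioned above can be automated. Assuming the module_version table sketched earlier, with ISO-formatted date and time columns, the following query recovers the most recent revision of each module checked in at or before the build timestamp (it relies on SQLite's documented bare-column-with-MAX behavior):

    import sqlite3

    def modules_in_build(db_path, build_date, build_time):
        # Latest revision of every module as of the build timestamp;
        # ISO date/time strings compare correctly as text.
        con = sqlite3.connect(db_path)
        rows = con.execute(
            "SELECT module_name, revision, MAX(date || ' ' || time) "
            "FROM module_version "
            "WHERE (date || ' ' || time) <= ? "
            "GROUP BY module_name",
            (build_date + " " + build_time,)).fetchall()
        con.close()
        return [(name, rev) for name, rev, _ in rows]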

With the build data and the source code module metric data alone, we have a tremendous information capability at hand. We can actually increase the utility of this data if we are able to baseline the data so that we can easily compare across builds. In Chapter 6, the fault index (FI) measure was introduced to serve as a fault surrogate. In Chapter 8 we learned how we could use the FI metric, or some other suitable quality criterion metric, to compare data across builds. To that end we will now create a baseline build record to store the necessary information for a baseline build. This record will contain the data shown in Exhibit 4.

Exhibit 4: Baseline Data

  • Baseline_Build_Number

  • Vector of Metric_Means

  • Vector of Metric_Standard_Deviations

  • Vector of Metric_Eigenvalues

  • Matrix of Metric_Transformation_Coefficients


We will obtain the baseline data through the statistical analysis of the modules that comprise the baseline build. The means and standard deviations are simple descriptive statistics. The eigenvalues and transformation coefficients can be obtained from a principal components analysis (PCA) of the metric data in the baseline build.
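
A minimal sketch of this statistical analysis follows, using numpy. Working from the correlation matrix and scaling the transformation coefficients to give unit-variance domain scores are conventional PCA choices assumed here, not prescriptions from the text.

    import numpy as np

    def baseline_statistics(X):
        # X: one row per module in the baseline build, one column per
        # metric. Returns the Exhibit 4 baseline data.
        means = X.mean(axis=0)
        stds = X.std(axis=0, ddof=1)
        Z = (X - means) / stds                 # standardized metrics
        R = np.corrcoef(Z, rowvar=False)       # correlation matrix
        eigenvalues, vectors = np.linalg.eigh(R)
        order = np.argsort(eigenvalues)[::-1]  # largest eigenvalue first
        eigenvalues = eigenvalues[order]
        vectors = vectors[:, order]
        # Transformation coefficients mapping standardized metrics to
        # unit-variance principal component (domain) scores.
        transform = vectors / np.sqrt(eigenvalues)
        return means, stds, eigenvalues, transform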

The initial metric data collected by the initial measurement system will be far from comprehensive. The focus of this initial stage should be on the processes surrounding the measurement devices, and on the storage and maintenance of the data that streams from the measurement processes.

13.2.2 Building a Measurement Reporting System

The measurement processes discussed in the previous section will generate an astonishing amount of data. These data must be sent to the measurement database as they are generated, as part of the automated measuring process. The next logical step is to build the processes that will convert these data into information. The first such process will build FI values for each of the versions of each of the modules in the database. Beyond that, a set of SQL procedures can be built to establish new and meaningful relationships among the data elements.

The baseline build is not necessarily a static position in the build sequence. The baseline build is merely a reference point. It can be changed at any point. However, once the baseline build is changed, then the FI values must be recalculated for all versions of the modules. FI, remember, is constructed from the baseline means, standard deviations, and eigenvalues of the baseline build. If this baseline build changes, so too will the statistics associated with the build. Also, it must be remembered that each build will probably differ in terms of the sets of modules that comprise the build.
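
A sketch of the FI computation for a single module from the Exhibit 4 baseline data follows. The centering at 100 matches the convention, used below, that the baseline build's average FI is 100; the spread constant of 10 is an assumption made for illustration.

    import numpy as np

    def fault_index(x, means, stds, eigenvalues, transform,
                    center=100.0, scale=10.0):
        # Standardize the module's metric vector against the baseline.
        z = (x - means) / stds
        # Domain scores from the baseline transformation coefficients.
        d = z @ transform
        # Eigenvalue-weighted sum of domain scores; over the baseline
        # modules this quantity has mean zero, so FI averages 'center'.
        raw = d @ (eigenvalues / eigenvalues.sum())
        return center + scale * raw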

After very few builds of a moderate-sized software system there will be an astonishingly large amount of data in the database. We will now turn our attention to the process of converting these data to information. From a global perspective, the first thing we might wish to know is just how the system has changed from its inception to the present. The code churn value discussed in Chapter 8 is a very good indicator of total system code change. We can compute the system code churn for any two builds by taking the module versions from the build lists of the two builds in question and computing the difference in FI for each module pair. In doing so, we must remember that, in all likelihood, there will be some modules in the first build that are not in the second, and vice versa. Code churn values can be established for any two builds in the database. It would be logical to create standard SQL procedures for computing the code churn for all sequential build pairs in the database. We can then see, at a glance, how much the system has changed over its entire development history.
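
A sketch of the churn computation for one build pair follows, with FI values keyed by module name. Treating modules present in only one build as contributing their full FI to the churn is an assumption consistent with the caution above.

    def code_churn(fi_old, fi_new):
        # Total code churn between two builds, given module-name -> FI
        # mappings for each build. Modules present in only one build
        # contribute their full FI to the churn.
        churn = 0.0
        for name in fi_old.keys() | fi_new.keys():
            churn += abs(fi_new.get(name, 0.0) - fi_old.get(name, 0.0))
        return churn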

From a global or system perspective, the FI values for all modules in each build can be summed. For the baseline build, the average FI was set to 100. If there are 1000 modules in this build, then the total system FI would be 100,000. If, on a subsequent build, the total system FI increases, then it will be apparent that the net complexity, and hence the net fault burden, of the system is increasing. Similarly, we can sum the code churn values for all modules in each build. These summed code churn values represent the net change in the system from one incremental build to the next. If a system is relatively stable and has changed little from one build to the next, then the net code churn will tend to zero. A system that has a large net code churn is far from ready for deployment.

We are now, for the first time, in a position to track process measures. Faults are introduced into software systems by people operating under formal or informal software development processes. What we would really like to know is the actual rate at which these faults are being introduced. In Exhibit 2 we now have two relevant pieces of data: a module version number and a PTR number. With the module version number we can go to the source code control system and retrieve the changes that were made between the version associated with the PTR number and the previous version of the module. This will permit the actual number of faults to be determined, as per the discussion in Chapter 5. For each PTR and its associated module version number, it is thus possible to compute a total fault count. Once the specific lines of source code associated with each PTR have been identified, it is then possible to track backward through the source control system to find exactly when those lines first appeared in the system.

From Chapter 8 we realize that there are two distinct sources of faults in any system. There are those faults that were put into the code during the initial software development, prior to the first system build. These faults are put into the code by people following regular, institutionalized software development processes. The second type of fault is one that was introduced later in the software evolution process by change activity. This can be a very different type of process from the initial software development process. In Chapter 5 we introduced two proportionality constants, k and k', that represented the relative proportion of faults introduced during the initial development process (k) and the relative proportion of faults introduced by subsequent maintenance activity (k'). In the case of both proportionality constants, a smaller value represents a better software process. This provides a solid foundation to measure the relative effect of a change in software process that can be introduced during the development of any system. If a new software process is so introduced, there should be a diminution in the maintenance activity constant k'. Let k'_1 represent the proportionality constant for the initial software maintenance process. When a new software maintenance process is introduced, there will be a new proportionality constant, k'_2. If the new process does, in fact, represent an improvement over the old one, then we would expect k'_2 < k'_1.
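
As a rough, hypothetical illustration of how such a constant might be estimated from the measurement database: divide the faults traced to maintenance changes by the total code churn of those changes, and compare the ratio before and after the process change.

    def estimate_k_prime(maintenance_faults, total_churn):
        # Faults introduced by change activity per unit of code churn;
        # a simplistic estimator, offered only for illustration.
        return maintenance_faults / total_churn

    # Hypothetical numbers: 42 faults over a total churn of 3500 gives
    # k'_1 = 0.012; if the new process yields k'_2 = 0.008 over a
    # comparable churn, the change appears to be an improvement.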

In Exhibit 2, we recorded the developer name. This is absolutely the most inflammatory information in the metric database and it is the most easily misused. It must be remembered that people introduce faults into code because they are following specific processes, either formal or informal. The process directly determines the rate at which people introduce faults. When software systems are late in shipping or have reliability problems, it is far too easy to comb the database for an individual to "sack." Individuals should never be the subjects of scrutiny. There is exactly nothing to be gained in the analysis of information from a single individual. Individuals do not fail; processes do. There is nothing that can create failure of an incipient software measurement system faster than to use this data to analyze individual performance.

There are really two entwined issues relating to developer introduction of faults. The first relates to variability. If there is a very tightly controlled software development process with equally tight audit procedures built into it, then there will be little or no variation in the rate at which different developers introduce faults into code. The very first sign of a poor software development process is visible in the variation in fault rate across developers. This is perhaps the single most important piece of information that we can extract from our measurement database to this point. It would be pointless, and indeed meaningless, to attempt to introduce a change into an already bad process. Before new software development processes are considered, we must first seek to gain control of the current processes. Our first clear indication that we have begun to gain the high ground of good software process is that the variation in the rate of fault introduction across individual developers is very low. Then, and only then, can we begin to consider changing the underlying software development process. Thus, the first statistic that we will extract from our new measurement database that involves people will be the variation in the individual rate of fault introduction. It is relatively easy to obtain this information. We have at our disposal all the modules that have been changed by an individual; this can be obtained from the source control system. We also have a very precise means of determining exactly which code elements have been changed by this person. The FI values of each of the program modules will allow us to compute the code churn for these changes. Of these changes, we can easily measure the number of faults introduced by this individual during the change process. Again, we must accept the fact that the rate of fault introduction by a single individual has little or no meaning. It is the variation in rate across all developers that will tell us how much work we will have to do to eliminate these sources of variation.
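
A sketch of this statistic follows. The change-record layout is an assumption; the essential point is that what is reported is the variation across developers, never any individual's rate.

    import statistics

    def fault_rate_variation(change_records):
        # Each record is assumed to carry the developer, the number of
        # faults later traced to the change, and the code churn of the
        # change. Returns the standard deviation of per-developer
        # fault introduction rates (faults per unit churn).
        faults, churn = {}, {}
        for rec in change_records:
            dev = rec["developer"]
            faults[dev] = faults.get(dev, 0) + rec["faults"]
            churn[dev] = churn.get(dev, 0.0) + rec["churn"]
        rates = [faults[d] / churn[d] for d in faults if churn[d] > 0]
        return statistics.stdev(rates) if len(rates) > 1 else 0.0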

It is very difficult to gain control of a process that has run amok. A process clearly out of control is one that has a large variation in the rate of fault introduction across individuals. As an example of such a bad process, we can imagine a system in which a single individual consistently forgets to check the range of a divisor, resulting in code with zero divide exceptions. Another developer consistently forgets to free memory for data structures no longer in use, resulting in memory leaks. Yet another developer consistently forgets to initialize pointer values. There is absolutely no excuse for a zero divide exception. One way to gain control of this problem is to fire the guilty party. We could also use this same strategy to eliminate memory leaks. Sooner or later, we would have no staff left and our rate of fault introduction would fall to zero. Regrettably, so too would our code production. Zero divide exceptions are very easy to deal with at the process level. Perhaps one of the easiest ways to eliminate them is to institute inspection and review processes that will ensure that there is adequate range checking for each divisor before the divide operation is performed. It is very reasonable to assume that if one divide operation produced a divide exception, then every other divide operation in the program is an equally likely candidate for a divide exception. They must all be examined. Any new division operations that are subsequently introduced will be subject to the same level of review.

The second issue of a developer's rate of fault introduction as a measure of process is the variation within each developer's rate. Not all faults are equal. In Chapter 5 we learned that there can be many structural elements in code that will have to change to fix a single problem. By the same token, there can be relatively little code that will have to change in response to a PTR. This means that there may be substantial within-developer variation. Some code that a particular developer writes may be fraught with faults, while other code may be quite clean. Again, this is a process issue. There are many different attributes that a code module can have. Some modules are compute intensive with substantial mathematical computation; some modules are highly interactive with the operating system; and other modules are I/O intensive. It is reasonable to believe that if we assign a developer who is mathematically naive to a task of coding a complex numerical algorithm, that person is doomed to struggle through the coding task with great difficulty. People have attributes just as do the program modules. Matching people to tasks is a very simple measure that can be taken to eliminate within-developer variation.

There are many other ways in which the data in Exhibits 2 through 4 can be combined to produce very useful management information. If the data is precise and accurate, then the information that can be extracted from this data will be meaningful. At this early stage of building a measurement system, considerable attention must be given to the nature of the reporting process.

13.2.3 Measurement for Testing

Once the processes for gathering static information about the code base, code evolution, and preliminary developer information are in place, the next logical place to focus is the test process. As indicated in Chapter 11, static source code measurement is like one-hand clapping. Perhaps the greatest utility of the static measures of the code base will come in the testing phase. Here we would like to install the basic elements of a system that will collect information on test activity. This will be the dynamic measurement system.

There are many different levels of granularity that can be used for dynamic measurement. Consistent with the level of granularity of source code measurement, dynamic measurement should also be performed at the module level. Each test activity will exercise a subset of the code. The code will be suitably instrumented to generate the test execution profile. As discussed in Chapter 10, the test execution profile is simply a vector equal in length to the cardinality of the set of all modules that have ever been in any build. Each module in every build will have a unique integer identifier that distinguishes it from every other module. As each test is run, each module exercised by that test will cause a corresponding element in the test execution profile to be incremented every time the module is entered. At the conclusion of the test, the test execution profile will contain the frequencies of execution of each module exercised by that test. These data, together with the test ID, test case, test date, build number, and tester name, will be added to the measurement database. These new data elements are shown in Exhibit 5.
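
The instrumentation might take the following form; the profiler object and its calling convention are illustrative stand-ins for whatever probes are actually compiled into the system.

    from collections import defaultdict

    class ExecutionProfiler:
        def __init__(self):
            # Module identifier -> count of entries during this test.
            self.counts = defaultdict(int)

        def enter(self, module_id):
            # Called at every module entry by the instrumentation probe.
            self.counts[module_id] += 1

        def test_execution_vector(self, n_modules):
            # Frequencies for every module that has ever been in any
            # build, indexed by the module's unique integer identifier.
            return [self.counts[i] for i in range(n_modules)]

    # Usage during a test run (module identifiers are hypothetical):
    profiler = ExecutionProfiler()
    profiler.enter(7); profiler.enter(7); profiler.enter(12)
    vector = profiler.test_execution_vector(16)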

Exhibit 5: Test Execution Data

  • Test_ID

  • Test_Date

  • Test_Case_ID

  • Test_Case_Version_Number

  • Test_Time

  • Tester_Name

  • Build_Number

  • Test_Execution_Vector


With these data, we can now compute the test statistics presented in Chapter 11. First would be test entropy. A low entropy test will concentrate its energies on a small number of program modules, whereas a high entropy test will distribute its attention to a much broader range of modules. A test execution profile can be computed using the test execution vector. This execution profile will show the proportion of time spent in each program module during the test activity. These data, in conjunction with code churn data, can be used from a delta testing perspective to compute both the test effectiveness and test efficiency of the specific test represented by the Test_ID.
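
A minimal sketch of the execution profile and entropy computations follows; the base-2 logarithm is a conventional choice, not prescribed by the text.

    import math

    def execution_profile(vector):
        # Proportion of execution events observed in each module.
        total = sum(vector)
        return [v / total for v in vector] if total else []

    def test_entropy(vector):
        # Shannon entropy of the execution profile: low when a test
        # concentrates on a few modules, high when it spreads broadly.
        return -sum(p * math.log2(p)
                    for p in execution_profile(vector) if p > 0)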

At any point, the test execution vectors for all tests to date can be summed into a single cumulative test execution frequency vector (CTEFV), as discussed in Chapter 11. From the CTEFV, a cumulative execution profile can be computed, spanning all test activity to date. These data, together with the FI values for the current build number, will yield cumulative measures of test effectiveness and efficiency.
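
For example, the CTEFV is simply the element-wise sum of the execution vectors of all tests run to date; the cumulative execution profile then follows exactly as in the previous sketch.

    def ctefv(test_execution_vectors):
        # Element-wise sum; all vectors share the same module indexing.
        return [sum(counts) for counts in zip(*test_execution_vectors)]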

Each test activity will also have a binary outcome. Either a failure was noted while the code was run or no failure occurred. If the test failed, then the failure event will be resolved to a fault in a particular program module. Directly corresponding to the CTEFV we also have the failure frequency vector (FFV), as discussed in Chapter 12. Each time a failure is resolved to a particular program module, the element of the FFV corresponding to the program module will be incremented by one. From these data we can then work out the current reliability of the system for a given operational profile.
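
A deliberately simplified sketch of how the FFV and CTEFV might combine into a reliability estimate follows; Chapter 12's treatment is more complete, and the naive failures-per-execution ratio used here is an assumption made purely for illustration.

    def reliability_estimate(ffv, ctefv, operational_profile):
        # Observed failures per module execution, taken naively as the
        # module's failure probability.
        theta = [f / e if e else 0.0 for f, e in zip(ffv, ctefv)]
        # Expected probability of failure-free operation under the
        # given operational profile.
        return 1.0 - sum(p * t for p, t in zip(operational_profile, theta))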

Thus, the dynamic metric data will permit us to evaluate each individual test and the cumulative test activity for the entire system. In addition, we will also have a working estimate of the overall system reliability.

13.2.4 Requirements Tracking System

Chapter 9 introduced the notion of tracking in the evolution of software requirements. This tracking system should be online and accessible to all software developers. For the purposes of this discussion, we are going to include this management system in our measurement database. Of interest to us from the measurement perspective are the system operational specifications as shown in Chapter 9, Exhibit 14; the system functional specifications as shown in Chapter 9, Exhibit 17; and the design module specification as shown in Chapter 9, Exhibit 4. The current requirements definitions are a necessary and vital part of the software testing system.

13.2.5 Software Test System

The basic objective of the software testing process is to ensure that the software meets the basic quality standards. On the quality side, the reliability of the software is of great importance to the software test activity. In particular, there must be some assurance that the software will not fail in a predetermined user environment. As previously discussed, it is a very unrealistic goal of the test process that all software faults will be found and eliminated. New faults are continually being added to systems as they evolve, even as old faults are found and removed. What is important is that the faults that remain are quiescent. The typical user will not expose them in his or her use of the system. Also of interest is the fact that the system can be certified to meet the nonoperational and nonfunctional requirements articulated in the software requirements specification.

13.2.5.1 Delta Testing.

Chapter 11 introduced the notion of delta testing. At each new build, code will be added or deleted. Each of these changes creates the opportunity for new faults to be introduced into the code. The likelihood of introduction will be proportional to the measure of code churn. For each build it is easy to extract from the measurement system the values of code churn for that build. This will show the distribution of the changes that have been made to the code. The code churn of module j between builds k and k+1,

    χ_j^(k,k+1) = | FI_j^(k+1) - FI_j^(k) |

will clearly reflect where the greatest changes have occurred. From Chapter 11, if we were to execute the best possible test for this changed code, we would construct an execution profile that spends the majority of its time in the functionalities containing the modules that have changed the most from one build to the next. Let

    χ^(k,k+1) = Σ_j χ_j^(k,k+1)

This is the total code churn between builds k and k+1. For delta test purposes we would like to exercise each module in proportion to the change that has occurred in the module during its current revision. We will compute this proportion as follows:

    p_j = χ_j^(k,k+1) / χ^(k,k+1)

The distribution of p clearly shows where we must distribute our test activity to create maximum exposure for the faults that may have been introduced in the new build k+1. If each module were exercised in proportion to the amount of change it has received, we would maximize our exposure to these new faults.

Constructing tests that will distribute test activity to modules in a particular fashion is difficult, if not impossible, without a suitable measurement database. Fortunately, in our measurement database we will have the requirements specifications that show the mapping between functionalities (and operations) and specific program modules. With this mapping information it will be relatively simple to design tests that will cause specific modules to be executed.
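
Given FI values for two builds, the delta test profile p can be computed directly, as in the following sketch:

    def delta_test_profile(fi_old, fi_new):
        # Proportion p_j of total code churn attributable to each
        # module between builds k (fi_old) and k+1 (fi_new); modules
        # present in only one build contribute their full FI.
        churn = {m: abs(fi_new.get(m, 0.0) - fi_old.get(m, 0.0))
                 for m in fi_old.keys() | fi_new.keys()}
        total = sum(churn.values())
        return {m: c / total for m, c in churn.items()} if total else {}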

13.2.5.2 Functional Testing.

A functional test of a software system is a test activity designed to exercise one or more functionalities. To initiate this process, test cases will be derived from the program functional specifications. Each test case will evolve just as the requirements evolve. The test case specification elements are shown in Exhibit 6. These specifications should be maintained by a configuration control system in the same fashion as the source code and the software requirements specifications. It is clear that the functionalities will change over time. As these functionalities change, the test cases related to the functionalities must also change. Some functionalities may be deleted. Those test cases that reference these deleted functionalities must either be modified or deleted. The entire software structure of requirements and code is a living system. It is in a constant state of change. This implies that there is no such thing as a standard test case. Test cases are evolving documents as well.

Exhibit 6: Functional Test Case Specification

  • Test_Case_ID

  • Test_Case_Name

  • Test_Setup

  • Input_Data_Requirements

  • Execution_Steps

  • Expected_Outcome

  • Functionalities_Vector

  • Functionality_Requirement_Number

  • Functionality_Version_Number


Each test case will have a test case ID and name. It will also have the canonical test information, such as test setup information, input data requirements, execution steps, and expected outcomes. These data will be placed under some type of document revision control system. The functionalities vector contains one or more pairs of numbers representing the functionalities that the test case is designed to exercise. Whenever any of the functional requirements represented in a test case's functionalities vector is updated, the corresponding test case should be updated as well. Whenever a test case is modified, it may well represent a new test activity altogether.

Each functional test case will generate a test execution profile when it is executed. These data are, of course, stored in the test execution data on the database, as is shown in Exhibit 5. It is vital that the test execution data shown in this table distinguish between the test case versions.

In addition to the set of functionalities, there may also be a number of nonfunctional requirements that the software must meet. Therefore, test cases must be constructed that will assess these requirements. The documentation for these test cases would be very similar to that of Exhibit 6.

13.2.5.3 Operational Testing.

An operational test of a system is a test activity designed to exercise one or more system operations. The management of test cases for operational testing is very similar to that for functional testing. The requisite data for each of these tests are shown in Exhibit 7. Again, these data are dynamic. As was the case for the functional test case specification, the operational test case specification also contains a vector of the operations exercised by the test case. As each operation changes with the evolving operational requirements, the corresponding test cases must also be updated to contain the version data of the affected operations.

Exhibit 7: Operational Test Case Specification

  • Test_Case_ID

  • Test_Case_Name

  • Test_Setup

  • Input_Data_Requirements

  • Execution_Steps

  • Expected_Outcome

  • Operations_Vector

  • Operation_Requirement_Number

  • Operation_Version_Number


Associated with each operational test case specification is, of course, the test case ID. These ID numbers will permit the test case results to be parsed correctly into their functional or operational test execution profiles. Also, as was the case with the functionalities, there may be a family of nonoperational specifications. This will mandate the certification of the software against these nonoperational specifications. The test cases for the nonoperational specifications would be very similar in nature to the contents of Exhibit 7.

13.2.5.4 Software Certification

The final stage of the test process is the certification process. Prior to the design of the software, it is imperative that we understand just how it will be used when it is deployed. The essence of this use is embodied in the operational profile. A typical user will not randomly distribute his activities across the suite of operations. He will distribute his activities according to an operational profile. Prior to the delivery of any software system, the final test exercise should validate that the system will perform flawlessly under the projected operational profile. That is the first step. Test cases should be developed that will mimic this operational profile as closely as possible.

The second step in the software certification process is the validation of the operational profile. As the software system is deployed to the field for a limited beta test, each system should be instrumented so that the actual operational profile of the system in the field can be closely monitored. If the actual use of the system in the field is consistent with the projected operational profile, the software will have passed the final level of certification. If, on the other hand, the field operational profile is significantly different from the design operational profile, then the software must be recertified for the actual or observed operational profile.

There will, of course, be no standard operational profile. Each user will use the system in a slightly different manner; that is, there will be some variation in the operational profile from user to user. As per the discussion in Chapter 10, we can measure the distance between each user's observed operational profile and the design operational profile. If the average distance between the observed behavior in the field and the design operational profile is sufficiently large, then the certification process must be reinitiated. Similarly, if the observed variation in the departures from the design operational profile, Var(d), is too large, the software must be reevaluated from the standpoint of its robustness.
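
A sketch of this check follows; Euclidean distance is used as one plausible distance measure, and both thresholds are illustrative assumptions.

    import math
    import statistics

    def profile_distance(observed, design):
        # Distance between an observed operational profile and the
        # design operational profile.
        return math.sqrt(sum((o - g) ** 2 for o, g in zip(observed, design)))

    def needs_recertification(user_profiles, design, mean_limit, var_limit):
        # Recertify when the average departure from the design profile
        # is too large, or when Var(d) across users is too large.
        d = [profile_distance(u, design) for u in user_profiles]
        var_d = statistics.variance(d) if len(d) > 1 else 0.0
        return statistics.mean(d) > mean_limit or var_d > var_limit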

13.2.6 Program Trouble Reporting System

In order that faults might be tracked correctly, there must be some mechanism to report problems that develop as a software system evolves. There are several ways that problems in the execution of the code can occur. In every case, however, the program fails to execute its requirements correctly. The initial purpose of the program trouble reporting (PTR) system is to document the departure of the program execution from an established standard. Once the departure (or apparent departure) of the program from standard behavior is observed, the problem must be reported for subsequent analysis.

At the point when a problem is evident, certain information must be captured. The initial capture data is shown in Exhibit 8. First, let us observe that anyone in the universe can report a problem, not just testers or developers. In the resolution process, not all PTRs will result in one or more faults being recorded. Many times, the person reporting the apparent problem will not have fully understood the specified behavior of the system.

Exhibit 8: The Program Trouble Report

  • System_Name

  • PTR_ID

  • Build_Number

  • Reporter_ID

  • Date

  • Time

  • Test_Case_ID

  • Failure_Analysis

  • Severity

  • Priority

  • Apparent_Cause

  • Resolution


The purpose of the PTR is to report apparent or incipient failures in the system. The PTR tracking system will track trouble reports, perhaps on multiple systems. As each PTR is entered on the system, it is automatically assigned a unique PTR_ID by the tracking system. For the purposes of the measurement system, this identification number is the most important piece of data that will be collected: it will link all subsequent change activity that stems from this PTR. The next most important item is the system build number. Each build is characterized by a set of specific code modules, functionalities, and operations. As the features of a system change from build to build, the problem reported in the PTR may become obsolete with regard to the latest system build.

There are four different types of source documents that must be managed in the software development process: (1) the operational requirements specification, (2) the functional requirements specifications, (3) the low-level design elements, and (4) the source code. A PTR may be filed against any of these four document types. The code may be working perfectly well according to its design, and the system may be functionally correct; it may not, however, be doing what the user wants it to do. In that case a PTR would be filed and the appropriate operational specifications would be altered to reflect the user's requirements. Here, the fault is in the operational specification, not in the code or functionality of the system. Assuming that the code is working as specified, the code will not change under this PTR; only the system operational specification will change. Simply stated, a PTR will resolve to a fault in one of the four types of documents. If other system documents must be altered as a consequence, they must be altered under a change request.

13.2.7 Program Change Request System

Programs will change as a result of evolving requirements, through changes in either operations or functionalities. Operations and functionalities can be added, modified, or removed from the system. If a new operation is added to the system, there must necessarily be at least one new functionality added, plus at least one new program module. Similarly, if one new functionality is added to the functional specifications, then there must be at least one module added to the design and source code as well; this also means that at least one operational requirement must change. Any changes to operations, functionalities, design, or source code elements must reference the change request (CR) number, as shown in Exhibit 9.

Exhibit 9: The Change Report

  • System_Name

  • CR_ID

  • Build_Number

  • Requester_ID

  • Date

  • Time


After the first build of any system, any alteration of any system document should have a reason for that alteration. If the document is flawed and contains one or more faults, then the revision control data for the new version must reflect either a PTR or a CR. No change to any document should be made without the authorization of a review process and a PTR or CR. This may seem like an overly bureaucratic process, but it is a necessary part of the engineering discipline that must be instituted for a successful software engineering program to be established. In the construction of modern aircraft, it would be absolutely unthinkable for a worker on the assembly floor to make an undocumented change to an aircraft airframe. It should be equally unthinkable that a software developer should be empowered to make undocumented changes to a software system.

13.2.8 Measuring People

It is not recommended that people attributes be part of an initial software development measurement system. The potential for misuse of these data is far too great to justify any potential value they might have. It is too easy, for example, to look at people as the source of faults in code. It is really very simple to find out which developers, designers, or requirements analysts contributed which problems to the system. This information, for immature measurement systems, will inevitably contribute to a witch-hunt. Faults are, indeed, put into documents by people. However, it is the processes followed by those people that caused them to put the faults into the code and leave them there. It is the process that failed to trap the potential fault before it created problems in the design or in the code. In the initial stages of the development of our measurement system, the entire focus should be on process issues, not people.


