Code Metrics

Today, a number of different tools exist to measure code metrics. Metrics are available to measure many aspects of code, from code size to code complexity to errors and other measures of code performance. One of the first developers of coding metrics was Thomas McCabe, who in 1976 developed a mathematical technique, the cyclomatic complexity metric, which measures the control structure of software. This metric characterizes software in terms of numerical measures of complexity. Using such metrics, it is possible to identify software modules that would be difficult to test or maintain.

Since McCabe developed his cyclomatic complexity metric, a number of other types of software metrics have been developed, including:

  • Halstead metrics (a formula sketch follows this list)

  • Line count metrics

  • Object-oriented metrics

  • Boolean metrics
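
Of these, the Halstead metrics are the most formula-driven: they derive measures such as vocabulary, length, volume, difficulty, and effort from counts of the distinct and total operators and operands in a module. The following Python sketch shows the core arithmetic only; it assumes the token counts n1, n2, N1, and N2 have already been produced by a tokenizer, since how individual tokens are classified as operators or operands varies from tool to tool.

    import math

    def halstead(n1, n2, N1, N2):
        """Compute the basic Halstead measures from operator/operand counts.

        n1/n2 are the numbers of distinct operators/operands; N1/N2 are the
        total occurrences of operators/operands.
        """
        vocabulary = n1 + n2
        length = N1 + N2
        volume = length * math.log2(vocabulary)
        difficulty = (n1 / 2) * (N2 / n2)
        effort = difficulty * volume
        return {"vocabulary": vocabulary, "length": length, "volume": volume,
                "difficulty": difficulty, "effort": effort}

    # Hypothetical module with 10 distinct operators, 8 distinct operands,
    # 30 operator occurrences, and 24 operand occurrences:
    print(halstead(n1=10, n2=8, N1=30, N2=24))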

No matter what metrics you decide to use, here are some common traps to avoid when implementing a software metrics system:

  • Lack of management commitment:

    Management commitment is critical to the success of any metrics program. If the reasons behind the metrics program are not explained to the developers in a clear fashion, they are not likely to support the program. The management team should not only verbally support any metrics program but also make it clear that metrics will be a requirement of job performance. Of course, before managers can explain to developers why a good metrics program is being required, they must first be educated themselves about the value of software measurement. A good resource is the book Practical Software Metrics for Project Management and Process Improvement.

  • Lack of communication and training:

    Communication as to why a metrics program is being used is crucial to the acceptance of such a program by developers. Developers need to view the metrics program as a step toward improving the quality of the software project, not as a personal measurement of productivity. Once the reasons for the metrics program are clearly communicated, individual developers will need to be trained on the metrics tools being used in order to make sure the right data is collected at the right time.

  • Measuring the wrong thing at the wrong time:

    Some metrics programs go overboard and start collecting hundreds of metrics on day one. When too many metrics are collected too early in a program, the resulting mass of data can be meaningless to management. On the other hand, if you start out collecting only one or two metrics, the results may be similarly ambiguous. The right approach is to decide which aspects of the development team's work are most important to understand in order to meet the business goals of the organization, and then start by collecting a sample of related metrics. As you gain experience with these metrics, you can add or remove individual measurements depending on how useful they prove.

  • Using metrics data as part of developer performance reviews:

    The most sure-fire way to ruin any metrics program is to tie specific metrics to an individual's performance review. If developers think the metrics program is simply a "big brother" attempt by management to rank developers, they will either stop reporting metrics completely or simply report metrics that make them look more favorable. The solution is to make it clear to developers that the metrics program is being instituted to better understand the development process, not to rank individual developers. Furthermore, you should control the scope of visibility of different types of metrics. For instance, certain metrics should be kept private to the individual and their manager, while other metrics should be shared with the entire development team.

McCabe Metrics

The following is a sampling of some of the McCabe metrics that can be collected by a number of automated metrics tools.

Cyclomatic Complexity

Cyclomatic complexity is a measure of the complexity of a module's decision structure. It is calculated from the number of linearly independent paths through the module, which also equates to the minimum number of paths that should be tested. Low-quality, error-prone software often displays high cyclomatic complexity.
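
As a rough illustration, the following Python sketch estimates cyclomatic complexity using the common approximation of one plus the number of decision points found in the source. This is a simplification for demonstration purposes; production tools build the full control-flow graph and count more decision constructs than the handful assumed here.

    import ast
    import textwrap

    # Node types treated as decision points in this simplified approximation.
    DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

    def cyclomatic_complexity(source):
        """Approximate cyclomatic complexity as 1 + the number of decision points."""
        tree = ast.parse(textwrap.dedent(source))
        decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
        return 1 + decisions

    sample = """
    def classify(n):
        if n < 0:
            return "negative"
        elif n == 0:
            return "zero"
        for d in (2, 3, 5):
            if n % d == 0:
                return "small multiple"
        return "other"
    """
    print(cyclomatic_complexity(sample))  # 5: one base path plus four decision points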

Essential Complexity

Essential complexity is a measure of the number of unstructured constructs contained in a module. Unstructured constructs tend to decrease code quality and make it more difficult to modularize code. With unstructured constructs, making changes in one part of the code often causes errors to appear in other parts which may depend on the changed code.

Module Design Complexity

Module design complexity measures a module's decision structure as it relates to other modules. This quantifies how much effort will be required to integrate and test the module with subordinate modules. A high module design complexity leads to a high degree of control coupling between modules, making it difficult to isolate, maintain, and reuse individual software components.

Design Complexity

Design complexity measures the interaction between modules in a program. This metric provides a summary of the module design complexity and thus a good estimate of the overall integration testing effort that will be required by the program. A high design complexity implies complex interactions between modules, which leads to a difficult-to-maintain program.
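
As a hedged illustration of how such a summary is commonly aggregated, the sketch below assumes that each module's design complexity has already been measured, sums those values into a program-level figure S0, and uses S1 = S0 - n + 1 (with n the number of modules) as an estimate of the number of integration tests. This follows the widely cited McCabe design-complexity formulas but is illustrative rather than the exact algorithm of any particular tool.

    def design_complexity(module_iv):
        """Aggregate per-module design complexities into program-level figures.

        module_iv maps a module name to its measured module design complexity.
        Returns (S0, S1), where S1 approximates the number of integration tests.
        """
        s0 = sum(module_iv.values())
        s1 = s0 - len(module_iv) + 1
        return s0, s1

    # Hypothetical three-module program:
    print(design_complexity({"parser": 4, "planner": 6, "executor": 3}))  # (13, 11)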

Number of Lines

The number of lines in a program is one of the most basic of all coding metrics. As such, this metric does not offer much value if taken in isolation. It is also one of the most overused and misused metrics. We have seen many organizations measure the number of lines on a single project and then use this figure as a "magic number" against which future project sizing and developer performance are measured. Of course, this is the wrong approach to take. It is very easy for a developer to produce huge volumes of code that are of poor quality and score very poorly against any other metric; we have seen this happen in more than one case when management insisted on simply measuring the number of lines produced. Nevertheless, the number of lines, when used in conjunction with other metrics, does contribute to an understanding of a program. Generally speaking, smaller modules (with fewer lines) will be easier to understand and maintain.
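
Even this simple metric involves choices about what to count. The following sketch shows one common convention for Python source, reporting blank, comment, and code lines separately; the rule that a comment is any line beginning with "#" and the example file name are simplifying assumptions.

    from pathlib import Path

    def count_lines(path):
        """Count total, blank, comment, and code lines in a Python source file."""
        counts = {"total": 0, "blank": 0, "comment": 0, "code": 0}
        for line in Path(path).read_text().splitlines():
            counts["total"] += 1
            stripped = line.strip()
            if not stripped:
                counts["blank"] += 1
            elif stripped.startswith("#"):
                counts["comment"] += 1
            else:
                counts["code"] += 1
        return counts

    # Example usage (the file name is hypothetical):
    # print(count_lines("billing/invoice.py"))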

Normalized Complexity

Normalized complexity is simply a module's cyclomatic complexity divided by the number of lines of code in the module. This division factors size out of the cyclomatic measure and identifies modules with unusually dense decision logic. A module with dense decision logic will require more effort to maintain than a module with less dense logic.
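
A worked example makes the point: a 40-line module with cyclomatic complexity 10 scores 0.25, while a 200-line module with the same complexity scores 0.05, so the first module packs much denser decision logic. A minimal sketch, assuming the inputs come from the cyclomatic and line-count measurements described above:

    def normalized_complexity(cyclomatic, lines_of_code):
        """Cyclomatic complexity per line of code; higher means denser decision logic."""
        if lines_of_code == 0:
            raise ValueError("module has no lines of code")
        return cyclomatic / lines_of_code

    print(normalized_complexity(10, 40))   # 0.25
    print(normalized_complexity(10, 200))  # 0.05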

Global Data Complexity

Global data complexity measures the complexity of the global and parameter data within a module. Global data is data that can be accessed by multiple modules in the program. The use of global data introduces external data coupling to the module, which can lead to maintenance problems.

Pathological Complexity

Pathological complexity measures the degree to which a module contains extremely unstructured constructs. This reveals questionable coding practices such as jumping into the middle of loops. Control structures such as these represent the greatest level of risk and should generally be redesigned.


