3.4 REQUIREMENTS FOR INFORMATION

Before I go any further I do need to talk about models and measures. We are going to be discussing requirements during the next few paragraphs, and it is very possible that some "solutions" to those requirements may creep into the text. Now, some people will immediately get annoyed about this, claiming that "off-the-shelf solutions" never work. Let me make my case very clearly. I suggest that business objectives across organizations are very similar. At their most basic, these objectives reduce to one: to maximize output while minimizing cost and waste. I suggest that the problems faced by organizations, especially in the IT industry, are also effectively the same. Consider cost estimation and project control, for example.

The question is, are the solutions to these problems also the same, or at least similar across organizations?

Solutions tend to be detailed and are often particular to individual organizations.

I believe, however, that what are truly portable in many cases are solution models. Metrics or measures, as will be discussed in more detail in Section 2, Building and Implementing a Software Metrics Program, are derived from models and will probably differ between organizations. If I offer a solution to any requirement talked about in this chapter, it is a model. If you decide to use any of these you still have some work to do in order to turn the model into a metric. An approach to this is also described in Section 2, when I describe the Goal/Question/Metric paradigm of Basili and Rombach [Rombach (1)].

To illustrate a requirements-based approach to management information let us look at some of the typical requirements managers have for information. One of the most common requirements for management information is to have productivity figures available. Productivity measures are seen, in many industries other than IT, as performance indicators. You often see export performance expressed in terms of sales over units of time, the implication being that the cost over time is constant. The car industry uses cars produced per work day or month as a measure of factory performance.

Productivity can be defined as the work product divided by the cost to produce that product. In the IT industry, productivity is usually expressed in terms of Lines of Code or Function Points produced, divided by effort in engineering days, months or hours (big hint: person hours are by far the best unit to use).

These types of measures do not tell us everything we wish to know about performance, but they do give us a high-level indicator of performance provided you remember one very important fact. Absolute productivity values, in any terms, do not provide much in the way of useful information. To get any real value out of productivity measures you need to consider the trend over time, and what you want to see, of course, is an improvement over time.
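
As a minimal sketch of this (all release names and figures below are invented purely for illustration), productivity per release and its trend over time could be calculated along these lines, here using Function Points per person hour:

    # Sketch: productivity as size delivered divided by effort, with the
    # trend across releases being more informative than any absolute value.
    # All names and numbers are invented for illustration only.
    releases = [
        # (release, Function Points delivered, person hours of effort)
        ("R1", 120, 2400),
        ("R2", 150, 2700),
        ("R3", 180, 2900),
    ]

    previous = None
    for name, size, effort in releases:
        productivity = size / effort        # Function Points per person hour
        note = ""
        if previous is not None:
            change = (productivity - previous) / previous * 100
            note = f" ({change:+.1f}% against the previous release)"
        print(f"{name}: {productivity:.3f} FP per person hour{note}")
        previous = productivity

The figures themselves are meaningless in isolation; it is the percentage change from release to release that tells you whether things are improving.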

The other thing to remember is that productivity should not be considered in terms of good or bad, only higher or lower values. For example, I would expect a small, in-house development team who work within very loose documentation constraints to achieve much higher productivity values than a team developing safety critical applications in, say, the defense sector of the industry where documentation control is extremely stringent. This does not mean that the small, in-house team is performing better or that the defense application team is performing worse. What I would want to see, over time, is both teams improving their productivity rates.

This does mean that you need to be very careful when you compare productivity across teams or organizations. Not only can the simple productivity measures lead to erroneous comparisons, but you also need to be very aware of differences in the definitions used for the base data elements. There are, for instance, many perfectly justifiable ways of defining a Line of Code. There are also many variants of Function Point Analysis around. As if this wasn't bad enough, it is often instructive to ask how the cost element of the productivity function is defined. My idea of an engineering day can be very different from yours. Now, it may be that someday we will have recognized standards in these areas, but they do not exist today. This means that such comparisons can give totally false impressions. This is why it is much better to use person hours rather than days.

Having said this, such comparisons can be useful if used sensibly, with great care and always in the presence of a large salt cellar from which to take the occasional pinch.

Finally, never measure productivity at the personnel level. This has been said before but I make no excuse for repeating myself. I know that it can be tempting to do this but individual differences and circumstances will probably swamp any meaningful interpretation of the results.

For example, I once knew a programmer analyst who was superb at her job. Her productivity rate over anything other than a carefully selected short period of time was abysmal, for the simple reason that she was used to train and assist the less able team members. This was actually an excellent use of this lady as a resource: she was not only very good at her official job, she was also an excellent teacher. Now, you may feel that you would always be sure to take that kind of circumstance into account, but can you be sure that everyone else would, all of the time? Anyway, measuring at the individual level, and even worse using that data as part of assessment activities, could lead to demotivated staff, lousy team spirit and an abdication of responsibility by management. A decent team leader should never need that kind of individual measure to assess performance.

Productivity measures have their place. Use them wisely and you can get hold of a great deal of useful information that enables you to assess process performance, provided you avoid the traps!

Certain other requirements, like productivity, seem to crop up time and again. Getting a handle on the effectiveness of cost estimation within the organization is an example of such a requirement. This is a relatively simple requirement to satisfy in that you really only need to compare estimates against actual results to see how accurate they were. Few things in Software Metrics are that simple, and this is no exception. Very often such a request hides the real requirement, which is for the implementation of a cost estimation strategy, but more about that later.
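
To make the comparison concrete, a minimal sketch might look like the following. The project figures are invented, and the relative error shown is simply one commonly used way of expressing how far an estimate was from the actual outcome:

    # Sketch: comparing effort estimates against actual results.
    # Projects and hours are invented for illustration only.
    projects = [
        # (project, estimated person hours, actual person hours)
        ("A", 1000, 1250),
        ("B", 1800, 1700),
        ("C", 600, 900),
    ]

    errors = []
    for name, estimated, actual in projects:
        relative_error = abs(actual - estimated) / actual
        errors.append(relative_error)
        print(f"Project {name}: estimated {estimated}h, actual {actual}h, "
              f"relative error {relative_error:.0%}")

    print(f"Mean relative error: {sum(errors) / len(errors):.0%}")

Tracked over a series of projects, a figure like the mean relative error gives a first indication of whether estimation is getting better or worse.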

Another basic requirement for management information is in the area of quality assessment. This can really be fun!

If you ever want to waste half an hour or so, get a group of IT professionals together and ask them to define what is meant by "quality." With all of our experience, this still causes difficulty. In the "old days" we used to have the role of Quality Managers in many organizations. Asking them to define quality really was fun. It wasted lots of time but it was fun. If you try this, after the half hour is over you will still not have a definition of "quality" but you will have seen the whole range of human emotion expressed by normally reasonable men and women. I leave you to draw your own conclusions!

Of course, the problem is not with the definition of quality, it is with the application of that definition in any sensible way.

I am quite happy to accept the common definition that quality is the satisfaction of user or customer requirements. I fully accept that a small hatchback can be a quality car and that this accolade is not reserved for the best of the super cars. I agree that a quality service is one that meets, or exceeds, where cost effective, customer expectations. Does this help me measure quality? Well, it is a start, but that is all. I believe that one must go further than this and that the concept of "quality" must be further subdivided into a number of quality attributes.

Identified quality attributes apply, to a greater or lesser extent, to specific applications and to specific deliverables from the development process, such as designs. This is well worth remembering, as you will see.

Typical quality attributes for IT products include (and I apologize in advance for the number of words that end in "-ity," but that seems to be the nature of the beast): reliability, maintainability, testability, usability, portability, etc. The list can go on and on and on and on...

Defining product or process attributes and then relating these to the quality requirements for the process or product is a very effective start when you need to measure "quality."

Again there are certain points that should be borne in mind. Not all attributes apply to all applications. For an embedded application required for a single mission, such as a space satellite, portability may not be a concern. The level of quality required by specific applications may vary according to the attribute being considered and the application type. For instance, the reliability required in a piece of games software is likely to be lower than that for a life support system. Interestingly, modern games software is extremely reliable.
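
As a small sketch of this idea (the attribute names echo the list above, but the application types and target levels are invented for illustration), a simple quality profile could record which attributes apply to an application, and to what level, before any measures are chosen:

    # Sketch: recording which quality attributes apply to an application,
    # and to what level, before deciding how each one will be measured.
    # Application types and target levels are illustrative only.
    quality_profiles = {
        "single-mission satellite software": {
            "reliability": "very high",
            "maintainability": "high",
            "portability": None,        # not a concern for a one-off mission
        },
        "games software": {
            "reliability": "medium",
            "usability": "very high",
            "portability": "high",
        },
    }

    for application, profile in quality_profiles.items():
        print(application)
        for attribute, level in profile.items():
            print(f"  {attribute}: {level if level else 'not applicable'}")

Even a simple profile like this forces the question "which attributes matter here, and how much?" to be answered explicitly for each application type.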

Defining quality in terms of specific attributes may seem a difficult task, but you would be pleasantly surprised at how clear most managers are in their requirements when asked to talk about the kinds of information that they would find beneficial. If you would like a good starting point, I would suggest that you look at a document, item IS9126 in the reference list, that is available from the International Organization for Standardization, ISO (www.iso.ch). ISO and the International Electrotechnical Commission, the IEC (www.iec.ch), have been working on standards for quality attributes for some time now, and this document is the result of their work. Their definitions of the various quality attributes that they have deemed to be important are certainly a valid starting point, although you may have to do some tailoring to suit your own organization's needs.

My only major disagreement with the ISO attribute definitions is in the area of Maintainability. Within the IT industry there is a great tendency to use maintenance, and hence maintainability, as a catchall for corrective, adaptive and perfective maintenance as defined by Swanson (1) many years ago.

While this made sense at the time, when applications were somewhat simpler, it does seem to cause us problems now. Based on research carried out by the Inland Revenue in the United Kingdom, and supported by discussions with many IT managers, most IT functions seem to devote about 60% to 70% of their overall effort to maintenance according to this definition, but most of this effort is expended on enhancing existing systems in line with new or additional user requirements. This is a direct result of the IT industry adopting, almost universally, a strategy of sequential builds or releases against a generic product or system. The production of these new builds or releases involves all the stages of most of the standard lifecycle models used for the development of new systems, with, of course, the added complication of integrating new and changed functionality with the core system.

This being the case, it would seem to make sense to treat enhancements to a system separately from "bug fixing" or corrective maintenance, and I feel that maintainability and enhanceability should both be considered as top-level quality attributes in terms of management information requirements. Maintainability is then defined in terms of corrective maintenance only. Perfective maintenance should be included under enhanceability, as it is generally carried out as part of a release development project rather than as part of any patching work.
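
A minimal sketch of that split (the categories follow Swanson's terms; the work records and hours are invented for illustration) could roll effort up into the two indicators like this:

    # Sketch: corrective work feeds a maintainability indicator, while
    # adaptive and perfective work, done as part of releases, feeds an
    # enhanceability indicator. Records and hours are invented examples.
    work_records = [
        # (description, category, person hours)
        ("fix fault in billing calculation", "corrective", 40),
        ("support new tax rules",            "adaptive",   320),
        ("restructure reporting module",     "perfective", 160),
    ]

    effort = {"maintainability": 0, "enhanceability": 0}
    for _, category, hours in work_records:
        if category == "corrective":
            effort["maintainability"] += hours
        else:                           # adaptive and perfective work
            effort["enhanceability"] += hours

    for indicator, hours in effort.items():
        print(f"{indicator}: {hours} person hours")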


