3.1 WHAT IS MANAGEMENT INFORMATION?


Before we look at these aspects of management information, I would like to discuss some important background issues. Management information, and the systems developed to supply it, whether or not they are being developed as part of a Software Metrics initiative, are often thought of as single entities that serve the in-house needs of a relatively small group of individuals, i.e., the management. This can be a very dangerous viewpoint because MIS applications are often large and complex in two dimensions. First, they serve, or should serve, many different levels of customer, from the managing director or CEO down to the team leader. Second, they have to satisfy a large number of different requirements within each customer set, while still recognizing the similarities that cross set boundaries if they are to avoid massive data redundancy. Just to make life interesting, they may also have to deal with ad hoc requirements.

Now this may all sound as though it has little to do with a Software Metrics program but let me stress that such a program is a Management Information System. Whether it operates through fancy computer systems or good old pen and paper, it still has to deal with all of the complexities outlined above.

The first dimension tends to be a function of your own organizational structure, and it will be for you and your own management group to determine the levels at which your information system should operate. The second dimension concerns the different requirements placed on that system by its users or customers, which means considering the use to which the information will be put. Many different, specific requirements will be placed on the system, but they all have something in common: they should satisfy a need for information. Consequently, the aspect of a Software Metrics program that we are talking about concerns the provision of information. Output from the system should be related to a requirement for information and should not be generated at random. Note that requirements for information generally, if not always, relate to an attribute of either a product or a process.

To make this clear we can consider a couple of examples. A manager of a number of product teams may wish to know how those teams are performing and how that performance is changing over time. This is a basic information requirement if the teams are to be managed effectively, and it relates directly to the process attribute we call productivity: in other words, what do I get out for what I put in?
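As a minimal sketch of this "out for in" idea, productivity can be computed as a simple output-over-input ratio. The team names, function point counts, and effort figures below are invented for illustration; the book does not prescribe these particular measures:

```python
# Hypothetical figures: output measured in delivered function points (FP),
# input measured in person-months of effort.
quarterly = {
    "Team A": {"function_points": 120, "person_months": 10},
    "Team B": {"function_points": 90, "person_months": 12},
}

# Productivity = what you get out / what you put in.
productivity = {
    team: d["function_points"] / d["person_months"]
    for team, d in quarterly.items()
}

for team, rate in productivity.items():
    print(f"{team}: {rate:.1f} FP per person-month")
```

The choice of function points and person-months is only one possibility; the structure of the calculation is the same whatever output and input measures your organization settles on.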

On the other hand, a product manager may wish to know how reliable his or her software system is during the first three months of live use, across a number of releases. This information can be used as a direct quality measure of the product, or as an indirect quality measure of the process that produces it, but, however it is used, it relates to the product attribute of reliability.

Managing an attribute means recognizing that there are two information requirements associated with it.

For any attribute, there is a requirement that it be monitored. For example, if a manager feels that group productivity is important, then it needs to be measured, with periodic reports fed back to the manager so that the information can be used. This is, traditionally, the type of requirement that Management Information Systems seek to satisfy.
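A monitoring requirement of this kind amounts to periodic measurement plus trend reporting. The figures below are invented; the point is only that measured values are fed back over time so the manager sees direction as well as level:

```python
# Invented quarterly productivity measurements (e.g., FP per person-month).
history = [10.8, 11.2, 11.9, 12.4]

def trend(series):
    """Per-period changes, so direction of movement is visible at a glance."""
    return [round(later - earlier, 2)
            for earlier, later in zip(series, series[1:])]

changes = trend(history)
print("Quarter-on-quarter change:", changes)
```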

A second requirement for any attribute is that it be predicted, which can be much more difficult. For example, given a new product group within a division, how confident can we be of predicting the productivity level for that group? Unless we can do this we may find our business in danger, especially if we are operating in a fixed-price environment.

Another example can be drawn from the area of quality. One of the biggest changes in the software industry over the last few years has been the rapid growth in maturity of our end users. They are now much less willing to accept loose specifications of quality and, to be honest, it is not only the fixed-price contractors who have to cope with the need to predict values for various attributes. For example, one of the largest organizations in the UK, an organization with a massive IT procurement budget, appears to be moving towards contractual specifications of reliability for software systems.

This means that we, as software developers, have to be much more capable than we have been in the past at predicting value levels for attributes.

How we do this needs a great deal of thought. Should we attempt to use theoretical models of, say, reliability growth? The problem is that, while various reliability growth models may work for certain environments, it does not seem possible to identify the environment attributes that would let us match a specific model to our own situation with any degree of certainty. In my opinion, the only alternative is to use empirical models developed from data collected in-house. This implies that our Management Information System should collect the data that would enable the derivation of such a model. Obviously, it is difficult to specify data collection requirements if the model that will use the data is not yet defined; however, a pragmatic and educated guess as to what will be required is often a valid starting point.
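As one possible shape such an empirical model might take, the sketch below fits a straight line, by ordinary least squares, to in-house data relating system size to defects reported in the first three months of live use, and then uses the fitted line to predict a new release. All of the figures, and the choice of a linear model, are assumptions made purely for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b * x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Invented in-house data: system size (KLOC) vs. defects reported in the
# first three months of live use.
sizes = [10, 25, 40, 60]
defects = [12, 30, 55, 80]

a, b = fit_line(sizes, defects)
predicted = a + b * 50  # prediction for a hypothetical 50 KLOC release
```

A real model would be derived from your own collected data and would almost certainly involve more than one explanatory variable; the point is that the data collection has to come first.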

From these two examples you may appreciate some of the problems that arise when you move into the area of predicting process or product attributes; the provision of management information is not quite as simple as we might first think. To summarize: requirements for information can be many, varied, and complex. Attributes of both the process and its products are fundamental to this aspect of Software Metrics. Attributes, once identified and defined, generate two types of requirement: the need to monitor, and the need to predict, values for the attribute.

Before continuing and having mentioned reliability, I would like to digress slightly. As far as attribute definitions are concerned, you might like to watch out for some of the definitions of reliability that are still floating around.

These generally start by defining reliability as "the probability that a software system will..." This is not a definition of reliability, because anything expressed in terms of a probability is in fact a prediction. The probability that a ten-year-old male child will grow to a height of six feet does not define the attribute height. This may seem a trivial point, but it shows the need for rigor in Software Metrics, as a flawed definition can cost you time and money. Imagine going to senior management with a reliability measurement proposal only to have it rejected because the basic definition of reliability was seen to be flawed. If you think such a thing could never happen, let me assure you that I have seen it happen.






Software Metrics: Best Practices for Successful IT Management
ISBN: 1931332266
Year: 2003
Pages: 151
Authors: Paul Goodman
