15.1 SEI ASSESSMENT

We should realize that both these techniques are essentially the same, differing only in emphasis, so I will briefly describe the process behind an SEI assessment.

The first step is a management commitment to the assessment and, most importantly, to the improvement plan that results from the assessment. Next, a small team from the organization is trained by the SEI or one of its accredited trainers. Notice that we are essentially talking about a self-assessment here, but one done to a recognized and accepted formula.

Next, plans are prepared to ensure that the assessment can be completed rapidly, typically within one or two weeks, and projects or areas are selected for that assessment. The selected project or area teams then fill out a questionnaire of, currently, 101 yes/no questions.
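
To make the mechanics concrete, here is a minimal sketch of how such yes/no returns might be tallied by key process area before the interviews begin. The questions, area names and scoring shown are hypothetical illustrations of my own, not the actual SEI instrument or its scoring rules.

```python
# A minimal sketch of tallying yes/no questionnaire responses by key
# process area. The questions and area names are hypothetical; the real
# SEI instrument and its scoring rules differ.
from collections import defaultdict

responses = [
    ("Project Planning", "Is there a documented planning process?", True),
    ("Project Planning", "Are size estimates recorded?", False),
    ("Quality Assurance", "Is there an independent SQA function?", True),
]

def tally_by_area(rows):
    """Return {area: (yes_count, total)} from (area, question, answer) rows."""
    counts = defaultdict(lambda: [0, 0])
    for area, _question, answered_yes in rows:
        counts[area][1] += 1
        counts[area][0] += answered_yes
    return {area: tuple(c) for area, c in counts.items()}

for area, (yes, total) in sorted(tally_by_area(responses).items()):
    print(f"{area}: {yes}/{total} affirmative")
```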

The assessment team then corroborate the questionnaire data and delve much deeper through a series of intensive interviews and group discussions. Note that the questionnaire is not the whole assessment: it is these interviews that form the key to the SEI assessment technique. Penultimately, results are presented to management together with a set of recommendations for improvement.

The last step is for the organization to prepare and implement action plans to realize those recommendations, and that is seen as part of the assessment process, not separate from it. This is the key point: the main deliverable of the assessment is the improvement plan. The great value of the model is that, once you have evaluated where you are within its parameters, the model itself indicates the areas or key processes that you should address next.

But what about the Maturity Model? This usually forms part of the management sign-up discussions, and a score against it also results from the assessment. It is simply a five-level representation of the maturity of a process, where the levels range from initial, or ad hoc, through repeatable, defined and managed to optimizing. See Figure 15.1.

LEVEL        PROCESS CHARACTERISTICS
Optimizing   Process improvement is institutionalized
Managed      Product and process are quantitatively controlled
Defined      Software engineering and management processes defined and integrated
Repeatable   Project management system in place; performance is repeatable
Initial      Process is informal and unpredictable

Figure 15.1: The CMM Process Maturity Framework
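
For readers who prefer their models executable, the five levels of Figure 15.1 translate naturally into an ordered type. The sketch below is simply a restatement of the figure in Python, not part of any SEI tooling.

```python
# The five CMM maturity levels of Figure 15.1 as an ordered enumeration.
from enum import IntEnum

class MaturityLevel(IntEnum):
    INITIAL = 1     # process is informal and unpredictable
    REPEATABLE = 2  # project management in place; performance is repeatable
    DEFINED = 3     # engineering and management processes defined and integrated
    MANAGED = 4     # product and process are quantitatively controlled
    OPTIMIZING = 5  # process improvement is institutionalized

# Ordering comes for free: an organization at DEFINED outranks one at REPEATABLE.
assert MaturityLevel.DEFINED > MaturityLevel.REPEATABLE
```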

It is this Maturity Model that most people have picked up on, although some have also latched on to the questionnaire. Most do not go for a full understanding of the SEI assessment technique, which, to judge by the experience of part of the Hughes Aircraft organization, is a shame. Having bought into the SEI assessment mechanism and followed through on its recommendations, this group moved from level 2 to level 3. It took them two years to do it and cost about $450,000. Why bother?

Well, they estimated savings of $2 million PER ANNUM! Even if they have overestimated, I do not believe that they have done so by much.
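
The arithmetic deserves to be spelled out, because the payback on the quoted figures is startling:

```python
# Back-of-envelope payback on the Hughes figures quoted above.
cost = 450_000             # assessment and improvement cost over two years ($)
annual_saving = 2_000_000  # estimated saving per annum ($)

payback_months = cost / annual_saving * 12
print(f"Payback period: {payback_months:.1f} months")  # about 2.7 months
# Even if the savings were overestimated fourfold, payback would still
# arrive within the first year.
```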

How they did it is described by Humphrey (1) in IEEE Software, July 1991. This paper, together with some of the discussion papers that follow it, both supporting and criticizing the assessment process, makes interesting reading.

We all know that there are few perfect solutions, so what are the problems with the SEI assessment process and the associated Software Capability Evaluation? We should also ask why the Maturity Model has generated so much interest and how it can be used.

The first problem is an interesting one. How would you feel if your customers insisted on your development process being assessed? Obviously there has been, and continues to be, some resistance to the idea of having to suffer a Software Capability Evaluation. However, when the organization insisting on that evaluation is the US Department of Defense, and you want a piece of their expenditure to come in your direction, you may well feel it judicious to swallow your pride. The reality, I feel, is that the industry is moving towards a situation where customers will demand that the capability of their suppliers be expressed in recognizable forms, and the SEI approach is certainly a step in this direction. What we need to be very careful about is that we do not allow ill-conceived and possibly harmful initiatives to be foisted on the industry. This is more probable if suppliers of software seek to resist these initiatives; we stand a much greater chance of getting workable and helpful assessment mechanisms if suppliers and customers, professional developers and academics cooperate.

It must also be said that the SEI's two-pronged approach, which essentially offers the same techniques for supplier evaluation and self-assessment, encourages such cooperation and has contributed to the adoption of their mechanisms.

A more fundamental concern, and it is no more than a concern on my part, lies with the success of the SEI approach and the maturity model, not with the approach and the model themselves. The reason for this concern is that more and more publicity is being given to the maturity model and its five levels but little understanding exists of the SEI assessment technique as a whole, and that whole is important if the maturity model is to be understood in context.

We have had this problem with the Goal, Question, Metric paradigm, and I must admit I do not see an easy solution. The fundamental problem is that the maturity model is a very convenient mechanism for describing the capability of an organization. Like all models it sacrifices a massive amount of detail, but people can relate to it, especially more senior managers.

Things that work, and in this context the maturity model does work, tend to get used. We, as professionals, must remember that much more lies behind the maturity model, and we must use it correctly. The SEI have attempted to address this by accrediting assessors, but this is a difficult process to complete, especially for non-US-based professionals. Hopefully, we will see a wider form of accreditation in the future.

The SEI have also developed additional, supporting models for other aspects of IT management, including the People CMM, focused on human resource management and development, the Personal Software Process, focused on the development of individual software engineers, and still more.

The SEI have now drawn together and updated a number of the models they have developed, releasing the Capability Maturity Model Integration for Systems Engineering, Software Engineering, and Integrated Product and Process Development, or CMMI-SE/SW/IPPD for short (which we will call the CMM-I). Measurement is a Key Process Area within the CMM-I; progress indeed.

Other topics that seem to be generating interest today in the area of metrication are measures for enhanceability and user satisfaction. Almost all of us are living with systems that we expected to replace years ago, but still they go on. Not only do they still exist, they are also continually enhanced. I do not believe that anyone has yet got to a system in the same state as the axe that had three new heads and four new shafts and was still as good as the day it was bought, but who knows? If, as we now realize is possible, systems can have vastly extended lifespans, then being able to control and manage the enhanceability of those systems is important. Elsewhere in this book a simple model of an enhanceability measure is proposed, but this topic is one that merits more discussion within our community.

As far as user satisfaction is concerned, it should come as no surprise that this is seen as important. Satisfied users provide repeat business, they tell other potential customers and so generate new business, and a satisfied customer is much less likely to cause us pain on a daily basis. But how do you know if your customer is satisfied? The obvious answer is to ask them, but to ask them in a way that yields an objective answer, rather than by means of a quick phone call.

There are two points to bear in mind about user satisfaction assessment. First, your business environment will heavily influence how you address this problem. For instance, if you are cooperating closely with a client on a large project, you can build user satisfaction evaluations into your project control mechanisms so that the user has a chance to express any concerns they have. Remember that you must drive this, because many problems are not large enough to make the user or customer actively complain, yet it only takes a few of these 'little' problems to leave a nasty taste for the user. Alternatively, you may be maintaining a system for a user; in this case an annual meeting with a structured format can work well for assessing user or customer satisfaction. Finally, you may be selling a product to many customers. In this situation you may not be able to ask all your customers how they feel, but you can sample.
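
Where you can only sample, even a very simple statistical summary beats the quick phone call. Below is a minimal sketch, assuming a hypothetical 1-to-5 satisfaction scale and randomly generated scores standing in for real survey returns.

```python
# A minimal sketch of sampling customers and summarizing satisfaction
# scores on a 1-to-5 scale. The customer list and scores are hypothetical.
import random
import statistics

customers = list(range(1, 201))         # 200 customers, numbered 1..200
sampled = random.sample(customers, 30)  # survey a random sample of 30

# In practice these scores would come from the structured survey returns.
scores = [random.randint(1, 5) for _ in sampled]

mean = statistics.mean(scores)
spread = statistics.stdev(scores)
print(f"Sampled {len(scores)} of {len(customers)} customers: "
      f"mean satisfaction {mean:.2f} (s.d. {spread:.2f})")
```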

The second point to remember is that user satisfaction does not depend solely on satisfying their expressed requirements. A classic example of this could be the organization that has a service level agreement (SLA) with a client requiring an engineer on site within one hour of a problem being reported and 99% of all problems to be resolved within one day. If a supplier organization achieved this tight constraint, they would be surprised if the customer cancelled the contract. But what if the engineers always arrived within an hour yet were surly and condescending when on the client site? What if the client had trouble reporting faults because the idiots on the switchboard always routed the call to the wrong department? What if every time a fault was fixed it seemed to trigger ten more? Unless you regularly make sure that the client is satisfied, the only way you will become aware of these problems is when the client walks away from you.
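
Checking the expressed terms of such an SLA is the easy, mechanical part, which is exactly why it can lull a supplier into false confidence. A minimal sketch, using hypothetical ticket records:

```python
# A sketch of checking the two expressed SLA terms quoted above:
# an engineer on site within one hour, and 99% of problems resolved
# within one day. The ticket records are hypothetical.
from dataclasses import dataclass

@dataclass
class Ticket:
    response_hours: float    # time until an engineer was on site
    resolution_hours: float  # time until the problem was resolved

tickets = [Ticket(0.5, 4.0), Ticket(0.9, 20.0),
           Ticket(0.7, 30.0), Ticket(0.4, 6.0)]

all_on_site_in_hour = all(t.response_hours <= 1.0 for t in tickets)
resolved_in_day = sum(t.resolution_hours <= 24.0 for t in tickets) / len(tickets)

print(f"All responses within 1 hour: {all_on_site_in_hour}")
print(f"Resolved within 1 day: {resolved_in_day:.0%} (target 99%)")
# Note what this report cannot show: surly engineers, misrouted calls,
# or fixes that breed new faults. Those only surface if you ask the client.
```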

The most interesting trend from my point of view is a move away from measurement per se. This, I feel, is a real indication that we are maturing as an industry. More and more organizations are realizing that measurement for its own sake will do little to solve their problems. Only a quality management, and notice I say management, not improvement, approach towards process optimization will do this. Of course, measurement is a cornerstone of this approach. This shift of emphasis opens up a whole new topic and this, I believe, will be one of the most interesting areas to be involved in over the next few years. The leaders, of course, will be the ones who get on and do it. They will reap the benefits.

Even with this trend towards the implementation of true process improvement there will still be areas of particular interest to the Software Metrics community and I would like to consider some of those now. We can take this opportunity to look at some techniques that are perhaps less well known than they should be and to consider their possible impact on the area we call Software Metrics.

The first area I would like to consider is that of formalization within Software Metrics. I remember teaching an evening class some years ago, the idea being to introduce a group of students that ranged from storemen to army officers to the intricacies of computers. My starting point was to explain that most individuals are, in effect, lazy, and that the use of the term "computer" was actually incorrect: what we should talk about is the computer system. This gave me a good lead-in to an explanation of operating systems, programs, hardware and how these various elements hang together to make up something that can do useful work.

The point of this example is to illustrate our misuse of words. This misuse continues with the term "Software Metrics." Technically speaking, a metric is a unit of measure, such as the amp, the nautical mile or the parsec. In software terms an example of a metric would be a line of code, provided it were clearly defined, or the McCabe metric known as cyclomatic complexity. And that is it: a metric is no more and no less than that unit of measure, and there is a discipline concerned with the definition and use of such things. That discipline is not what we know today as Software Metrics.
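
Since cyclomatic complexity is mentioned here and again later, it is worth recording its definition. For a program's control-flow graph with E edges, N nodes and P connected components (one for a single routine), McCabe defines:

```latex
V(G) = E - N + 2P
```

For a single, structured routine this reduces to the number of binary decision points plus one, which is why it is often counted in practice simply by tallying branches.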

Now you may ask, what is in a name? I must admit that I have some sympathy for this view, believing that if you want to call our area of interest Software Metrics then, provided it is seeking to promote best practice, you call it Software Metrics. You can call it statistics if you want! However, I do recognize that laziness in the use of terms can be symptomatic of a deeper malaise. There is a tendency within Software Metrics, and I will continue to use that term in this book to avoid any more confusion, to lean too heavily towards pragmatism.

Now, bearing in mind one of the main themes of this book, that pragmatic solutions are often perfectly acceptable, that last statement may come as a surprise. Pragmatism is all very well, but we must always remember that there are certain rules you do not buck. For example, there is a great temptation to add simple measures together to produce a more "complete" indicator of some characteristic. Let us say that we recognize that the simple line-of-code measure does not totally reflect the effort needed to make an enhancement to a section of code. Furthermore, assume that we consider the McCabe cyclomatic complexity measure a valid measure of complexity. The reasoning then goes that by combining LOC with cyclomatic complexity we will have a better measure of enhancement effort, so we simply add them.

The two basic measures use different scales and cannot be added in this way; it is simply against the rules of mathematics.

I once had the great pleasure of attending a "mathematical nightclub." This was an event staged by a group of tutors for the benefit of newcomers to the world of mathematics. It was here that I saw it "proved" that 2 equals 3. I also learned how to make a reasonable amount of money at any party, but that's another story. 2 equals 3 only if you miss the sleight of hand that makes you skim over the line in the proof that divides by zero! Equally, excuse the pun, if you 'add' LOC and cyclomatic complexity you are performing a sleight of hand, because you cannot add quantities of different units together. The reason I have taken so long over this story is that I come across this addition of cyclomatic complexity and LOC about every twelve months! The point is that you can only get away with this type of thing if you do not apply any formalism in your definition and use of Software Metrics.
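
One way to make the rule unbreakable, rather than merely remembered, is to give each measure its own type, so that the sleight of hand fails loudly. A minimal sketch of the idea; the class names are my own illustration, not from any metrics library:

```python
# A minimal sketch of unit-safe measures: adding unlike units fails loudly.
class Measure:
    unit = "abstract"

    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if type(self) is not type(other):
            raise TypeError(f"cannot add {other.unit} to {self.unit}: "
                            "different units")
        return type(self)(self.value + other.value)

class LinesOfCode(Measure):
    unit = "LOC"

class CyclomaticComplexity(Measure):
    unit = "V(G)"

total = LinesOfCode(120) + LinesOfCode(80)  # fine: same unit
print(total.value)                          # 200

try:
    LinesOfCode(120) + CyclomaticComplexity(7)
except TypeError as err:
    print(err)                              # cannot add V(G) to LOC: ...
```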

The challenge facing the software engineering sector of industry is to adapt to this greater degree of formality in a positive way. One concern that is sometimes expressed is that standards, such as those defined by the International Organization for Standardization (ISO, www.iso.org), may not be workable within particular organizations or areas; it should be noted, however, that the current trend, at least as far as Software Metrics is concerned, is towards guideline standards rather than prescriptive mechanisms. Having said that, the realities of business, and particularly the demands of customers, may force us to accept standardization more readily than we otherwise might.


