Mistakes in Analysis, Architecture, Design, Implementation, and Testing

Transitioning from a waterfall approach to an iterative approach requires changes in the working procedures and mindset of all team members: analysts, developers, architects, and testers alike. Among the common mistakes, we find the following:

  • Creating too many use cases makes the requirements incomprehensible and is a sign of doing functional decomposition using the use-case symbol.

  • Analysis-paralysis prevents effective iterative development and is caused by getting hung up on details.

  • Including design decisions in your requirements forces design decisions to be made prematurely.

  • Not having stakeholder buy-in on requirements leads you to implement a system that is likely to be rejected or radically changed later in the project.

  • "Not invented here" mentality normally increases development and maintenance cost, as well as reduces the quality of the solution.

  • Ending Elaboration before the architecture is sufficiently stable causes excessive rework, resulting in cost overruns and lower quality.

  • Focusing on inspections instead of executable software causes an inefficient quality assurance process and creates a focus on byproducts of software development rather than on software itself.

Let's look at some of these mistakes and examine ways to avoid them.

Creating Too Many Use Cases

A common trap is breaking down use cases into snippets of functionality so small that you lose the benefits of the use-case paradigm. Typically, a 6-month project with 8 people would involve roughly 10 to 30 use cases. Decomposing the functions much further than this doesn't do you or anybody else any good.

Let's first look at what you should be trying to achieve with use cases. Each use case should do the following:

  • Describe an interaction that's meaningful to users of the system and has a measurable value to them.

  • Describe a complete interaction or flow of events between the system and the user. For instance, for an ATM system, making a withdrawal is a complete interaction; just entering your PIN, which is one of the steps in this flow, isn't.

  • Drive the design effort by clarifying how elements collaborate to address a real user need. Component modeling alone often leaves glaring holes in a design; focusing instead on achieving desired user functionality ensures that the design works and is complete.

  • Drive the testing effort by describing common ways that users want to interact with the system and ways that can be tested.

  • Serve as a management tool by defining a set of functionality that management can assign to one or more team members to work on.

When you have too many use cases, here's what happens:

  • It's hard for users to look at the use cases and state whether they provide value to them, since the approach to delivering functionality is so piecemeal.

  • Since the use cases will describe a piecemeal functionality outside the context of useful user scenarios, the design effort becomes focused on delivering an operation or two, rather than on the necessary collaboration for addressing real user needs.

  • The testing effort becomes bogged down in having to combine many test cases (derived from the use cases) to create meaningful tests.

  • People assigned to work on different use cases continually run into each other and get in each other's way, since the use cases are so closely coupled.

To avoid this, look for the following signs indicating that your use cases are broken down too far:

  • You can't measure the value that a use case provides to the user. This indicates that it represents too small a slice of a complete interaction.

  • Use case A is always followed by use cases B and C. This indicates that they're probably one use case and should be merged.

  • Two or more use cases have almost the same use-case description, only with smaller variations. Again, this indicates that they're probably one use case and should be merged.

  • There are more "includes" and "generalizations" [3] than use cases. You should in principle never have more than one level of abstraction, and definitely never more than two levels.

    [3] "Includes" and "generalizations" are relationships between use cases used to structure a use-case model. See Bittner 2003 for a discussion on when to use these relationships.

  • You can't come up with use-case descriptions (flows of events) that are more than a couple of paragraphs in length.

Having Analysis-Paralysis

The RUP is an iterative process. This means that you will have time later on to detail the requirements. Do not spend too much time detailing the requirements in the Inception phase, because you need to come up with an executable architecture quickly in order to eliminate key risks early on. And since you will baseline the executable architecture early in the next phase, Elaboration, your goal is to finish up the work in the Inception phase and move on.

The objectives of Inception are to define the scope and get an understanding of the most important requirements. You will be done with the requirements part of Inception once you have

  • Compiled a reasonably complete list of the expected actors and use cases. It is fine to make smaller adjustments to the set of use cases later in the project, but typically not major changes. For example, if you have identified 20 use cases in Inception, in Elaboration you might add 1 and remove another.

  • Detailed the essential or critical use cases (roughly 20 to 30 percent of all the use cases) so that you have a fairly solid idea of those use cases, ideally with accompanying screen prototypes or something similar. Continuing with the example, between 4 and 7 of the 20 use cases should have two-page descriptions; the rest need only a couple of paragraphs.

It should also be noted that small, co-located teams might choose to do less detailed documentation of use cases to save time.

Including Design Decisions in Your Requirements

Especially when writing requirements in the form of use cases, it is common for analysts to include design decisions, such as the layout of the GUI or the implementation of various algorithms. This may cause bad design decisions to be locked in, as well as distract users and analysts from the objective of capturing and agreeing on requirements.

The following is an example of a requirement that includes design decisions:

The system searches through the database using start time and end time as search keys to verify which conference rooms are available at the indicated time. Available conference rooms are listed in green, and unavailable conference rooms are listed in red.

Instead, you should remove design decisions such as how things are presented and what search algorithms will be used:

The system verifies which conference rooms are available during the indicated time. The system presents a list of all conference rooms in which available and unavailable rooms are graphically differentiated, making it easy to detect which rooms are available and which are not.

Not Having Stakeholder Buy-In on Requirements

As you detail the vision, use cases, and nonfunctional requirements, you need to get buy-in and involvement from the whole project team, including customers, developers, and testers, to ensure that these are the right requirements. Failure to do so may mean that you are making investments toward requirements that will need to be radically changed, causing unnecessary rework. This does not mean that you should develop all requirements up-front or that all communication should be through requirements specifications, but you need to establish a common understanding of what system is being developed.

"Not Invented Here" Mentality

Architectures, or parts of architectures, are often reusable. Integrated Development Environments (IDEs) contain architectural mechanisms, patterns are available through the Web and through books, [4] you or other people in your company may have built similar types of systems before and know what works and what doesn't, and you may find Commercial Off-the-Shelf (COTS) components or packaged software that fits your needs. Whenever possible, you should strive to reuse solutions that work, whether they are components, patterns, processes, test plans, or other artifacts.

[4] See Gamma 1995.

Unfortunately, some developers are extremely skeptical about using other, potentially suboptimal, solutions and prefer to develop a solution from scratch themselves. What is important to remember is that "Perfect is the enemy of good" (as we said earlier in the section Allowing Too Many Changes Late in the Project). You may be able to build a better solution yourself, but at what price?

Especially as an architect, you need to make sure that architectural patterns that have been developed during the Elaboration phase are not reinvented by various project members during later phases. This can be achieved by having training and design reviews. It is important that you properly communicate the availability of patterns and architectural mechanisms to all project members.

When considering the reuse of third-party components, here are a few considerations to make:

  • You need to understand what requirements the components must meet. Then you need to see if the reusable component(s) fit those requirements or can be made to fit them. If not, you need to make a tradeoff: What are the cost savings versus how much you need to compromise on the requirements?

  • You need to assess the quality of the reusable component. Is it properly documented and tested? If not, the cost may become higher than that of writing something from scratch.

  • Will a third party maintain the component, or do you need to maintain it?

  • You need to look into legal rights and potential royalties. Is it economically feasible to use a third-party component? Do you still fully own the final product? What are your liabilities?

As you can see, it is not always obvious that you should reuse existing components, but the upside of reuse is in general much greater than the downside. Also be aware that projects often underestimate the complexity of doing something themselves and believe they can do everything better and faster on their own. Many years ago, we attended a conference where the speaker asked one half of the room to write down how long they thought it would take them to solve a certain problem. The other half of the room was asked to write down how long they thought it would take the person next to them to solve that same problem. Amazingly enough, the average person thought he or she could solve the problem in almost half the time it would take the other person.

Ending Elaboration Before the Architecture Is Sufficiently Stable

There are several benefits to baselining the executable architecture by the end of Elaboration, including

  • Allowing you to mitigate technical risks.

  • Providing you with a stable foundation upon which to build the system.

  • Enabling effective reuse of large-scale components.

  • Facilitating the introduction of more developers and more junior developers, since many of the most difficult design issues have been resolved and developers can work within well-defined areas.

  • Allowing you to accurately estimate how long it will take to complete the project.

  • Allowing much more parallel work.

By rushing ahead and initiating Construction before you have a designed, implemented, and tested architecture, you will lose out on some or all of these benefits, ultimately causing excessive rework and resulting in cost overruns and lower quality. You need to decide whether to add another iteration in Elaboration, and hence accept a delay in the project, or whether to move into Construction and build your application on a brittle architecture. The following provides some guidance on when to choose which approach:

  • If you add an iteration to Elaboration, you are less likely to run into major rework due to architectural issues later in the project. The delay introduced by adding an iteration can be recouped by, for example, cutting scope or quality levels. As an alternative, you can accept that the delay in getting the architecture right will cause a delay in the delivery of the final product. Large projects, architecturally complex systems, unprecedented systems, or systems with a lot of new technology should consider choosing this alternative.

  • If you choose to move into Construction in spite of the architecture being unstable, you risk having to rework the architecture at a later stage, causing delays to the project that could be substantial. You may also lose out on the other benefits mentioned above that come from having a baselined architecture. Smaller projects, or projects with limited architecture, familiar technology, or familiar domains, should consider choosing this alternative.

Note that the more unstable the architecture is, the riskier the second approach becomes. Note also that the Extreme Programming (XP) process in general favors the latter approach, which also explains why XP is more suitable for small to medium-sized projects with limited architectural risks. XP assumes that refactoring will allow you to evolve to a good architecture without causing major or unnecessary rework. We believe this is a risky assumption to make. This issue is probably one of the larger points of disagreement between the RUP and XP.

But how do you know that you are sufficiently done with the architecture? There will still be a need for some rework of the architecture, but it should be in the ballpark of 5 to 15 percent of the architecture (measured, for example, in changes to defined interfaces of components and subsystems). The best indicator of whether you are done is the rate of change of code and interfaces. Toward the end of Elaboration, the rate of change in interfaces (operations defined for key components and subsystems) should diminish. By extrapolating the curve in Figure 13.6, you can assess when you have reached the stage where future changes will be less than 15 percent.

Figure 13.6. Rate of Change in Interfaces Indicates When Elaboration Can Be Ended. In Chart A we see a clear diminishing and consistent trend in the rate of change of interfaces, making us comfortable that the architecture is stabilizing. In Chart B, we do not see such a clear trend, alerting us that we are not moving toward a stable architecture. Elaboration cannot be completed.

[Figure 13.6: graphics/13fig06.gif]

When doing iterative development, you will find the rate of change in code, requirements, designs, and so on to be an excellent measure of completion. Ask yourself: Are requirements and architecture stable toward the end of Elaboration? Is the code stable toward the end of Construction?
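
To make this measure concrete, here is a minimal sketch of how the rate of change in interfaces might be tracked between iteration baselines. It is purely illustrative and not prescribed by the RUP: the `interface_change_rate` function, the baseline data, and the operation signatures are all hypothetical; only the 15 percent threshold comes from the discussion above.

```python
# Minimal sketch: track the rate of change in component interfaces across
# Elaboration iterations. The data and signatures below are illustrative
# assumptions; only the 15 percent threshold comes from the text above.

def interface_change_rate(previous: dict, current: dict) -> float:
    """Percentage of interface operations added, removed, or modified between
    two baselines. Each baseline maps an interface name to the set of
    operation signatures it exposes."""
    total_ops = changed_ops = 0
    for name in set(previous) | set(current):
        old_ops = previous.get(name, set())
        new_ops = current.get(name, set())
        total_ops += len(old_ops | new_ops)
        changed_ops += len(old_ops ^ new_ops)  # symmetric difference = added + removed
    return 100.0 * changed_ops / total_ops if total_ops else 0.0


# Hypothetical baselines captured at the end of three Elaboration iterations.
baselines = [
    {"RoomFinder": {"findRooms(start,end)", "reserve(room)"},
     "Billing":    {"charge(account)"}},
    {"RoomFinder": {"findRooms(start,end)", "reserve(room,user)"},
     "Billing":    {"charge(account)", "refund(account)"}},
    {"RoomFinder": {"findRooms(start,end)", "reserve(room,user)"},
     "Billing":    {"charge(account)", "refund(account)"}},
]

rates = [interface_change_rate(a, b) for a, b in zip(baselines, baselines[1:])]
print("Change rate per iteration:", [f"{r:.0f}%" for r in rates])
if rates and rates[-1] < 15.0:
    print("Interfaces are stabilizing; Elaboration may be ready to end.")
else:
    print("Interfaces still changing; consider another Elaboration iteration.")
```

Plotting such rates per iteration gives you exactly the kind of trend shown in Figure 13.6: a consistently falling curve suggests the architecture is stabilizing, while an erratic one suggests Elaboration is not yet done.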

Focusing on Inspections Instead of Assessing Executable Software

A strong focus on inspections is a sign of a waterfall mentality, where the majority of quality assessment activities focuses on byproducts of software development, such as plans, requirements, and designs, rather than on the primary products (software and its quality).

The old school of quality assurance compares a waterfall approach with no inspections to a waterfall approach using inspections and finds that the latter produces code of considerably higher quality. This is a correct observation but fails to recognize the fundamental problem: the usage of the waterfall development approach itself.

Rather, you should compare a waterfall approach using inspections with an iterative approach focusing on continuous integration and testing. By using automated testing, such as runtime analysis (memory leak testing and application performance testing), developers can discover defects before they consider development to be complete. Automated testing technology and the use of continuous integration and testing typically allow you to identify and correct many defects at a lower cost than can be done through inspections.
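
As a small illustration of the kind of automated test a continuous-integration build could run on every check-in, consider the sketch below. It is a hypothetical example, not part of the RUP: the `find_available_rooms` function and its test data are invented here to echo the conference-room requirement discussed earlier in this chapter, and the test uses Python's standard unittest module.

```python
# A minimal, hypothetical automated test that a continuous-integration build
# could run on every check-in. find_available_rooms() is an assumed function
# echoing the conference-room requirement discussed earlier in this chapter.
import unittest

def find_available_rooms(rooms, bookings, start, end):
    """Return rooms with no booking overlapping the [start, end) interval."""
    def overlaps(b):
        return b["start"] < end and start < b["end"]
    booked = {b["room"] for b in bookings if overlaps(b)}
    return [r for r in rooms if r not in booked]

class FindAvailableRoomsTest(unittest.TestCase):
    def test_booked_room_is_excluded(self):
        rooms = ["Alpha", "Beta"]
        bookings = [{"room": "Alpha", "start": 9, "end": 11}]
        self.assertEqual(find_available_rooms(rooms, bookings, 10, 12), ["Beta"])

    def test_adjacent_booking_does_not_block_room(self):
        rooms = ["Alpha"]
        bookings = [{"room": "Alpha", "start": 9, "end": 10}]
        self.assertEqual(find_available_rooms(rooms, bookings, 10, 11), ["Alpha"])

if __name__ == "__main__":
    unittest.main()
```

Because such tests run automatically on each build, a regression in the availability logic is caught within minutes, before the developer declares the work complete, rather than being found weeks later in an inspection.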

This does not mean that inspections should not be used when you adopt an iterative approach; they are still useful in many situations. But they should focus on the right things, such as whether requirements are agreed on among stakeholders, whether the design guidelines have been followed (are architectural mechanisms properly used?), or whether there are opportunities for reuse. Most classical design inspections, however, can either be automated by tools or find only defects that developers and testers could have found more cheaply using proper automation.

At the end of the day, what really counts is how good your code is, not how good your byproducts of software development are.


