Introducing the Rational Unified Process


In introducing the RUP, it is best to begin with a historical overview before getting into the specific practices that make up the process.

History

The story of the RUP begins in the early 1980s at Rational Software Corporation. Founded by Paul Levy and Mike Devlin, Rational was dedicated to the successful development of large, complex software systems. At the time of its founding, large, complex software projects were mostly the domain of government systems, particularly at the Department of Defense. Rational specialized in providing proprietary hardware platforms and environments for Ada, the Department of Defense's preferred language at that time.

In the late 1980s, several trends in the marketplace converged that led Rational to rethink its strategy. First, proprietary hardware platforms were giving way to open systems. This made Rational's premier product offering, the R1000 Ada software development platform, obsolete and too expensive in the emerging marketplace for open systems. In addition, the newer open-system platforms (such as those being offered by Sun Microsystems) were becoming increasingly powerful. Even personal computers were rapidly becoming more and more powerful. This made it practical for Rational to port its Ada Software Development System to these platforms.

The second trend was the slow death of the Ada programming language. Although intended to support modern software engineering principles, Ada was perceived as a bloated, difficult-to-use language designed by committee. This perception was especially strong in the rapidly growing commercial market, which favored languages such as C++. An update to the original 1983 Ada standard was repeatedly delayed until 1995, at which point it was viewed as too little, too late. Ada is now a very small player in the Department of Defense marketplace.

The final trend that helped shape Rational's strategy was the explosive growth of microprocessors in consumer goods, such as automobiles and appliances. This led to software projects of increasing complexity and size by commercial companies that had little experience with developing software.

The advent of the World Wide Web led thousands of companies to develop a business presence on the Internet. It also led them to develop Web-based systems that placed back-office functionality into the hands of their customers through Web applications.

These trends culminated in a tremendous opportunity for Rational Software. Rational recognized and embraced these trends. Interestingly, Rational's mission did not change. The mission was still to ensure the success of customers developing large, complex software systems, but the tactics and product offerings for supporting this mission changed entirely.

Lacking sufficient product offerings to support this emerging market, Rational went on an acquisition binge. Pure-Atria was acquired, providing the ClearCase configuration management tools as well as the Purify line of testing tools. Requisite was acquired for its RequisitePro requirements management tool. SQA was acquired for its suite of testing tools. Rational already had Rational Rose, its analysis and design tool; ClearQuest, a change-request tracking system; and SoDA, an automated documentation generation tool. A number of other acquisitions occurred, but those mentioned here were the most important. Together with its existing product offerings, Rational now had a complete set of products supporting the entire software development lifecycle.

Almost simultaneously, two other major efforts took place at Rational Software. The first was an effort to create a standardized methodology for modeling software systems. In the early 1990s, dozens of modeling languages were in use, including Booch, Buhr, Object Modeling Technique (OMT), and Shlaer-Mellor. The marketplace was fragmented, which made it difficult to develop a single tool to support the majority of software development efforts. Rational's answer to this problem was to draw on Grady Booch (who invented the Booch Methodology and was already a Rational employee) and to hire James Rumbaugh (OMT) and Ivar Jacobson (the Objectory Process). Together, these three (known as "the Three Amigos") began work that would culminate in a single modeling language, appropriately named the Unified Modeling Language (UML), to replace the plethora of languages then in use. Rational also acquired Jacobson's company, Objectory. The RUP drew much from the Objectory Process, particularly the notion of use cases for describing how people interact with systems.

The second major effort Rational embarked upon was to develop a documented set of best practices for software development that could be supported by the tools in Rational's arsenal. This, of course, is what led to the creation of the RUP.

IBM acquired Rational Software in early 2003. The RUP continues to evolve and be updated as industry practices change. The RUP, therefore, will continue to be at the forefront of software development methodologies and best practices.

The Six Best Practices

When the RUP was developed, it centered on the application of six best practices. From the initial version of the RUP through most of 2005, these best practices were as follows:

  1. Develop iteratively.

  2. Manage requirements.

  3. Use component architectures.

  4. Model visually.

  5. Continually verify quality.

  6. Manage changes.

These six best practices were developed from Rational's experience in helping develop large, complex software systems. They were also designed to help drive the use of tools offered in Rational's product line. The designers of the RUP continue to evolve the process as methods and practices mature through their application. In October 2005, an article appeared in the IBM Rational e-zine The Rational Edge. In it, Per Kroll and Walker Royce updated the six best practices, as follows:

  1. Adapt the process.

  2. Balance competing stakeholder priorities.

  3. Collaborate across teams.

  4. Demonstrate value iteratively.

  5. Elevate the level of abstraction.

  6. Focus continually on quality.[1]

    [1] From "Key Principles for Business-Driven Development," in the October 2005 issue of The Rational Edge, by Per Kroll and Walker Royce. The Rational Edge is owned and operated by IBM.

Let's take a closer look at each of these best practices.

Practice 1: Adapt the Process

Every project is different. Large projects with many people and geographically scattered teams require more formality and control than small projects with few people. Furthermore, within each project, the level of control may vary depending on the amount of "invention" and novelty required. It is difficult to be creative when heavy, formal controls must be followed. On the other hand, a project involving maintenance or enhancement of mission-critical software, where failures can cause loss of life, requires a much higher level of formality. The point is to assess the situation and adjust the process to fit it.

The amount of control needed also varies throughout the project's duration. For example, during initial development, developers need the freedom to quickly try new approaches without overhead controls getting in the way. However, after the ideas solidify and releases are delivered to the user community, changes to these baselines need to be controlled.

The following is a list of characteristics indicating where more control and formality are needed. Evaluate your project against each criterion. Projects meeting all or most of the criteria require more formal control and process than projects meeting fewer (or perhaps none) of them.

  • The project team is geographically scattered. This includes the outsourcing organization and the team developing the product.

  • The user community is large and geographically scattered.

  • The project team is composed of multiple contractors, each building a different portion of the product.

  • The product must meet stringent standards that must be verified.

  • The product to be developed is technically complex.

  • The project is in the later part of the project lifecycle. In other words, earlier lifecycle phases require less formality, and the later phases require more.

The process should also be adapted across projects. As a contracting organization becomes more experienced with the process, it should incorporate the lessons learned into its corporate memory and apply them to subsequent projects. This topic is covered in Chapter 15, "The Project Postmortem."

Practice 2: Balance Competing Stakeholder Priorities

For many software teams, it has become an ingrained habit: first gather all the detailed requirements, and then develop to those requirements. But this pattern ignores possible opportunities to satisfy stakeholder needs in simpler and safer ways. One way to reduce the risk inherent in all custom software development is to avoid custom development wherever possible through the use of Commercial Off-The-Shelf (COTS) or other predeveloped software. Before going this route, the project team must understand the users' business process and needs. The users, in turn, must understand that they face trade-offs between custom functionality and the ability to satisfy their mission with cheaper, faster methods such as the incorporation of COTS. One way to facilitate consideration of these other methods is for the project team to help the stakeholders understand exactly which requirements are must-haves and which can be deferred or negotiated in favor of cheaper or faster solutions.

Note that "other predeveloped software" might include, in addition to COTS, legacy systems, services such as those provided by Service-Oriented Architectures (SOAs), and reusable components.

Practice 3: Collaborate Across Teams

Collaboration is much more than just communication. It means building teams that share risk and reward, working cooperatively to further a project's mission and goals. It means teams proactively providing information to other teams when that information may affect or assist the other team. It also means creating a culture of integrity in which team members are empowered and are willing to take risks as well as responsibility. Because of the importance of team building, Chapter 4, "Best Practices for Staffing the Outsourcing Organization's Project Management Office (PMO)," is devoted to building a Project Management Office (PMO) in the outsourcing organization. Chapter 5, "Best Practices for Staffing the Contractor's Software Project Team," is devoted to creating a team in the contractor organization.

Note that collaboration across teams also means including organizations that often are ignored or are peripherally involved. This includes the people who will operate the system being built, and the associated business organizations (including, but not limited to, the contracts office).

Practice 4: Demonstrate Value Iteratively

Demonstrating value iteratively is perhaps the most significant of the six best practices. As in the earlier discussion of the Waterfall lifecycle, let's examine the major tenets of iterative development:

  • You can better develop a complex software system by breaking it into a series of smaller problems to solve. This is applied common sense. When you're faced with a complex problem, it's much easier to "divide and conquer" by creating a series of less complicated problems. In the aggregate, you solve the entire problem by solving the individual problems first. This helps you deal with complexity by allowing you to focus on one subset of the problem at a time.

  • It is impossible to fully understand a system's requirements (or complexities) by writing everything down first. Documentation is still important, but it doesn't prove that you have identified the proper requirements and have a thorough grasp of the complexities involved in building the system. Furthermore, it is easier for stakeholders to recognize that development is on the right track if they get "hands-on" time with a partial implementation of the system. Most users do not understand (nor do they care to) design and analysis artifacts written on paper or produced on slides.

  • Greater risk identification and mitigation are possible if more of a software project's activities are exercised earlier in the project lifecycle. As you will recall, in the Waterfall lifecycle model, many important activities (such as implementation and testing) are not exercised until well into the project lifecycle. This means that problems these activities would reveal are not discovered until then. In contrast, with iterative development, these activities are exercised earlier, and earlier problem detection leads to earlier resolution. In addition, the iterations' content is ordered by risk: the earlier iterations in a project should focus on the aspects of the system that carry the most risk. I will cover this more in Chapter 8, "Identifying and Managing Risks." To put it another way, if development of a system is destined to encounter serious problems, you want to force those problems to occur as early as possible. This allows you to regroup and determine how to solve them while meaningful amounts of schedule time and resources remain.

  • Each iteration should result in a demonstrable, executable release of a system. Of course, only a subset of a system's functionality will be provided in each release. Each release should be demonstrated to stakeholders wherever possible.

  • Creating executable, demonstrable versions of a system early in the project lifecycle provides a more meaningful and accurate way to gauge the project's true progress. Projections can then be made based on actual experience with developing parts of the application, rather than guesses based on documentation alone.

Advantages of Demonstrating Value Iteratively

Iterative development offers the following advantages over Waterfall development:

  • It provides earlier discovery of serious problems and risks than traditional Waterfall-based lifecycles.

  • It is more adaptable to changes in requirements than Waterfall lifecycles. The requirements are frozen only for the duration of a single iteration, and only the requirements affecting that iteration are frozen. Any changes to requirements in the current or a previous iteration can be planned for a subsequent iteration. Also, for all requirements not yet implemented, priorities can be reassigned, the assignment of requirements to iterations can be reordered, and new requirements can be incorporated.

  • It allows stakeholders to have "hands-on" time with the application well before the project's end date. Thus, useable feedback can be obtained earlier in the process, allowing the development organization to adjust if the reaction is negative.

  • If the schedule becomes a problem, it may be possible to deploy an early version of a system, minus some of its features. Because each iteration delivers an executable release, users may be content to deploy an earlier version of a system on time rather than wait for the development of every single feature. This is not possible with a Waterfall lifecycle model.

Refer to Chapter 10, "Construction Iterations: Staying on Target," for suggestions on the proper length of time for iterations.

Practice 5: Elevate the Level of Abstraction

Many software systems built today are quite complex. Even experienced software engineers cannot adequately cope with all the details at one time. The notion of abstraction helps you deal with complexity and understand a system's architecture.

You're probably familiar with object-oriented paradigms, which introduce the notion of abstraction. One important concept used in object-oriented methodologies is the idea of a class. A common use of a class is to encapsulate a data object and provide functions that manipulate the data object in some manner. In this fashion, the data object's important attributes and operations can be accessed in a controlled fashion, the way the author of the class intended. The implementation details are programmatically hidden from users of the class.
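To make this concrete, here is a minimal sketch (the `BankAccount` class and its operations are invented for illustration, not drawn from the RUP itself) of a class that encapsulates a data object and exposes only the operations its author intends:

```python
class BankAccount:
    """Encapsulates a balance; clients cannot modify it directly."""

    def __init__(self, opening_balance=0):
        self._balance = opening_balance  # implementation detail, hidden by convention

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self):
        # Read-only access, in the controlled fashion the author intended.
        return self._balance
```

Clients interact only through `deposit`, `withdraw`, and `balance`; the stored representation could change without affecting any code that uses the class.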

A component extends this concept and broadens its use to a single unit of functionality. Continuing the example of a class, a component could be a collection of classes that are logically grouped. Each component is a self-contained "chunk" of functionality that has a well-defined interface and that does something of value for a system. The interface represents the attributes and operations that the developer of the component believes are necessary for a client to effectively use the component.
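The same idea can be sketched in code. In this hypothetical example, `InventoryService` plays the role of a component's well-defined interface, and the client function is written against the interface rather than any particular implementation:

```python
from abc import ABC, abstractmethod


class InventoryService(ABC):
    """The component's published interface: all a client needs to know."""

    @abstractmethod
    def quantity_on_hand(self, sku: str) -> int: ...

    @abstractmethod
    def reserve(self, sku: str, count: int) -> bool: ...


class InMemoryInventory(InventoryService):
    """One possible implementation; clients never depend on this class directly."""

    def __init__(self, stock: dict):
        self._stock = dict(stock)

    def quantity_on_hand(self, sku: str) -> int:
        return self._stock.get(sku, 0)

    def reserve(self, sku: str, count: int) -> bool:
        if self._stock.get(sku, 0) < count:
            return False
        self._stock[sku] -= count
        return True


def fulfill_order(inventory: InventoryService, sku: str, count: int) -> str:
    # The client codes to the interface, so the implementation can be swapped.
    return "shipped" if inventory.reserve(sku, count) else "backordered"
```

Because `fulfill_order` depends only on `InventoryService`, the in-memory implementation could be replaced (by a database-backed one, say) without touching client code, which is exactly the kind of information hiding the component's interface is meant to provide.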

Figure 2-3 shows an example of a three-tiered architecture. Note that all access between components occurs at the application programming interface (API) level. In other words, the components interact with each other, but all calls from one component to the other are made to the component's interface.

Figure 2-3. An example of a layered architecture with APIs


Figure 2-4 illustrates another example of a layered architecture, with each layer dedicated to a specific aspect of the application's functionality. Components interact only with components in the layers immediately adjacent to their own.

Figure 2-4. Another example of a three-tiered architecture, illustrating layers showing separation of concerns


Applications that use component-based architectures reap several benefits:

  • A component's API clearly defines the boundary between the component itself and the users of the component.

  • Component-based architectures help you understand how a system is designed at a high level, without getting lost in a sea of detail.

  • Components are easier to reuse. In fact, entire architectures are often reused from project to project. The notion of the three-tiered architectures shown in Figures 2-3 and 2-4 is quite common. Components also aid in reuse at the individual component level. Because a component performs a specific function and has a well-defined interface, it's easier to reuse a component on other projects.

  • Component-based architectures are easier to maintain. Because components programmatically "hide" implementation details, and dependencies on a component are made at only the API level, changes to a component's implementation are less likely to affect client users of that component. This means that code changes are less likely to "ripple" throughout other portions of a system.

  • Component-based architectures enable portions of the system's functionality to be developed by separate teams. As long as each team strictly adheres to the conventions defined by the components' interfaces, the teams implementing the functionality can operate independently. This is particularly helpful for distributed teams.

Another way of managing complexity is by reusing existing systems or COTS packages, such as databases, reusable components, and various mechanisms.

Practice 6: Focus Continually on Quality

Usually, the first discipline that comes to mind with this best practice is testing. Yet in the RUP, opportunities for improving and measuring quality occur in all the disciplines. We will cover some of the key disciplines here.

The Project Management Discipline

One of the project manager's primary duties is to keep the project team focused on the right goals at the right time. There are a number of ways to do this. In particular, two key artifacts identified by the RUP stand out: the Iteration Plan and the Iteration Assessment.

The Iteration Plan is a detailed plan that identifies what is to be accomplished during a specific iteration. This includes a list of risks that need to be investigated or addressed, a subset of requirements that should be implemented, and possibly some change requests that must be addressed.

The Iteration Assessment is a balanced, intellectually honest evaluation that examines the goals set forth in the Iteration Plan. It determines whether the iteration's goals were met. This is more than simply whether the requirements allocated to the iteration were successfully implemented. More important is whether the risks investigated through the iteration's activities were successfully resolved or mitigated. Also, were new risks identified, or did new problems arise? The results of the Iteration Assessment are used to help plan the remaining iterations.

Other measures available in the Project Management discipline include some of the traditional measures, such as tracking actual resources expended versus planned resources, earned value metrics, and so on.

It is important to note that the RUP stresses adaptive planning instead of predictive planning. Projects using iterative lifecycle models (such as the RUP) should not be planned in detail for the project's entire duration at the outset. Instead, detailed plans are created only for the current iteration and perhaps the next one. Detailed plans for subsequent iterations are created along the way to incorporate lessons learned, new requirements, and so on. Therefore, attempting to create detailed plans for the entire project and tracking adherence to that plan is not meaningful. In other words, if you must track adherence to schedule or planned resources due to contractual requirements, deviating from the plans made at the beginning of the project does not necessarily mean that the project is in trouble. It is important to adapt plans as the project is executed.
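As a brief illustration of the earned value metrics mentioned above, the two standard indices can be computed as follows (these formulas are general earned-value practice, not specific to the RUP, and the dollar figures are invented):

```python
def earned_value_metrics(planned_value, earned_value, actual_cost):
    """Standard earned-value indices: values above 1.0 are favorable,
    values below 1.0 unfavorable."""
    cpi = earned_value / actual_cost      # cost performance index
    spi = earned_value / planned_value    # schedule performance index
    return cpi, spi


# Invented sample: $100k of work planned to date, $80k of it actually
# completed, at a cost of $90k.
cpi, spi = earned_value_metrics(100_000, 80_000, 90_000)
# cpi is about 0.89 (over budget); spi is 0.80 (behind schedule)
```

On an adaptively planned project, such indices are best read as trends across iterations rather than as deviations from a fixed up-front plan.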

The Requirements Discipline

The keys to verifying quality in the Requirements discipline involve careful review of the documented requirements from three perspectives:

  • Do the documented requirements truly reflect the needs of the stakeholders?

  • Are the documented requirements testable or verifiable?

  • Are the documented requirements detailed enough to be unambiguous and to be understood by the developers? Can they be understood by other stakeholders as well?

In addition, requirements should be baselined when they reach a sufficient point of stability. Beyond that point, any changes must be carefully documented and controlled, and the results communicated throughout the team, preferably through an automated tool designed for tracking changes.

The Analysis and Design Discipline

The key ways of verifying quality here involve reviewing the artifacts created, with particular attention given to the following:

  • Do the Analysis and Design artifacts trace directly back to the requirements artifacts? The goal here is not to trace every individual element back to a requirement, but rather to understand each element's role in accomplishing the requirements.

  • Are the artifacts consistent in their level of detail?

  • Can they be understood by the developers who will translate the design into executable code?

The Testing Discipline

In the Waterfall lifecycle model, testing does not take place until very late in the lifecycle. It's really impossible to test any earlier, because nothing is available to test! Because testing is one of the final activities, when a project is running behind schedule, testing is often cut short or even eliminated. As a result, the product is often riddled with bugs. In contrast, with the iterative lifecycle model, testing takes place within each iteration, especially toward the end of the iteration. Defects found are corrected, the tests are rerun, and a stable baseline is created for demonstration to stakeholders.
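As a small sketch of what repeatable testing within an iteration might look like (the function under test and its test cases are invented for illustration), an automated regression suite can be rerun in every iteration so that previously built functionality stays verified:

```python
import unittest


def apply_discount(price, percent):
    """Function under test, built in an earlier iteration (invented example)."""
    if not (0 <= percent <= 100):
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)


class RegressionTests(unittest.TestCase):
    """Rerun in each iteration; a passing run supports a stable baseline."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

# Run with: python -m unittest <this module>
```

Because the suite is automated, verifying corrections and regression-testing earlier functionality costs little, which is what makes per-iteration testing practical.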

Figure 2-5 illustrates this process. In the iterative lifecycle model, each iteration has a specific goal, and specific requirements are allocated to it. Testing can begin, even in iterations within the Elaboration phase.

Figure 2-5. Iterations with tested baselines that can be demonstrated


RUP Lifecycle Phases

In the RUP, the project lifecycle is divided into four phases: Inception, Elaboration, Construction, and Transition. Each phase has a different emphasis that affects the content of the individual iterations within the phase. Figure 2-6 illustrates the four phases, together with the trends of levels of effort within each discipline throughout the project lifecycle.

Figure 2-6. RUP lifecycle phases

Derived from The Rational Unified Process Made Easy: A Practitioner's Guide to the RUP, by Per Kroll and Philippe Kruchten


Goals of Each Phase in the RUP Lifecycle

Each phase in the RUP lifecycle has a specific purpose and goal. Let's examine each.

The Inception Phase

The goal of the Inception phase is to reach the lifecycle objectives milestone. At this milestone, you decide whether to continue with the project, change its scope, or cancel it. You do this by carefully examining evaluation criteria such as these:

  • Do stakeholders agree on the system's key requirements? At this point in the project lifecycle, in the RUP, not all the detailed requirements are known, but the key high-level requirements are, especially those detailing the system's scope. Exactly who are the system's stakeholders? Who are the users? How many people will use the system? Which business processes will the system manage? Will the system be Web-based or some other type of system?

  • Do the stakeholders agree with the cost and schedule required for producing the system?

  • Have the significant risks been identified and mitigation plans developed for each risk? Do the stakeholders understand the risks and the consequences if the risks materialize?

Of particular importance in this phase, the project business case, vision, and list of risks should be developed in writing and approved by the key stakeholders.

The iterations developed in the Inception phase are often the most difficult, because they may be exploratory in nature or may serve as proof-of-concept releases. Therefore, it is not uncommon for the results of iterations developed in the Inception phase to be throwaway artifacts. However, the lessons learned in these early iterations are key.

On the other hand, if the project's goal is well understood, and the development team has built similar systems before, the Inception phase may have no iterations. This is the only lifecycle phase that might not have iterations.

The Elaboration Phase

The goal of the Elaboration phase is to identify, prove, and baseline the architecture of the system that is to be developed. This baseline is called the lifecycle architecture milestone. This is done through iterations that address requirements affecting the architecture. Key attention is paid to functional requirements and to supplemental requirements that drive the system's architecture. For example, how many concurrent users must the system support? Are there response time requirements? What about system reliability? Is the system mission-critical? The content of iterations conducted during the Elaboration phase helps prove that the system's architecture is viable. It's critical that the focus remain on the architecture. If an architecture is chosen that does not meet the supplemental requirements, the system will ultimately fail after delivery, no matter how well the subsequent phases go.

Key exit criteria for the Elaboration phase include the following:

  • Artifacts produced in the previous phase, such as the product vision, business case, and key requirements, are stable.

  • The architecture chosen for the system is identified and proven through executable releases produced by iterations exercising key system requirements, and risks are identified and mitigated. This includes test results for each release produced during the Elaboration phase.

  • Most of the system's detailed requirements are identified by the conclusion of Elaboration. Note that some detailed requirements still might be unknown, but the ones most important to the user are known. There is no exact number, but I prefer to have approximately 80% of the system requirements identified by this point.

  • Trends involving expenditure of resources (time and budget) are acceptable to the project stakeholders.

  • Iteration plans for the Construction phase (particularly for the early Construction iterations) should be in place by the conclusion of Elaboration. You should have a detailed plan for the earliest Construction iterations, such as the first and perhaps second iteration. After that, any plans should be high-level at the most. The reason, of course, is that discovery in the first iterations may lead you to change the subsequent iterations. There is no point in creating detailed plans for iterations that are very likely to change.

The results at the conclusion of the Elaboration phase are evaluated. If the exit criteria show that acceptable results have been achieved, the project proceeds to the Construction phase.

The Construction Phase

On RUP-based projects, the majority of the time and project resources are expended in the Construction phase. By this point, all the major risks of developing the system have been identified and mitigated, the architecture has been determined, and most of the system requirements have been identified. The goal is to produce a new, stable release at the conclusion of each iteration that contains more and more implemented functionality.

Testing also occurs during each iteration, regardless of phase. This means that during an iteration, testing of new functionality begins as soon as it becomes available within the iteration. Regression testing of functionality built during previous iterations also takes place. This requires close coordination between testers and developers. Defects discovered during testing are documented and evaluated to determine their priority. The most serious defects are corrected immediately within the same iteration. Lower-priority defects may be deferred to later iterations if necessary. The goal is for each iteration to produce a release that is executable, demonstrable, and stable.

The goal of the Construction phase is to produce the initial operational capability. This is not necessarily a completely finished product, but rather one that implements all the system's key requirements. This may be called a beta release. Some things might be missing, such as help files and installation scripts, but the release can be used as a pilot release to gain useful feedback.

It can be helpful, when conducting high-level planning, to schedule one or two extra, "empty" iterations at the end of the Construction phase. These are iterations for which time is allocated in the schedule, but no requirements are allocated to them. This way, if additional requirements are identified, or if difficulties arise, requirements can be deferred to these empty iterations while the additional requirements or challenges are addressed. If no such circumstances arise, you can always deliver earlier than planned.

The Transition Phase

In the Transition phase, final iterations incorporate corrections for defects and other items, such as help files, installation scripts, some enhancement requests, and configuration and tuning. Other significant tasks might also be included, such as data migration if the product replaces a legacy system.

It is interesting to note that the Transition phase can be trivial or complex, depending on the nature of the product. At one extreme is a product that resides on a single system and involves only a handful of expert users colocated at one site. At the other extreme, a very large, distributed, mission-critical system with thousands of users may require an extended Transition phase. This might include very close monitoring and involvement by the contractor, with a significant core group of developers and staff on alert, ready to address key problems as they are discovered.

Is the RUP Agile?

In the late 1980s and 1990s, as the foundation was being laid for the RUP, a number of practitioners were experimenting with variations on iterative software development techniques. The focus was to eliminate much of the overhead and "ceremony" common with Waterfall-based lifecycles. The inspiration was the lean production techniques pioneered in the manufacturing world by companies such as Toyota. The goal was to create demonstrable releases quickly and frequently, using short (2- to 4-week) iterations. Other novel aspects involved having the software users work directly with the software team. In addition, the entire software team worked together in close collaboration. This enabled quick, efficient communication within the team as well as with the stakeholders. The process was highly adaptive and flexible. Planning for each iteration was performed based on the evaluation of the previous iteration's release. Even testing was performed during the iteration and was repeated frequently during builds, facilitated by extensive use of test automation.

These lightweight, flexible lifecycle models experienced success and attracted the attention of several prominent experts in the industry. In early 2001, a group of 17 of these practitioners met to discuss such methods and decided to name them Agile methods. Together, the group authored what became known as the Agile Manifesto:

We are uncovering better ways of developing
software by doing it and helping others do it.

Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on
the right, we value the items on the left more.[2]


[2] © 2001, Kent Beck, Mike Beedle, Arie van Bennekum, Alistair Cockburn, Ward Cunningham, Martin Fowler, James Grenning, Jim Highsmith, Andrew Hunt, Ron Jeffries, Jon Kern, Brian Marick, Robert C. Martin, Steve Mellor, Ken Schwaber, Jeff Sutherland, and Dave Thomas. This declaration may be freely copied in any form, but only in its entirety through this notice.

Interestingly, several variations of Agile processes exist, such as Extreme Programming (XP), Scrum, and Crystal. They differ in factors such as typical iteration length and in the emphasis they place on particular practices.

So, is the RUP Agile? The answer is that it can be. It depends on how the RUP is tailored. If the values expressed in the Agile Manifesto resonate with your team and your customer, consider the following points when tailoring the RUP:

  • Choose only the most vital artifacts needed by the customer and the development team. Strive to eliminate any unnecessary processes and documents. When in doubt, leave it out.

  • Strive to keep iterations as short as possible. Remember that the goal for each iteration is to produce a demonstrable, executable release. The release should be tested during development so that the release delivered at the end of the iteration functions correctly.

  • Agile methods above all stress collaboration between all members of the development team, as well as customers or users. Whenever doubt on functionality or priorities exists, the customer is consulted and drives the decision.

  • If possible, have the customer work with the testers to define acceptance tests for the various features. These tests are run at the conclusion of the iteration, and the customer signs off on that functionality at the end of the iteration. The advantage of this approach is that it eliminates the need for a massive, "all-at-once" acceptance test by the customer at the end of the project. Even if the customer still wants a complete acceptance test, the likelihood of surprises is lower.

  • Agile methods stress automation of repetitive tasks wherever it makes sense. In particular, testing needs to be automated. As iterations continue, the amount of functionality requiring testing increases (both regression testing of previously delivered functionality and testing of newly developed functionality).
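To make the last point concrete, here is a minimal sketch of what an automated regression check might look like. The `apply_discount` function is purely hypothetical, standing in for a business rule delivered in an earlier iteration; in practice the checks would live in a test framework such as JUnit or pytest and run on every build.

```python
# Minimal sketch of automated regression testing. The apply_discount
# function is a hypothetical piece of functionality delivered in an
# earlier iteration; the suite below is re-run on every build so later
# iterations cannot silently break already-delivered behavior.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical pricing rule from an earlier iteration."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def run_regression_suite() -> None:
    # Written when the feature was first delivered; kept in the
    # automated suite and re-run in every subsequent iteration.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(200.0, 25) == 150.0
    assert apply_discount(50.0, 0) == 50.0
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass  # invalid input correctly rejected
    else:
        raise AssertionError("invalid percent should be rejected")

run_regression_suite()
```

Because suites like this accumulate as iterations add functionality, automation is what keeps the growing regression burden from consuming each iteration's schedule.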

RUP practices are completely compatible with Agile values. The key is to tailor the RUP to be as simple as possible and to focus on frequently producing releases that the customer reviews and accepts.




Project Management with the IBM Rational Unified Process: Lessons From The Trenches
ISBN: 0321336399
Year: 2007
Pages: 166
