Feedback


Customers who wanted new information systems used to be told that it would take a lot of time to develop high-quality software. But for some customers, this was simply not acceptable. Getting the software they needed quickly was essential for survival, yet they could not tolerate poor quality. When faced with no alternative but to break the compromise between quality and speed, a few companies have discovered that there is a way to develop superb software very fast, and in the process, they have created an enduring competitive advantage.

When it becomes a survival issue and old habits are not adequate to the task, better ways of developing new products have been invented out of necessity. In every case, these more robust development processes have two things in common:[1]

[1] See Kim B. Clark and Takahiro Fujimoto, Product Development Performance, Harvard Business School Press, 1991.

  1. Excellent, rapid feedback

  2. Superb, detailed discipline

The Polaris Program[2]

[2] The Aegis Program and the Atlas Program have similar histories; however, the Polaris Program has superior independent and unbiased documentation.

On October 4, 1957, a crisis hit the US Defense Department. The Soviet Union successfully launched Sputnik I, the first artificial satellite to orbit the world. The surprised American public reasoned that since the Soviets had already fired short-range missiles from surfaced submarines, and now that they had the technology to make long-range missiles, adding the two together would result in the capability to launch a nuclear weapon that could penetrate the country's defenses. The crisis intensified a month later, when Sputnik II was launched. This was a much bigger satellite; it even carried a dog named Laika.

The US Navy had just started the Polaris program, a program to develop submarines that could launch missiles while submerged. The first Polaris submarine was scheduled to be operational in 1965. Obviously, taking nine years to launch the submarine was no longer acceptable. Two weeks after Sputnik I circled the world, the deadline was changed to 1959. Two-and-a-half years later, on June 9, 1959, the first Polaris missile was launched from a submarine. By the end of 1960 two Polaris submarines were patrolling at sea.[3]

[3] "The Polaris: A Revolutionary Missile System and Concept," by Norman Polmar, Naval Historical Colloquium on Contemporary History Project, at www.history.navy.mil/colloquia/cch9d.html, January 17, 2006.

How could such a technically complex objective that was supposed to take nine years be accomplished in such a short time? To begin with, the minute the deadline changed, Technical Director Vice Admiral Levering Smith focused his team on one simple objective: Deploy a force of submarines in the shortest possible time. This meant no wasted time, no extra features, and no delays. In short: Make every minute count.

Within weeks, Vice Admiral Smith had commandeered the two submarines currently under construction as attack submarines, and had them stretched out 130 feet (about 40 meters) to provide a place for missiles.[4] Since 16 missiles would fit in the newly stretched submarines, 16 missiles is the standard number of missiles in a submarine to this day. Admiral Smith needed the submarines right away because he had modified the development objective from creating the ultimate system in nine years to creating a progression of systems: A1, A2, and A3.[5] The A1 version would contain technology that could be deployed in about three years. The A2 version would be developed in parallel, but proceed more slowly to allow it to use more desirable technologies. The A3 version would incorporate everything learned in the development of the earlier versions.

[4] Ibid.

[5] See Harvey Sapolsky, The Polaris System Development: Bureaucratic and Programmatic Success in Government, Harvard University Press, 1972.

Within each version, Admiral Smith orchestrated rapid, sharply focused increments of technical progress. He carefully controlled the systems design and tightly managed the interfaces between components, personally signing the coordination drawings. But rather than manage the details of the subsystems, he had several competing subsystems developed in virtually all areas of technical uncertainty. This allowed him to choose the best option once the technology had been developed. Finally, Admiral Smith demanded the highest reliability, so from the beginning Polaris submarines were tested thoroughly and had built-in redundancy.

As historian Harvey Sapolsky notes:[6]

[6] Ibid., p. 11.

The Polaris program is considered an outstanding success. The missile was deployed several years ahead of the original FBM [Fleet Ballistic Missile] schedule. There has been no hint of a cost overrun. As frequent tests indicate, the missile works. The submarine building was completed rapidly. Not surprisingly, the Special Projects Office is widely regarded as one of the most effective agencies within government.

The Polaris project gave us PERT (Program Evaluation and Review Technique), an innovative new scheduling system developed for managing the Polaris program. PERT has been widely regarded as the reason for Polaris's remarkable success, but Harvey Sapolsky calls this a myth.[7] Sapolsky makes the case that the PERT system, at least in the early stages of the Polaris project, was mostly a façade to assure continued funding of the program. In those early years, PERT was unreliable as a scheduling system because the technical objectives changed so fast and so frequently that the PERT charts could not be kept up to date. Later on, the Polaris program was a victim of its own publicity; eventually its managers were required to use the PERT system that they had largely ignored and would have preferred to abandon.

[7] Ibid., Chapter 4.

Sapolsky attributes the success of the Polaris program not to PERT, but to the technical leadership of Admiral Smith and his laser-like focus on synchronized increments of technical progress, the option-based approach to developing components, the emphasis on reliability, and a deep sense of mission among all participants.[8]

[8] Ibid., Chapter 5.

Release Planning

When political events suddenly collapsed the Polaris program timeframe, the first thing Admiral Smith did was switch from a nine-year plan for the perfect system to an incremental plan for a minimalist system that would grow increasingly better. He planned to demonstrate the most risky capability (submarine-launched missile) one-quarter of the way into the program. Then he would develop the simplest system that could possibly work and deploy it as soon as possible (A1). At the same time, he would develop a better version and deploy it as a rapid follower (A2). Once the first version was deployed and the second version was well underway, he would start developing the "ultimate" version (A3). This approach resulted in significantly more capability, faster delivery, and lower cost than the original plan (see Figure 8.1).

Figure 8.1. The Polaris timeline


If this approach is good for technically complex hardware, it's even better for software. Let's take a software development program through the same process.[9] The program starts with the identification of a market need (or business need), product concept, target cost, and launch timing. This establishes the general constraints of the program. People experienced with the customer domain and the available technology make a quick "rough order of magnitude" determination of what capabilities the organization can reasonably expect to develop within the constraints.

[9] For a thorough background on planning, see Mike Cohn's Agile Estimating and Planning, Addison-Wesley, 2005.

Instead of spending the initial investment time developing a detailed long-range plan, we take a short time to create an incremental release plan. We plan near-term development to check out the most critical features and establish the underlying architecture. We plan an early launch of a minimum useful set of capabilities to start generating revenue (or payback). We stage additional releases to add more capabilities on a periodic basis.

Figure 8.2 shows a nine-month release plan divided into six releases of feature sets. Each release is further divided into three two-week iterations to develop the feature sets.

Figure 8.2. Release plan for a nine-month effort with 18 two-week iterations and six releases


The goal of the first release is to develop the feature set that will establish feasibility and preliminary architecture, so it should create a thin slice through all layers of the application, showing how everything fits together. After that, feature sets should be chosen based on the following considerations, illustrated by the sketch after the list:[10]

[10] Ibid., pp. 80–87.

  1. Feature sets with high value before lower value

  2. Feature sets with high risk and high value before lower risk

  3. Feature sets that will create significant new knowledge before those already well understood

  4. Feature sets with a lower cost to develop or support before higher cost
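These considerations can be read as a rough ordering heuristic rather than a formula. The sketch below is one hypothetical way to encode them; the FeatureSet fields, the weights, and the scoring rule are illustrative assumptions, not part of the method described here.

import java.util.Comparator;
import java.util.List;

// Hypothetical illustration: rank candidate feature sets using the four
// considerations above (value, risk, expected learning, cost).
// The fields and the simple weighted score are assumptions made for this sketch.
record FeatureSet(String name, int value, int risk, int knowledgeGain, int cost) {

    // Higher score = schedule earlier. High value, high risk, and high expected
    // learning move a feature set forward; high cost pushes it back.
    double score() {
        return value + 0.5 * risk + 0.5 * knowledgeGain - 0.25 * cost;
    }
}

class ReleasePlanner {
    static List<FeatureSet> prioritize(List<FeatureSet> candidates) {
        return candidates.stream()
                .sorted(Comparator.comparingDouble(FeatureSet::score).reversed())
                .toList();
    }
}

In practice the ordering is a judgment call made by people who know the domain; a weighted score like this is only a starting point for that conversation.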

A release plan gives you a basis for testing technical and market assumptions before investing a large amount of money in detailed plans. You can get an early reading of the plan's feasibility by starting implementation and tracking progress. By the first release you will have real data to forecast the effort needed to implement the entire project far more accurately than if you had spent the same six weeks generating detailed requirements and plans.
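For example, suppose the team completes 60 story points in the three iterations of the first release while roughly 300 points remain on the backlog. That measured velocity of 20 points per iteration projects about 15 more iterations, or 30 weeks. A minimal sketch of the arithmetic, with purely illustrative numbers, follows.

// Minimal sketch: forecast the remaining schedule from measured velocity.
// All figures are hypothetical; a real forecast should use a range of velocities.
class VelocityForecast {
    public static void main(String[] args) {
        int pointsCompleted = 60;     // completed during the first release
        int iterationsSoFar = 3;      // three two-week iterations
        int pointsRemaining = 300;    // rough estimate of the remaining backlog

        double velocity = (double) pointsCompleted / iterationsSoFar;  // 20 points per iteration
        double iterationsLeft = pointsRemaining / velocity;            // 15 iterations
        double weeksLeft = iterationsLeft * 2;                         // two-week iterations

        System.out.printf("Velocity: %.1f points/iteration%n", velocity);
        System.out.printf("Forecast: %.0f iterations (about %.0f weeks) remaining%n",
                iterationsLeft, weeksLeft);
    }
}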

If you implement a release plan and find that you are behind at the first release, you should assume that the plan is too aggressive. The preferred approach at this point is to simplify the business process or product under development, simplify the technical approach to implementing the features, or remove features entirely. It is generally best to maintain a fixed "timebox" of iterations and releases and limit features to those that can be fully implemented within the timebox.

There are certainly environments, games for example, where incremental releases to the public are not feasible. But in most of these environments, incremental development is still a very good idea. You just "release" the product internally to an environment that is as close to the production environment as possible. Getting a product "ready to release" every three months gives everyone a concrete handle on actual progress. This is a particularly good way to manage risk in custom software development.

Architecture

During the first month of the accelerated Polaris program, Admiral Smith moved quickly to put boundaries on the system. The missiles were going into a submarine, two submarines were about to be built, and the practical amount these submarines could be lengthened was 130 feet. Sixteen missiles would fit in this space, and without further ado the highest level architectural constraints were established. After that it did not take very long to create the high-level systems design (architecture). It was pretty much dictated by the limited space of submarines and the existing missile technology. The underlying principle of the system design was to tightly define and control interfaces while making sure that subsystems were complete feature sets that could be developed and tested independently by different contractors with a minimum of communication. This was necessary because Admiral Smith's small staff could not possibly get involved in the details of each subsystem, and in any case, giving contractor teams the freedom to be creative was necessary because the system required a host of innovations if it was going to work.

The starting point of a large, complex system should be a divisible systems architecture that allows creative teams to work concurrently and independently on subsystems that deliver critical feature sets. A subsystem is a set of user-valued capabilities. It is not a layer; the idea is not to have separate teams develop the database layer, the user interface layer, and so on. A subsystem should be sized so that it can be developed by a team or a closely associated group of teams.
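In software terms, tightly defined and controlled interfaces usually mean a small, stable contract between subsystems, so that teams can build and test behind it independently. The fragment below is a hypothetical example of such a contract; the subsystems, names, and methods are assumptions introduced for illustration.

// Hypothetical contract between two subsystems, agreed on early and changed rarely.
// A team owning an ordering subsystem codes against this interface, while the team
// owning the inventory subsystem implements it; each can develop and test independently.
public interface InventoryService {

    // Returns the number of units currently available for the given product.
    int unitsAvailable(String productId);

    // Reserves the requested quantity, or throws if it cannot be reserved.
    // The exception type is part of the published contract.
    void reserve(String productId, int quantity) throws InsufficientStockException;
}

class InsufficientStockException extends Exception {
    public InsufficientStockException(String message) {
        super(message);
    }
}

A stub implementation of the interface lets one team run its tests before the other subsystem exists, which is exactly the kind of independence a divisible architecture is meant to provide.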

As it becomes apparent that change tolerance is a key value in most software systems, architectures that support incremental development, such as service-oriented architectures and component-based architectures, are rapidly replacing monolithic architectures. Even with these architectures there are still some constraints, particularly those that deal with nonfunctional requirements (security, performance, extensibility, etc.), that are best considered before feature set development begins. However, the objective of a good software architecture is to keep such irreversible decisions to a minimum and provide a framework that supports iterative development.

Architecture itself can, and usually should, be developed incrementally. In Software by Numbers, Mark Denne and Jane Cleland-Huang make the case that software architecture is composed of elements, and these elements should be developed as they become required by the feature sets currently under development. In successful products whose lifetimes span many years, a major architectural improvement can be expected every three years or so, as new applications are discovered and capabilities are required that were never envisioned in the original architecture. It is time to abandon the myth that architecture is something that must be complete before any development takes place.

Iterations

Iterative development is a style of development that creates synchronized increments of technical progress at a steady cadence. Figure 8.3 depicts a typical iterative software development process.

Figure 8.3. Iterative development overview[11]


[11] Screen Beans art is used with permission of A Bit Better Corporation. Screen Beans is a registered trademark of A Bit Better Corporation.

Starting at the lower left of Figure 8.3, we find a backlog, a prioritized list of desirable features described at a high level. Shortly before it is time to implement a feature, it is analyzed by team members who understand the customer domain and the technology. They break it into "stories,"[12] units of development that can be estimated reliably and completed within a few days. At a planning meeting, the team determines how many stories it can implement in the next iteration, based on its track record (velocity), and commits to completing these stories. During the iteration the whole team meets briefly every day to talk about how implementation is going, to keep on track toward meeting its commitment, and to help each other out. At the end of the iteration, the stories must be done: integrated, tested, documented, and ready to use. A review meeting is held to demonstrate progress and obtain feedback, which may be captured as a story or a change to the backlog. After a few iterations, a useful feature set is completed and ready to deploy.

[12] For the details of using user stories, see Mike Cohn's User Stories Applied, Addison-Wesley, 2004.

We'll now take a more detailed walk through a typical implementation of iterative software development (see Figure 8.4).

Figure 8.4. An example of iterative development


Preparation

Figure 8.4 begins with a backlog that is initially assembled at the beginning of the development effort. The backlog is a list of desirable features, constraints, needed tools, and so on. It is better used as a succinct product roadmap than as a long queue of things to do. The backlog is dynamic; that is, it can be added to or subtracted from at will based on the team's learning. Each backlog item has a rough estimate, and the total of all estimates gives a ballpark estimate of the time to complete the effort. Someone (the champion, or in Scrum, the Product Owner) is responsible for keeping the priorities of the backlog current with business needs.

Backlog items start out as large-grain bullet points, since the lean approach is to delay detailed analysis until the last responsible moment. As items near the top of the priority list, they need to be broken into smaller, more manageable pieces. A backlog item is not ready to be developed until its design is packaged as one or more stories. The people who will implement each story must understand the story clearly enough to reliably estimate its implementation effort; reliable delivery requires reliable estimates.[13] Each story should have some clear value from the perspective of the business, but the criterion for sizing a story is its implementation effort; a good story is typically one-half to four days of work. The people designing the product must decide whether the value of the story is worth its implementation cost.

[13] For details on estimating stories, see Mike Cohn's Agile Estimating and Planning, Addison-Wesley, 2005.
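One hypothetical way to picture this structure is a prioritized backlog of coarse items, each carrying a rough estimate, where only the items near the top have been broken into implementable stories. The class names and fields below are illustrative assumptions, not a prescribed design.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: backlog items carry rough estimates and are split into
// small stories only as they near the top of the priority list.
class Story {
    final String description;
    final double idealDays;   // a good story is roughly one-half to four days of work

    Story(String description, double idealDays) {
        this.description = description;
        this.idealDays = idealDays;
    }
}

class BacklogItem {
    final String feature;          // coarse-grained, expressed as a business goal
    final int roughEstimateDays;   // ballpark figure, refined later
    final List<Story> stories = new ArrayList<>();  // filled in near the top of the list

    BacklogItem(String feature, int roughEstimateDays) {
        this.feature = feature;
        this.roughEstimateDays = roughEstimateDays;
    }

    boolean readyToDevelop() {
        return !stories.isEmpty();  // not ready until it is packaged as stories
    }
}

class Backlog {
    final List<BacklogItem> items = new ArrayList<>();  // kept in priority order

    int ballparkEstimateDays() {
        return items.stream().mapToInt(item -> item.roughEstimateDays).sum();
    }
}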

Backlog items are usually features expressed in terms of business goals; high-level goals can be more like epics than stories. Team members who have a good understanding of the customers' job (Product Owners, analysts, etc.) lead the effort to break the epics into stories as they near the top of the priority list. Standard analysis techniques such as essential use cases combined with conceptual domain models, business rules and policies, and paper user interface (UI) prototypes are effective for thinking through this aspect of designing the product. If a use case is small enough, it may map to a story. Or it may take several stories over more than one iteration to realize the main scenario of a complicated use case with its associated interfaces, rules, and persistence.

The objective of iteration preparation is to design the portion of the "whole" product that will be developed next. Decisions need to be made about business rules, policies, workflow, functionality, and interface design. This design activity creates "just enough" carefully thought out stories "just-in-time" for the next iteration. A good story is a well-defined unit of implementer work, small enough so that it can be reliably estimated and completed within the next iteration. The objective is not to create extensive analysis documents for the entire product. Instead, the objective is to provide enough sample tests to make the business intent clear to the implementer.

Planning

At the beginning of an iteration there is a planning meeting. The whole team, in collaboration with the champion or Product Owner, estimates how long the top priority stories will take to develop, test, document, and deploy (or have ready for deployment). Team members pick the first story and commit to complete it. They pick a second story and decide whether they can commit to deliver this one as well. This process continues until the team members are no longer confident that they will be able to deliver the next story.[14] Team members commit to an iteration goal, which describes the theme of the feature set they agreed to implement during the iteration. No one tells the team how much work it should take on, but after a few iterations, each team establishes its velocity, which gives everyone a good idea of how much the team can complete in an iteration.

[14] Ibid.
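The selection step can be pictured as a simple loop: take stories in priority order and stop when the next one would exceed what the team's measured velocity says it can confidently finish. The sketch below is a hypothetical illustration; real teams commit by judgment and discussion, not by formula.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of iteration planning: commit to stories in priority
// order until the team's measured velocity (in story points) is used up.
record CandidateStory(String description, double points) {}

class IterationPlanner {
    static List<CandidateStory> selectCommitment(List<CandidateStory> prioritized,
                                                 double velocity) {
        List<CandidateStory> commitment = new ArrayList<>();
        double planned = 0;
        for (CandidateStory story : prioritized) {
            if (planned + story.points() > velocity) {
                break;  // the team is no longer confident it can deliver the next story
            }
            commitment.add(story);
            planned += story.points();
        }
        return commitment;
    }
}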

Team members should regard the commitment as a pledge that they will work together as a team to accomplish the goal. Occasionally, a team may over-commit, or unexpected technical difficulties may arise. When this happens, the team should adapt to the situation in an appropriate manner, but they should also look for the root cause of the overcommitment and take countermeasures so that it does not happen again.

Implementation

During the iteration, the whole team works together to meet the iteration goal. Everyone attends a 10- to 15-minute daily meeting to discuss what each member accomplished since the last meeting, what they plan to do by the next meeting, what problems they are having, and where they need help. The team interaction at this meeting provides sufficient information for individual members to know what to do next to meet the team goal without being told.

The preferred approach is story-test driven development (also called acceptance-test driven development). Working one story at a time, team members who focus on the functional details, workflow, and user interface work with the developers to express precisely what the product needs to do in terms of test cases. This discussion leads to a definition of the relevant variables and the applicable policies and business rules from the domain. Using this shared language, they define specific behaviors in terms of inputs or sequences of steps and expected results.[15] While some team members flesh out a sufficient number of test instances, the developers create fixtures to interface the test cases to the application and write the code to satisfy these test cases. When the developers need clarifications or additional detail, team members who understand the customer needs are available to discuss the issue and to help define additional example tests that record the agreements reached. The full set of example tests becomes, in effect, a very detailed, self-verifying product design specification.

[15] See Eric Evans, Domain Driven Design, Addison-Wesley, 2003.

How FIT Works

In the book Fit for Developing Software,[16] Rick Mugridge and Ward Cunningham present FIT, an open source tool that helps teams communicate with each other by creating concrete examples of what the code should do. The examples are in table format. When text is added between the tables, the result is a readable specification. Developers write "fixtures," which connect the application code to the tables. The FIT framework executes the application to verify that each behavior specified in the tables is correctly implemented. Once a fixture is written, analysts and testers can write as many test cases as they want to assure that the application is doing what is expected as its functionality grows. When FIT runs, it produces reports that are the input tables marked with green for tests that passed and red for tests that failed. This output can be kept on file for regulatory purposes or simply measured to determine how close to "done" the code is.

Tom Poppendieck


[16] Rick Mugridge and Ward Cunningham, Fit for Developing Software: Framework for Integrated Tests, Prentice Hall, 2005.
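As a concrete illustration, a FIT column fixture exposes a table's input columns as public fields and its expected-result columns as public methods; FIT fills in the fields from each table row, calls the methods, and compares the returned values with the expected cells. The business rule, class names, and figures below are assumptions made for this sketch, not an example from the book.

import fit.ColumnFixture;

// Hypothetical FIT fixture: connects a table of examples to the application code.
// Table columns "order total" and "member" are inputs (public fields);
// "discount()" is an expected-result column (public method).
public class DiscountRuleFixture extends ColumnFixture {
    public double orderTotal;   // input column
    public boolean member;      // input column

    // Expected-result column: FIT compares this return value with the table cell.
    public double discount() {
        return new DiscountCalculator().discountFor(orderTotal, member);
    }
}

// The application class under test (assumed for this sketch).
class DiscountCalculator {
    double discountFor(double orderTotal, boolean member) {
        if (member && orderTotal >= 100.0) {
            return orderTotal * 0.05;  // assumed rule: members get 5% off orders of $100 or more
        }
        return 0.0;
    }
}

Once a fixture like this exists, analysts and testers can add rows to the table, each row being another concrete example of the rule, without writing any more code.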

To implement behavior specified by a story test, developers use unit-test driven development. They start by selecting a simple behavior for the code that will contribute to the story-test objective. To implement that behavior the developers select suitably named objects, methods, and method parameters. Whenever possible they choose names consistent with the names used in the story-test. They document their design decision by writing a unit test, which fails until new code returns the intended behavior. They then write simple code to make this test pass without breaking any previous tests. The final step in the cycle is to assess the resulting code base and improve its design by refactoring it to remove any duplication, to make it simple and easy to understand, and to ensure that the intent of the code is clear. They repeat this cycle, adding each additional behavior until the selected story test passes. When the full set of story tests pass, it is time to go on to the next story.
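The cycle might look like the hypothetical JUnit fragment below: a small failing test names the intended behavior, just enough code is written to make it pass, and the result is then refactored. The class, method names, and fee rule are assumptions introduced for illustration.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical unit-test-driven step: each test is written first and fails until
// ShippingFee returns the intended behavior; the simplest code that passes is
// then written, and the result is cleaned up in the refactoring step.
public class ShippingFeeTest {

    @Test
    public void ordersOfFiftyDollarsOrMoreShipFree() {
        assertEquals(0.0, new ShippingFee().feeFor(60.0), 0.001);
    }

    @Test
    public void smallOrdersPayAFlatFee() {
        assertEquals(4.95, new ShippingFee().feeFor(20.0), 0.001);
    }
}

// Just enough code to make the tests pass; any duplication or unclear intent
// would be addressed in the refactoring step of the cycle.
class ShippingFee {
    double feeFor(double orderTotal) {
        return orderTotal >= 50.0 ? 0.0 : 4.95;
    }
}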

A story is not done until the team updates associated sections of any user, customer, or regulatory compliance documentation as far as practical. There should be no partial credit for stories. Stories don't count until their fully working code is integrated, tested, documented, and ready to deploy. At the end of each iteration, the goal is to have complete, "releasable" features. Some teams go further than this and release software into production at the end of each iteration.

Assessment

At the end of an iteration, a review meeting is held for the team to show all concerned how much value they have created. (Applause is appropriate.) If the review turns up anything that needs to be changed, small issues become new stories while larger issues go into the backlog where they will be prioritized with other items. And without further ado, the planning meeting for the next iteration commences.

Notice that we have described three nested learning cycles: one for the entire iteration, one for each story, and one for each small piece of the code (see Figure 8.5). The growing suites of story tests and unit tests express the current knowledge of how the product and the software need to work to do the customers' job. As the team learns, the test suites will need to be adapted to express the newly created knowledge. The magic is that the tests make the cost of change very low: if anyone unintentionally breaks a design decision made earlier, a test will fail immediately, alerting the team to stop the line until they fix the code and/or update the tests, which express the design.

Figure 8.5. Nested learning cycles


Variation: User Interface

In practice most organizations modify the iterative process to suit their circumstances. One special context that may require modification is user interface development. In Chapter 3 we introduced Alias (now part of Autodesk), a 3-D graphics company whose products center on user interactions. Since excellent user interaction design is a key competitive advantage of the company, most development teams are assigned two dedicated interaction designers. During every iteration, these interaction designers have several jobs:

  • Gather customer data for future iterations

  • Test the user interaction developed in the previous iteration

  • Answer questions that arise as coding proceeds in the current iteration

  • Design in detail the user interface to be coded in the next iteration

Figure 8.6 shows how development proceeds. Before a user interaction design is coded, customer data is gathered, various options are tested through prototypes, and a detailed design ready for coding is finalized. Usability testing of the production code occurs one iteration after coding.

Figure 8.6. Iterative user interaction design[17]


[17] From Lynn Miller, director of user interface development, Autodesk, Toronto, used with permission. Originally published in "Case Study of Customer Input for a Successful Product," Experience Report, Agile 2005.



