Mapping the Value Stream


As we mentioned in Chapter 2, Taiichi Ohno summed up the Toyota Production System thus: "All we are doing is looking at the timeline from the moment a customer gives us an order to the point when we collect the cash. And we are reducing that timeline by removing the non-value-added wastes."[15] The timeline that Ohno mentions can be drawn as a value stream map, a diagnostic tool frequently used in lean initiatives. We like to use the same timeline diagnostic in a development environment, but we change the start and stop points to reflect the different way a customer interacts with development.

[15] Taiichi Ohno, Toyota Production System: Beyond Large-Scale Production, Productivity Press, 1988, p. 6.

Value stream maps always begin and end with a customer. In development, the clock starts on a value stream map when a customer places an order; exactly what this means will differ from one organization to the next. The clock stops when the solution is successfully deployed (or launched), solving the customer's problem. The value stream map is a timeline of the major events that occur from the time the clock starts until it stops.

The objective of lean is to reduce the development timeline by removing non-value-adding wastes. Value stream maps have proven remarkably effective at exposing waste, because delays in the flow are almost always a sign of significant waste. By looking for long delays (which indicate queues) and loop-backs (which indicate churn), a clear picture of the waste in the process emerges. In our classes, small teams do rough value stream maps in half an hour. Even though these maps are rough guesses at reality, they are surprisingly useful in helping people understand the major issues in their processes.

Preparation

Choose a Value Stream

The first step in developing a value stream map is to decide what to map. The ideal is to map a process, not a single event, but this can be difficult in development. A good alternative is to map a single project that is representative of an "average" project or a class of projects. In choosing a value stream to map, group similar types of development together. For example, you might map how long it takes to go from product concept to launch of a medium-sized product. Or you might do a timeline for adding a high-priority new feature to an existing application.

Most software maintenance departments already understand how to group similar types of development together. Typically they divide maintenance requests into three categories and guarantee a response time by category. For example, they may guarantee resolution of an extremely urgent problem within two hours and an important problem within one day, while a routine problem gets relegated to the next biweekly release. Software maintenance organizations with service level agreements might teach us a lesson or two about value stream maps.

Choose When to Start and Stop the Timeline

The first question to answer is when to start the timeline. In new product development, it is typical to start the clock when a product concept is approved. However, this does not take into account the fuzzy front end of product development, so you might want to start the clock earlier, for example, when marketing recognizes the need. You should not start the product development clock any later than approval of product concept, even if your particular organization does not get involved until later. If you are working with embedded software, you should ideally start the clock with the main product, not the software portion. The objective is to draw a value stream map from concept to cash: Start the timeline either when a customer need is identified or when the organization commits to developing a product, and stop the timeline when the product is launched.

When software is developed in response to customer requests, the timeline should generally be started when a request is submitted, assuming that the request is the equivalent of placing an order. Usually you would not wait to start the timeline until a feature is approved, because in most cases the approval process should be included in the value stream. You are looking at value from the customers' point of view, and customers do not care about other requests or how busy you are; they care about how long it takes for you to act on their request. Back up one step and see if you can start the process from the customers' perspective. How do customers place an order?

Identify the Value Stream Owner

You can map a value stream without an owner, but you will get a lot more mileage out of value stream mapping if the exercise is led by the value stream owner. When we do value stream maps in our classes, there usually is no value stream owner. We find that the biggest problems always occur at organizational boundaries, where no one is responsible for the customers' request and people on each side of the boundary try to optimize their own local efficiency. The request might languish in a queue before it gets approved, wait for ages to move from one function to another, or sit forever waiting for deployment. But unless there is a value stream owner responsible for the customers' request throughout the system, no one seems empowered to tackle these sources of waste.

Keep It Simple

Value stream maps are diagnostic tools to help you find waste; all by themselves they usually don't add a lot of value. They help you put on your customers' glasses and look at your development process through their eyes. They have a tendency to change your perspective and start up useful conversations. They are a good starting point for finding and eliminating waste.

Your objective is to map maybe ten or so major steps, from customer order to customer satisfied, on one or two sheets. It is more important to go end to end, from concept to cash, than to go into detail on one small area. Once you have finished the map, answer two questions:

1. How long does it take to get a product developed or to fill a customer request? (You are looking for elapsed time, not chargeable hours.)

2. What percent of that elapsed time is spent actually adding value? (This is called process cycle efficiency;[16] a small worked example follows.)

[16] See Michael George and Stephen Wilson, Conquering Complexity in Your Business: How Wal-Mart, Toyota, and Other Top Companies Are Breaking Through the Ceiling On Profits and Growth, McGraw-Hill, 2004, p. 29.
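Process cycle efficiency is simply the value-adding time divided by the total elapsed time. As a quick illustration (a sketch with invented numbers, not drawn from any of the examples that follow), the calculation looks like this in Python:

    def process_cycle_efficiency(value_adding_hours, total_elapsed_hours):
        """Fraction of elapsed time spent actually adding value."""
        return value_adding_hours / total_elapsed_hours

    # Invented illustration: 6 hours of real work inside a 120-hour (15-day) timeline.
    print(f"{process_cycle_efficiency(6, 120):.0%}")  # -> 5%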

We have seen many different value stream map formats, every one of them acceptable because they all generate a lot of insightful discussion about waste. You might make some notations about capacity or defect rates if that helps identify waste. Just don't lose sight of the purpose of value stream maps in the process of creating them: Learn to see waste so that you can eliminate it.

Examples

Probably the best way to understand value stream mapping is to just do it. We find that people in our classes can dive in and create useful maps with a minimum of instruction. However, the value of the maps lies not in creating them, but in diagnosing what they are telling us. So we will sketch some maps that are typical of ones we have seen in our classes and discuss their implications.[17]

[17] Note that the maps are always hand drawn. We are using a text editor for clarity, but we recommend flip chart paper and pens for value stream maps.

Example 1

Example 1 (Figure 4.3) shows a value stream map of a small, high-priority feature change, based on a map drawn in one of our classes. The customer's request comes in by e-mail to a supervisor, who approves it in an average of two hours. It takes another two hours for a brief technical assessment, at which time it is assigned to a developer. The person describing this map to us then said: "Of course, a developer is available because this is high priority." Thus the development starts within an hour, and the two hours of work is completed promptly. It immediately goes to final verification and is rapidly deployed. Bottom line: A small, high-priority feature takes an average of eight hours from request to deployment. Two hours and forty minutes are spent actually working on the request, one-third of the eight-hour total, giving a process cycle efficiency of 33 percent.

Figure 4.3. Value stream map of a small, high-priority feature change request, Organization A


Many software maintenance organizations have processes similar to this. It is a very efficient process, as will be seen by comparison to the next example. But there is room for improvement, because the request waits two hours for technical assessment. If the developer could also do this assessment, an average of two hours could be saved.
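To make the arithmetic of Example 1 concrete, here is a minimal sketch of the same timeline as alternating wait and work times. Only the totals come from the text (eight hours elapsed, two hours and forty minutes of value-adding work, 33 percent process cycle efficiency); the per-step split is our assumption:

    # Example 1 as (step, wait_minutes, work_minutes) entries.
    # Per-step splits are assumed; only the totals match the text.
    steps = [
        ("approve request",      115,   5),
        ("technical assessment", 105,  15),
        ("code and test",         60, 120),
        ("final verification",     0,  15),
        ("deploy",                40,   5),
    ]

    elapsed = sum(wait + work for _, wait, work in steps)
    value_adding = sum(work for _, _, work in steps)
    print(f"elapsed: {elapsed / 60:.0f} hours")                       # 8 hours
    print(f"value-adding: {value_adding} minutes")                    # 160 minutes
    print(f"process cycle efficiency: {value_adding / elapsed:.0%}")  # 33%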

Example 2

Example 2 (Figure 4.4) is a value stream map for a request of about the same size as the request in Example 1: a simple feature change that takes about two hours to code and test. However, it takes more than six weeks to complete the work. From the customers' viewpoint, it takes an extra 15 minutes to write up a request because a standard form must be used that requires a lot more information. Since requests are reviewed once a week, the request waits an average of a half week before approval. Then the request waits an average of two weeks for one of the scarce architects, and after a technical review, it waits an average of two more weeks for developers to become available. After two hours of coding and testing, the request waits for an average of a week, because releases are scheduled once every two weeks. Just before release there is a final verification. Even though the code was thoroughly tested when it was written, some code added to the release package in the last week has introduced a defect into the feature, which went undetected until final verification. So it takes four hours to fix and retest the release, which, you will notice, is twice the time it took to write and test the code in the first place. Since verification took only 15 minutes in the previous example, the other three hours and forty-five minutes are waste introduced by the process. Finally, everything is ready to deploy, but it takes an average of another half week for the original requestor to get around to using the new feature in production.

Figure 4.4. Value stream map of a small, high-priority feature change request, Organization B


Examples 1 and 2 are close approximations of real value stream maps that were done in the same class, so we were able to ascertain that the environments and problems were quite similar. Organization B agreed that if they had been using the process of Organization A, they could have completed their request in about a day.

There are two lessons to take away from these two examples. First, even though developers in both organizations are equally busy, Organization A was organized so that there were always some developers available to drop low-priority work to tackle high-priority requests. On the other hand, Organization B was focusing so hard on full resource utilization that the request had to wait twice in queues that were two weeks long. As we will see in Chapter 5, chasing the phantom of full utilization creates long queues that take far more effort to maintain than they are worth, and actually decreases effective utilization. Organization A was able to eliminate the overhead of maintaining queues by using low-priority tasks to create slack and scheduling high-priority tasks Just-in-Time.
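Chapter 5 takes up the queueing argument in detail; as a rough illustration of our own (a standard single-server queueing approximation, not a model taken from either organization), the sketch below shows why waiting time explodes as utilization approaches 100 percent:

    # Rough M/M/1-style approximation: average wait ~ rho / (1 - rho) * service time.
    # The numbers are hypothetical; the point is the shape of the curve.
    service_time_hours = 2  # a small feature takes about two hours of work

    for utilization in (0.50, 0.80, 0.90, 0.95, 0.99):
        wait_hours = utilization / (1 - utilization) * service_time_hours
        print(f"utilization {utilization:.0%}: average wait ~ {wait_hours:.0f} hours")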

The second lesson is that periodic releases force us to accumulate batches of undeployed software. The biweekly releases of Organization B encouraged the development team to accumulate code changes for two weeks before integration testing. This is a mistake. Even if releases are periodic, integration testing should be done much more frequently. The goal is that the final test should not find defects; they should have been found earlier. If you routinely find defects at final testing, then you are testing too late.

It's pretty clear that the overhead of the queues maintained by Organization B, as well as the wasteful big-bang integration at the end, completely overwhelmed any phantom utilization advantage of their batch-and-queue approach.

Example 3

In Example 3 (Figure 4.5) we examine the value stream map of a fast-track project similar to one we saw in a class. Due to competitive pressure, the company was in a hurry to get features developed, so the features were divided into small projects and assigned to teams to develop. The analysts, developers, and testers worked closely together, so each team rapidly completed its task and sent its well-tested code to the quality assurance (QA) department for final verification. At this point the project teams were assigned new features, and they paid no attention to what happened next. But upon probing, we found that one person in the room knew what happened next: The QA department spent two to three months merging the multiple branches and resolving incompatibilities. This waste was more or less invisible, because it occurred after a handoff between geographically separate organizations, and no one seemed to be responsible for the software after it left the site for QA.

Figure 4.5. Value stream map of fast-track project split into multiple branches


Later in the class we did an exercise where we scored the basic disciplines of the company. (See exercise 4 at the end of Chapter 8.) All four groups in the company rated their configuration management discipline as 2 or lower on a scale of 0 to 5. We had never seen configuration management ratings so low. It turned out that most of the developers knew the configuration management system was not capable of tracking branches correctly, yet the organization was branching more aggressively than any company we had seen. As might be expected, merging the branches took far longer than the coding itself.

In this case, the value stream owner was in the room, and when we did a future value stream map, he sketched how things were going to change. Since the branching was causing such problems, it would be abandoned and replaced with continuous integration tools, which already existed. He expected that the two to three months of integration testing could be reduced to a day or two.

Example 4

In almost every class, we see at least one value stream map similar to Example 4 (Figure 4.6). In this map, a request makes its way to a monthly review meeting after two months of analysis and waiting, only to be rejected twice before it finally obtains approval. This is symptomatic of a long queue of waiting work, so we usually ask how long that queue really is. After all, each project in the queue has been estimated, and it's easy to total up the numbers. We find that initial queues of undone work can be years long. We invariably suggest that most of the work in this queue should be abandoned and that the queue size be limited to an amount of work that the organization can reasonably expect to do in the near term. After all, there is usually no danger of running out of work. Maintaining queues of several years of work serves no purpose other than to waste reviewers' time and build up false expectations on the part of requesters. Investing time estimating projects that will never get done is also a waste.
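Totaling up such a queue takes only a few minutes. The sketch below is hypothetical, but it shows the kind of arithmetic we have in mind: sum the estimates already attached to the queued requests and divide by the capacity the organization can realistically devote each month:

    # Hypothetical approval queue: each entry is an estimate in person-months.
    estimates_person_months = [3, 1, 6, 2, 4, 8, 2, 5, 3, 7, 2, 4]
    capacity_person_months_per_month = 2.0

    backlog = sum(estimates_person_months)
    months_queued = backlog / capacity_person_months_per_month
    print(f"{backlog} person-months queued = {months_queued:.1f} months of waiting work")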

Figure 4.6. Value stream map of medium-sized project in overloaded department


In Example 4, detailed requirements are not done until the project is approved, but once they are completed and approved by the department, there is still a two-month wait before a development team is assigned. The team is not dedicated to the project, so it takes six months to complete about two months of coding. In addition, testing does not occur until after coding, so there are three cycles of testing and fixing defects. Once the feature passes its tests, it has to wait for the next release, which occurs every six months, resulting in an average wait of three months. When the tested modules are finally merged, it takes two months to fix all of the defects that surface between feature sets, since there was no continuous integration and ongoing testing as new features were added to the release.

Finally the software is ready to show to the users, a few months short of two years after they requested it. Not surprisingly, many of their requirements have changed. In fact, by the time the software is ready to release, a quarter of the high-level requirements are no longer valid, and half of the detailed requirements have changed. The development and testing teams interrupt current projects to change the code to do what the users really want, which usually takes only a couple of weeks. But when they are done, the people in operations are not ready to deploy the software, so it takes about two more weeks before the feature is available in production. Twenty-one months have elapsed since the feature was requested, and we might generously say that four months of that time was spent adding value, giving a 19 percent process cycle efficiency. A less generous view of how much of the time was really spent adding value would yield a far lower process cycle efficiency.

Unfortunately, we see this kind of value stream map all the time. We hear that organizations are overloaded with work, but when we look at the value stream maps we see huge opportunities to increase the output of the organization by eliminating waste. In this example, if we just look at the churn, we see that 50 percent of the detailed requirements are changed. Requirements churn is a symptom of writing requirements too early. Requirements should be done in smaller chunks, much closer to the time they will be converted to code, preferably in the form of executable tests. In this example we also see churn in test and fix cycles, both during initial development and upon integration. This is a symptom of testing too late. Code should be tested immediately and then continuously plugged into a test harness to be sure no defects appear as other parts of the code base are changed.
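By "plugged into a test harness" we mean something as simple as an automated regression test written alongside the change and run on every integration from then on. The function below is hypothetical, purely to show the shape of such a test:

    import unittest

    # Hypothetical feature code; in practice this is the change just implemented.
    def apply_discount(price, percent):
        """Return the price reduced by the given percentage."""
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        """Written when the code is written, then run on every integration."""

        def test_ten_percent_discount(self):
            self.assertEqual(apply_discount(200.00, 10), 180.00)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

    if __name__ == "__main__":
        unittest.main()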

Finally, we see that the six-month release cycle, which is supposed to consolidate efforts and reduce waste, only serves to introduce more waste. Features have to wait an average of three months after development to be deployed, giving them plenty of time to grow obsolete and get out of sync as new features are added. Meanwhile developers are off on other projects, so when they have to fix these problems, it takes time to get reacquainted with the code. The two-week delay before deployment could also be reduced if people from operations were involved earlier in the process.

Diagnosis

Value stream maps are a timeline of the steps from concept to launch or from feature request to deployed code. They should depict average times for the typical steps in a process. Once the map is done, the first things to look for are churn and delays. Churn indicates a timing problem. Requirements churn indicates that requirements are being detailed too soon. Test-and-fix churn indicates that tests are being developed and run too late.

Delays are usually caused by long queues, indicating that too much work has been dumped into the organization. As we will see in Chapter 5, things move through a system much faster when work is limited to the capacity of the organization to complete it. Delays can also indicate an organizational boundary: A delay occurs when work is handed off to an organization that is not ready for it. The best cure for this is to get people from the receiving organization involved well before the handoff occurs.
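If the map's steps are captured with their wait and work times, this first-pass diagnosis can even be mechanical: flag the longest queues and any loop-backs. A minimal sketch, using invented data rather than any of the examples above:

    # Each step: (name, wait_days, work_days, loops_back). The data is invented.
    steps = [
        ("approval",              14,  0.5, False),
        ("detailed requirements",  5,  3.0, False),
        ("development",           20, 10.0, False),
        ("test and fix",           2,  5.0, True),   # loop-back: churn
        ("release",               45,  0.5, False),
    ]

    longest_queues = sorted(steps, key=lambda s: s[1], reverse=True)[:2]
    churn = [name for name, _, _, loops_back in steps if loops_back]

    print("longest delays:", [(name, wait) for name, wait, _, _ in longest_queues])
    print("churn (loop-backs):", churn)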

There are other sources of waste that will be exposed by value stream maps: failure to synchronize, an arduous approval process, lack of involvement of operations and support. But these will show up only if you map the end-to-end process from concept to cash.

Future Value Stream Maps

Every organization we encounter has more work than it can possibly do. However, we generally find that far more work can be done, faster and with higher quality, by simply removing the enormous waste seen in most value stream maps. Toward the end of our classes we ask each group to create a future value stream map, using lean principles to redesign their process. We ask that the maps depict a process that is practical for the organization to implement in a three- to six-month timeframe. No matter where the group started with their current value stream map, the future maps invariably show improvements in process cycle efficiency and overall cycle time in the range of 50 percent to 500 percent.

Current value stream maps are relatively useless unless they are used to find and eliminate waste. Drawing a future value stream map is a good way to create a plan for removing the biggest wastes. However, we caution that a future map should not be an ideal map. It should show the path for immediate improvement. Pick the biggest delay, the longest queue, or the worst churn and address it first. Draw a new map that shows where your organization can reasonably expect to be in three to six months with one to three key changes. Once those changes are made, it's time to draw a new current value stream map to pinpoint the next most important areas to address.



