The Seven Wastes


Everyone who has studied lean manufacturing has learned Shigeo Shingo's seven wastes of manufacturing.[11] In our previous book, we translated these seven wastes into the seven wastes of software development. In this section we revisit that translation, making a few changes presented in Table 4.1. Finding waste is not about putting waste in a category; categories help reinforce the habit of seeing waste. The real purpose of discovering and eliminating waste is to reduce costs and make our products more effective.

[11] Shigeo Shingo, Study of "Toyota" Production System from an Industrial Engineering Viewpoint, Productivity Press, 1981, Chapter 5.

Table 4.1. The Seven Wastes

Manufacturing            Software Development
In-Process Inventory     Partially Done Work
Over-Production          Extra Features
Extra Processing         Relearning
Transportation           Handoffs
Motion                   Task Switching
Waiting                  Delays
Defects                  Defects


Partially Done Work

The inventory of software development is partially done work. The objective is to move from the start of work on a system to integrated, tested, documented, deployable code in a single, rapid flow. The only way to accomplish this is to divide work into small batches, or iterations.

Examples of Partially Done Work

  1. Uncoded Documentation: The longer design and requirements documents sit on the shelf, the more likely they are to need changing. Development teams are often annoyed when requirements change, but from a customer perspective the real problem is that the requirements were written too soon.

  2. Unsynchronized Code: When code is checked out to personal workspaces, or branches are created for parallel development, these workspaces and branches will almost always have to be merged back together. Workspaces and parallel code lines should be synchronized as frequently as possible, because the longer they are separate, the more difficult it will be to merge them.

  3. Untested Code: Writing code without a way to detect defects immediately is the fastest way to build up an inventory of partially done work. When measuring how much code is done, there should be no partial credit. Either code is integrated, tested, and accepted, or it doesn't count.

  4. Undocumented Code: If documentation will be needed, it should be done as the code is written. Ideally, code should be self-documenting, but user documentation, help screens, and so on may also be necessary. Just as testers belong on the development team, so do technical writers. After all, these are the people who are going to help customers get their job done. A lot of extra features can be avoided if the technical writers are constantly asking the rest of the team: "And exactly how is that going to help our customers get their job done?"

  5. Undeployed Code: Not every environment can deploy code as frequently as desirable, because new software can get in the way of customers getting their current jobs done. However, there should be a bias toward deploying code as soon as possible. In fact, it's often easier for users to absorb changes in small increments; less training is required and disruption can usually be minimized.


We have a big inventory of documentation, but we can't change that because government regulations require us to have a Software Requirements Specification (SRS) and traceability to code.

If you must have an SRS and traceability, then consider writing as much of the SRS as possible in the form of executable tests. Consider using FIT (Framework for Integrated Tests) or a similar acceptance testing tool to write specifications by example.[12] A tool like this can give you traceability of tests to the code for free. If you run the tests every day and save the output in your configuration management system, you will have a record of exactly which tests passed and which ones didn't at any point in time. Regulators will love this.
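For example, an executable specification for FIT can be backed by a small Java fixture along the lines of the sketch below. The names (DiscountFixture, orderTotal, discount) are hypothetical, not from any particular project; in the FIT table, a column headed "orderTotal" fills in the public field, and a column headed "discount()" is checked against the method's return value, row by row.

    import fit.ColumnFixture;

    // A minimal sketch of a FIT column fixture (hypothetical names).
    public class DiscountFixture extends ColumnFixture {
        public double orderTotal;      // input column: "orderTotal"

        public double discount() {     // expected-value column: "discount()"
            // A real fixture would delegate to the production code under test.
            return orderTotal >= 100 ? orderTotal * 0.05 : 0.0;
        }
    }

Each row of the table then serves as both a requirement and a test: if the code stops producing the specified discount, the daily test run flags that row.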


[12] See Fit for Developing Software: Framework for Integrated Tests, by Rick Mugridge and Ward Cunningham, Prentice Hall, 2005. See also www.fitnesse.org. Chapter 7 begins with a case study involving FIT and FitNesse.

Extra Features

Taiichi Ohno emphasized that overproduction, making inventory that is not needed immediately, is the worst of the seven wastes of manufacturing. Similarly, the worst of the seven wastes of software development is adding features that are not needed to get the customers' current job done. If there isn't a clear and present economic need for the feature, it should not be developed.

Does this mean we can't anticipate what we know has to be done?

We are often asked this question. We offer these guidelines:

  1. Use your common sense. Remember these are guidelines.

  2. Focus on the customer's job. If it takes several iterations to create the software needed to do a job, features that are absolutely necessary to do that job are not extra features.

  3. Be strongly biased against adding features. If there is any question about the need for a feature, it's premature. Wait.

  4. Creating an architectural capability to add features later rather than sooner is good. Extracting a reusable services "framework" for the enterprise has often proven to be a good idea. Creating a speculative application framework that can be configured to do just about anything has a track record of failure. Understand the difference.


Relearning

Recently we were participating in a panel discussion about how agile software development affects customers when someone asked, "Are there any customers in the room? We should hear the perspective of real customers." One lonely person raised his hand. He was asked, "What makes agile development challenging for you?"

"I think," he said, "that the biggest problem I have is remembering what decisions I have already made and what things I have already tried, so I tend to try them over again."

Rediscovering something we once knew and have forgotten is perhaps the best definition of "rework" in development. We know we should remember what we have learned. Nevertheless, our approach to capturing knowledge is quite often far too verbose and far less rigorous than it ought to be. The complex topic of creating and preserving knowledge will be addressed further in Chapter 7.

Another way to waste knowledge is to ignore the knowledge people bring to the workplace by failing to engage them in the development process. This is even more serious than losing track of the knowledge we have generated. It is critical to leverage the knowledge of all workers by drawing on the experience that they have built up over time.

We have tried to document all of our design decisions as they are made. The problem is, this documentation is never looked at again.

This is a common problem, and it is the reason many organizations have stopped documenting design decisions altogether. The knowledge gained from trying things that do not work can be the most relevant knowledge for solving your problem, but capturing that knowledge in a manner that is easy to reference later is a challenge. Start by asking: Is what you are doing right now working? If not, then don't keep on doing it. Instead, revisit your overall objectives and work with your team to devise a method of preserving knowledge that will accomplish those objectives in the most efficient and effective manner. See Chapter 7 for some ideas.


Handoffs

There's nothing quite like teaching a child how to ride a bicycle. First she has to learn balance while moving, so you run along beside her holding the bike lightly until she gets the hang of it. That's pretty tiring, so the moment she has just a bit of the feeling of balance you quickly teach her how to get started. The first time she successfully pedals away from you, it suddenly dawns on you that you have forgotten to teach her how to stop! A few more mishaps, and the child is on her own. A couple of hours later you are amazed at how confidently she is hurtling down the path, starting easily and stopping with brakes screeching.

Handoffs are similar to giving a bicycle to someone who doesn't know how to ride. You can give them a big instruction book on how to ride the bike, but it won't be much help. Far better that you stay and help them experience the feeling of balance that comes with gaining momentum. Then give some pointers as they practice starting, stopping, turning, going down a hill, going up a hill. Before long your colleague knows how to ride the bike, although she can't describe how she does it. This kind of knowledge is called tacit knowledge, and it is very difficult to hand off to other people through documentation.

When work is handed off to colleagues, a vast amount of tacit knowledge is left behind in the mind of the originator. Consider this: If each handoff leaves 50 percent of the knowledge behind (a very conservative estimate) then:

  • 25 percent of the knowledge is left after two handoffs,

  • 12 percent of the knowledge is left after three handoffs,

  • 6 percent of the knowledge is left after four handoffs, and

  • 3 percent of the knowledge is left after five handoffs.

Because tacit knowledge is so difficult to communicate, handoffs always result in lost knowledge; the real question is how to minimize that waste.
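To see how quickly the compounding adds up, here is a quick sketch (our own illustrative arithmetic, not a measurement) that assumes a flat 50 percent of the remaining knowledge is lost at each handoff:

    // Knowledge retained after a chain of handoffs, assuming each handoff
    // leaves half of the remaining tacit knowledge behind.
    public class HandoffLoss {
        public static void main(String[] args) {
            double retained = 1.0;
            for (int handoff = 1; handoff <= 5; handoff++) {
                retained *= 0.5;
                System.out.printf("after %d handoff(s): %.1f%% of the knowledge remains%n",
                        handoff, retained * 100);
            }
        }
    }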

Some Ways to Reduce the Waste of Handoffs

  1. Reduce the number of handoffs.

  2. Use design-build teams (complete, cross-functional teams) so that people can teach each other how to ride.

  3. Use high bandwidth communication: Documents leave virtually all tacit knowledge behind. Replace them with face-to-face discussion, direct observation, interaction with mock-ups, prototypes, and simulations.

  4. Release partial or preliminary work for consideration and feedback as soon as possible and as often as practical.


Task Switching

Software development requires a lot of deep concentrated thinking in order to get one's arms around the existing complexity and correctly add the next piece of the puzzle. Switching to a different task is not only distracting, it takes time and often detracts from the results of both tasks. When knowledge workers have three or four tasks to do, they will often spend more time resetting their minds as they switch to each new task than they spend actually working on it. This task switching time is waste.

Furthermore, trying to do multiple tasks at the same time usually doesn't make sense. Assume that you have three tasks to do, tasks A, B, and C. Assume for the sake of argument that each task takes a week. If you do the tasks one at a time, then at the end of the first week, task A will be done and begin delivering value. At the end of the second week, task B will also be delivering value, and at the end of the third week, you will be done with all three tasks and a lot of value will already have been realized.

But let's say that you decide to work on all three at once, perhaps to make the customer for each task feel that their work is important to you. Figure 4.2 shows the best-case outcome for this scenario: each task is divided into eight even parts, and a minimal amount of time is spent on task switching. Even in this ideal case, none of the tasks will be done at the end of three weeks. In addition to the extra time needed to complete the tasks, the potential value they could have delivered by being done earlier is also wasted.

Figure 4.2. Switching between three one-week tasks means none of them get done in three weeks.
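A quick back-of-the-envelope comparison (our own sketch, not the data behind Figure 4.2) makes the same point even when switching itself costs nothing:

    // Three one-week tasks: one at a time vs. round-robin in 1/8-week slices
    // with zero switching overhead, the best possible case for multitasking.
    public class TaskSwitchingSketch {
        public static void main(String[] args) {
            int tasks = 3, slicesPerTask = 8;
            double sliceWeeks = 1.0 / slicesPerTask;

            System.out.println("One task at a time:");
            for (int t = 0; t < tasks; t++) {
                System.out.printf("  task %c done after %.2f weeks%n",
                        (char) ('A' + t), t + 1.0);
            }

            System.out.println("Round-robin across all three:");
            for (int t = 0; t < tasks; t++) {
                // The task's final slice falls in the last round of the rotation.
                int lastSlot = (slicesPerTask - 1) * tasks + (t + 1);
                System.out.printf("  task %c done after %.2f weeks%n",
                        (char) ('A' + t), lastSlot * sliceWeeks);
            }
        }
    }

Even with no switching cost at all, task A is not finished until nearly the end of week three instead of the end of week one, so almost two weeks of the value it could have been delivering are lost; add any real switching overhead and none of the three tasks finishes within three weeks, which is what Figure 4.2 illustrates.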


We don't want a separate maintenance team, so we have to have task switching.

Any code that is deployed will need maintenance, and sometimes separate maintenance teams are formed so that developers can focus on development and not have to task-switch with maintenance tasks. However, we generally recommend against this, because we believe that it is best for a team to remain with its product over the product's lifecycle. Otherwise people may begin to believe that there is such a thing as "finishing" the code, which is usually a myth. Code is a living thing that will (and should!) constantly change.

Having a development team support its own code means that the development team will have to task-switch to handle support. Of course, this provides great motivation to deliver defect-free code, so the team can concentrate on new development.

Still, there will be support demands that distract developers, if for no other reason than that what customers think of as "defects" may have been "features" in the developers' minds. Here are some approaches others have used to minimize the disruptions caused by the need to support code:

  1. Have two people rotate off the team every month or every iteration to handle all maintenance for the duration.

  2. Set aside two hours each morning where the team jointly handles all issues that developed over the last 24 hours. Then have the daily status meeting and focus on new development for the rest of the day.

  3. Aggressively triage support requests and do only urgent work on an immediate basis. Set aside a period of time every week or two to address outstanding maintenance issues. If the period is short enough, most requests can wait, and some will even disappear. This technique helps level the maintenance workload.[13]

  4. Keep all customers on a single code base, and release a new version every week. Handle any and all problems with this release and require anyone with a problem to upgrade to the current version.


[13] We would like to thank Bent Jensen for mentioning this approach, which he finds very useful.

Delays

Waiting for people to be available who are working in other areas is a large cause of the waste of delay. Developers make critical decisions about every 15 minutes, and it's naive to think that all the information necessary to make these decisions is going to be found in a written document. A decision can be made quickly if the developer has a good understanding of what the code is supposed to accomplish, and if there is someone in the room who can answer any remaining questions. Lacking that, developers have three options: stop and try to find out the answer, switch to some other task, or just guess and keep on going. If the hassle factor of finding out the answer is high, the developer will take the second or third course of action. If there isn't much penalty involved, developers are likely to spend a good deal of time waiting for answers before they proceed. None of these approaches is good.

Complete, collocated teams and short iterations with regular feedback can dramatically decrease delays while increasing the quality of decisions. This is not the only approach to reducing delays, but no matter where team members are physically located, it is important to make sure that knowledge is available exactly when and where it is needed: not too soon, or it will have to be changed, and not too late, or it will have to be ignored.

How can software take so long?

From a customer perspective, there are bigger delays than developers waiting for answers to questions, and these can be far more problematic.

  1. Waiting for me to know exactly what I want before they get going on solving my problem. How am I supposed to know?

  2. Waiting for months for project approval.

  3. Waiting forever to have people assigned.

  4. Waiting for the assigned people to be available.

  5. The annoying change approval process; they made me wait months, and they think nothing has changed in my business?

  6. Waiting for the whole system to be done before I can get the key features that I really need right now.

  7. Waiting for the code to pass tests; how can this take so long?

  8. Waiting for that new piece of software to stop messing up my existing programs, or vice versa; who knows?


Defects

Every code base should include a set of mistake-proofing tests that do not let defects into the code, both at the unit and acceptance test level. However, these tests can only prove that the code does what we think it should do and doesn't fail in ways that we anticipated. Somehow software still finds devious ways to fail, so testing experts who are good at exploratory testing should test the code early and often to find as many of these unexpected failures as possible. Whenever a defect is found, a test should be created so that it can never happen again. In addition, tools may be needed to test for security holes, load capability, and so on. Combinatorial test tools can also be very useful. We should attempt to find all defects as early as possible, so that when we get to final verification, we do not routinely find defects. If software routinely enters final verification with defects, then it is being produced by a defective process.

A good agile team has an extremely low defect rate, because the primary focus is on mistake-proofing the code and making defects unusual. The secondary focus is on finding defects as early as possible and looking for ways to keep that kind of defect from reoccurring.

But the real reason for moving testing to the beginning of development is deeper than mistake-proofing. Acceptance tests are best when they constitute the design of the product and match that design to the structure of the domain. Unit tests are best considered the design of the code; writing unit tests before writing code leads to simpler, more understandable, and more testable code. These tests tell us exactly and in detail how we expect the code and the ultimate product to work. As such, they also constitute the best documentation of the system, documentation that is always current because the tests must always pass.
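As a small illustration (hypothetical names, using JUnit 4), the tests below are written before the class they describe; the tiny class that follows is simply the code that makes them pass, and the tests remain as always-current documentation of the intended behavior.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Unit tests written first: they state, precisely, how priceFor should behave.
    public class PriceCalculatorTest {
        @Test
        public void ordersOfOneHundredOrMoreGetAFivePercentDiscount() {
            assertEquals(95.0, new PriceCalculator().priceFor(100.0), 0.001);
        }

        @Test
        public void smallerOrdersPayFullPrice() {
            assertEquals(99.0, new PriceCalculator().priceFor(99.0), 0.001);
        }
    }

    // The simplest production code that satisfies the tests above.
    class PriceCalculator {
        double priceFor(double orderTotal) {
            return orderTotal >= 100.0 ? orderTotal * 0.95 : orderTotal;
        }
    }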

Isn't such extensive automated testing a waste?

Shigeo Shingo says that "inspection to prevent defects" is absolutely required of any process, but that "inspection to find defects" is waste.[14] Development teams have found that preventing errors from ever being introduced and never building good code on top of bad is always faster in the end. Furthermore, a test harness makes it easy and safe to change code over time, multiplying the benefits. Finally, organizations that used to write extensive requirements specifications find that writing tests as executable specifications can take considerably less time than they used to spend on writing specifications and tracing them to code.

We can't test in the customer's unique environment, so we always find defects upon installation.

Sometimes you can't do final integration testing in a real world customer environment, so you find defects at installation. But even in this case, you should launch an improvement program which itemizes the most common instances of failure upon installation and then attacks the root causes of those instances, highest priority first, until defects at a customer site become a rarity.

Several companies we know make installation part of the development iteration: An iteration is not done until the software is running in production at a customer site. With this approach, the entire development team is engaged in the customer installation. Time is set aside to be able to respond to customer requests, and no attempt is made to distinguish defects from features; anything the customer wants is accommodated as far as practical.


[14] Shingo, Ibid., p. 288.



