Chapter 1: Backdrop: The Science of Scrum



Overview

Software development is a complex endeavor. Of course, this news isn't very surprising because the universe is full of complexity. Most complexities we don't know about, and others we are content to leave unexamined. Some, like the complex process by which pressure turns coal into diamonds, take care of themselves. Others, for example commuting to work every day, can tolerate some imprecision. However, it is impossible to ignore complexity in software development. Its results are ephemeral, consisting merely of signals that control machines. The software development process is entirely intellectual, and all of its intermediate products are marginal representations of the thoughts involved. The materials that we use to create the end product are extremely volatile: user requirements for a program the users have yet to see, the interoperation of other programs' signals with the program in question, and the interaction of the most complex organisms on the planet, people.

This book addresses the extraordinarily difficult process of creating software. In this chapter, I'll summarize a process for increasing the probability of successfully developing software. This process, Scrum, is devised specifically to wrest usable products from complex problems. It has been used successfully on thousands of projects in hundreds of organizations over the last 10 years. It is based on industrial process control theory, which employs mechanisms such as self-organization and emergence.

This book is about the ScrumMaster, the Scrum project manager who heads the Scrum project. The ScrumMaster provides leadership, guidance, and coaching. The ScrumMaster is responsible for teaching others how to use the Scrum process to deal with every new complexity encountered during a project. Because of the nature of software development, there is no shortage of complexities, and there is no way to resolve them without hard work, intelligence, and courage.

This chapter describes how empirical processes are used to control complex processes and how Scrum employs these empirical processes to control software development projects. When I say that Scrum helps control a software development project, I don't mean that it ensures that the project will go exactly as expected, yielding results identical to those that were predicted. Rather, I mean that Scrum controls the process of software development to guide work toward the most valuable outcome possible.



Empirical Process Control

Complex problems are those that behave unpredictably. Not only are these problems unpredictable, but even the ways in which they will prove unpredictable are impossible to predict. To put that another way, a statistical sample of the operation of these processes will never yield meaningful insight into their underlying mathematical model, and attempts to create a sample can only be made by summarizing their operation to such a degree of coarseness as to be irrelevant to those trying to understand or manage these processes.

Much of our society is based on processes that work only because their degree of imprecision is acceptable. Wheels wobble, cylinders shake, and brakes jitter, but this all occurs at a level that doesn't meaningfully impede our use of a car. When we build cars, we fit parts together with a degree of precision fit for their intended purpose. We can manage many processes because the accuracy of the results is limited by our physical perceptions. For example, when I build a cabinet, I need only cut and join the materials with enough precision to make them acceptable to the human eye; if I were aiming only for functionality, I could be far less precise.

What happens when we are building something that requires a degree of precision higher than that obtainable through averaging? What happens if any process that we devise for building cars is too imprecise for our customers, and we need to increase the level of precision? In those cases, we have to guide the process step by step, ensuring that the process converges on an acceptable degree of precision. In cases where convergence doesn't occur, we have to make adaptations to bring the process back into the range of acceptable precision levels. Laying out a process that will repeatably produce acceptable quality output is called defined process control. When defined process control cannot be achieved because of the complexity of the intermediate activities, something called empirical process control has to be employed.

It is typical to adopt the defined (theoretical) modeling approach when the underlying mechanisms by which a process operates are reasonably well understood. When the process is too complicated for the defined approach, the empirical approach is the appropriate choice.

– B. A. Ogunnaike and W. H. Ray,
Process Dynamics, Modeling, and Control [1]

We use defined processes whenever possible because with them we can crank up unattended production to such a quantity that the output can be priced as a commodity. However, if the commodity is of such unacceptable quality as to be unusable, the rework is too great to make the price acceptable, or the cost of unacceptably low yields is too high, we have to turn to and accept the higher costs of empirical process control. In the long run, making successful products the first time using empirical process control turns out to be much cheaper than reworking unsuccessful products using defined process control.

There are three legs that hold up every implementation of empirical process control: visibility, inspection, and adaptation. Visibility means that those aspects of the process that affect the outcome must be visible to those controlling the process. Not only must these aspects be visible, but what is visible must also be true. There is no room for deceiving appearances in empirical process control. What does it mean, for example, when someone says that certain functionality is labeled "done"? In software development, asserting that functionality is done might lead someone to assume that it is cleanly coded, refactored, unit-tested, built, and acceptance-tested. Someone else might assume that the code has only been built. It doesn't matter whether it is visible that this functionality is done if no one can agree what the word "done" means.
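One way to remove this ambiguity is to make the team's shared definition of "done" explicit and checkable. The sketch below is a minimal, hypothetical illustration (the criteria and function names are illustrative examples, not prescribed by Scrum): functionality only counts as done when every agreed criterion is met, so nobody can quietly redefine the word.

```python
# A minimal sketch of making a shared definition of "done" explicit.
# The criteria and names here are hypothetical examples, not part of Scrum itself.

DEFINITION_OF_DONE = [
    "cleanly coded",
    "refactored",
    "unit-tested",
    "built",
    "acceptance-tested",
]

def is_done(completed_criteria):
    """Functionality is 'done' only when every agreed criterion has been met.

    Returns a (done, missing) pair so the missing criteria stay visible.
    """
    missing = [c for c in DEFINITION_OF_DONE if c not in completed_criteria]
    return (len(missing) == 0, missing)

# "The code has only been built" is visibly not done under this definition:
done, missing = is_done({"cleanly coded", "unit-tested", "built"})
```

The point of returning the `missing` list rather than a bare boolean is the visibility leg itself: what is still outstanding is reported truthfully instead of being hidden behind a single yes/no answer.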

The second leg is inspection. The various aspects of the process must be inspected frequently enough that unacceptable variances in the process can be detected. The frequency of inspection has to take into consideration that processes are changed by the very act of inspection. Interestingly, the required frequency of inspection often exceeds the tolerance to inspection of the process. Fortunately, this isn't usually true in software development. The other factor in inspection is the inspector, who must possess the skills to assess what he or she is inspecting.

The third leg of empirical process control is adaptation. If the inspector determines from the inspection that one or more aspects of the process are outside acceptable limits and that the resulting product will be unacceptable, the inspector must adjust the process or the material being processed. The adjustment must be made as quickly as possible to minimize further deviation.
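The three legs can be pictured as a simple feedback loop. The following sketch is a toy numeric analogy of my own (not an algorithm from process control literature): a process drifts unpredictably each cycle, its state is fully visible, it is inspected every cycle, and it is adapted back toward the target the moment it exceeds the acceptable limits.

```python
def run_empirical_process(target, tolerance, drift, cycles):
    """Toy sketch of empirical process control on a drifting numeric process.

    visibility: the state is held openly and measured directly;
    inspection: the deviation from target is checked every cycle;
    adaptation: any out-of-tolerance deviation is corrected immediately.
    """
    state = target
    for _ in range(cycles):
        state += drift()                # the process changes unpredictably
        deviation = state - target      # inspection: measure the visible state
        if abs(deviation) > tolerance:  # outside acceptable limits?
            state -= deviation          # adaptation: correct as soon as detected
    return state
```

Because every cycle ends with the deviation at or below the tolerance, the final state is guaranteed to sit within the acceptable range no matter how the drift behaves; the frequent inspect-and-adapt cycle, not a predicted plan, is what keeps the outcome acceptable.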

Let's take code review as an example of an empirical process control. The code is reviewed against coding standards and industry best practices. Everyone involved in the review fully and mutually understands these standards and best practices. The code review occurs whenever someone feels that a section of code or code representing a piece of functionality is complete. The most experienced developers review the code, and their comments and suggestions lead to the developer adjusting his or her code.

[1] (Oxford University Press, 1992), p. 364