Practice: Low-Cost Change


Objective

The objective of low-cost change when applied to technical practices is to reduce the cost of iterative development throughout the development and ongoing support phases of a product.

Discussion

Most technical practices are specific to the engineering domain of the product. However, there are several practices that can be applied generically to many types of products, particularly those with both hardware and software components. These generic technical practices are important to the goal of creating adaptable products: delivering customer value today and tomorrow. They are driven by the desire to keep the cost of change (the cost of experimentation) to a minimum, thereby greatly expanding the design possibilities. The four technical practices I will discuss in this section (simple design, frequent integration, ruthless testing, and opportunistic refactoring) also work in concert with each other. While there are many other technical practices, these four are critical to adaptability. [1]

[1] This is not a book on engineering (software, electronic, mechanical, or otherwise), so it doesn't include specific technical practices. However, foundational skills in the various disciplines are critical to success. Software products don't get built without good software engineering skills. Electronic instruments don't get built without good electronic engineering skills. The four practices are adapted from Extreme Programming Explained: Embrace Change (Beck 2000).

But first, let's consider the phenomenon that makes these practices necessary: technical debt.

Technical Debt

When product development and support teams give lip service to technical excellence, when project and product managers push teams beyond quickness into hurrying, a technical debt is incurred. Technical debt can arise during initial development, ongoing maintenance (keeping a product at its original state), or enhancement (adding functionality). As shown in Figure 7.2, technical debt is the gap between a product's actual cost of change (CoC) and its optimal CoC. As I've said, the twin goals of APM are delivering customer value today and tomorrow. Managing technical debt helps achieve the tomorrow goal.

Figure 7.2. Technical Debt Results from Hurrying

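Stated as a simple relation (my notation, not the book's), the debt at any point t in a product's life is the gap between the two curves in Figure 7.2:

    TechnicalDebt(t) = ActualCoC(t) - OptimalCoC(t)

The wider that gap grows, the more each subsequent change costs relative to what it should.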

For software products in particular, the actual CoC curve rises slowly at first and then accelerates rapidly after a few years. With software that has been around 10 years or more, developers are loath to touch the now "fragile" code. Constant pressure to cut the time and cost of enhancements, no allowance for periodic refactoring, and poorly maintained test data all contribute to fragility and the increased CoC, which is exemplified by companies with 10- to 15-year-old products whose QA cycles extend for a year or more. Every product, software or otherwise, has a curve such as the one in Figure 7.2, but the curves differ in shape and business implications.

The bottom line is that increasing technical debt directly reduces responsiveness to customers. Customers and product managers, internal and external, don't understand why a seemingly simple enhancement takes months and months, or even years, to implement. Yet their relentless pushing for features, features, features, faster, faster, faster is often the root cause of the problem. Without a firm dedication to long-term technical debt management, development groups are pressured into the increasing technical debt trap. As the debt gets worse, the delays become greater. As the delays lengthen, the pressure increases, usually leading to another hurried implementation, which increases the technical debt yet again.

Exiting this downward spiral is very difficult, because the longer the technical debt cycle goes on, the more expensive it is to fix. And fixing it is a political nightmare, because after spending significant time and money, the product's functionality won't be any greater than before (although defects will be reduced). The bigger the debt, the more expensive it is to fix, the more difficult it is to justify, and therefore the death spiral continues.

Conversely, there doesn't seem to be much incentive for narrowing the technical debt early in a product's lifecycle (when doing so is inexpensive), because the delays are still short. Nevertheless, the secret to long-term technical debt reduction lies in doing it early and often while the cost is low. The smaller the debt, the less expensive it is to fix, the less difficult it is to justify, and this virtuous cycle reinforces itself. Reducing technical debt, keeping the cost of change low, has to become an ingrained technical strategy, part of an organization's dedication to technical excellence.

It must be noted that managing technical debt doesn't keep products from becoming obsolete. A technical debt strategy doesn't attempt to stave off eventual obsolescence but to keep the cost of change low so that customer responsiveness remains as high as possible during a product's life.

The historical approach to this issue was to get it right the first time and then hold on tight. Hold on tight, that is, to the original design or architecture, and resist meaningful change. When the pace of change was slower, this strategy may have worked, but in most product situations today, clinging to the past and resisting change don't work. Holding the cost of change down by not changing only means that when change has to happen, neither the product nor the people will be ready for it.

"OK," you may think. "This sounds good, but I'll never get my customers or management to invest the time or money in early technical debt reduction." I have three answers to this. First, the alternative will be to lose customers to competitors who are more responsive , especially new entrants into a market who are not burdened by old products. (Their time will come, but they will also steal your customers until then.) Second, a well-functioning agile team will work faster and at lower cost when practicing technical excellence. Third, even if practicing excellence costs a little more initially, the reduction in compliance work using APM will more than pay for the difference.

Simple Design

The objective of simple design is to keep the engineering team grounded in what is known rather than anticipating the unknown.

There are two fundamental approaches to managing change: anticipation and adaptation. Good design strategies encompass aspects of both. Anticipation involves planning for the future and predicting what kinds of change are probable. Adaptation means waiting until requirements evolve or changes manifest themselves and then building them into the product. Adaptation also means experimenting, selecting those experiments with the best results, and incorporating those results into the product. The lower the cost of change, the lower the cost of experimentation, the higher the likelihood of significant innovation.

Building a tax-rate parameter into a software payroll system anticipates future changes in federal withholding rates. Componentizing electronic instruments anticipates using the instruments in configurations not currently foreseen. If there is a high probability that something will change, we should design the system to easily incorporate that change. This is a known type of future change. In hardware design a lot of effort is spent defining interfaces; hence subsystems act as black boxes and can easily be swapped in and out provided the interface does not change. Also, a good design leaves some unused bandwidth for future opportunities; for example, a backplane that contains a few unused signal and data lines. Other examples include using recognized standards and protocols, which allow greater flexibility, and using chip mounts that allow easy upgrading of CPUs/memory to the next generation. Since the cost of change is higher for hardware, adaptability often requires a modicum of anticipatory design.
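
To make the payroll example concrete, here is a minimal sketch of anticipating a known change, written in Python with hypothetical names and rates (none of this comes from the book): the withholding rate lives in a data structure that can be replaced without touching the calculation logic.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TaxTable:
        """Withholding rates supplied as data, not code. Because rate
        changes are a known type of future change, the design isolates
        them in one replaceable structure."""
        federal_rate: float

    def withholding(gross_pay: float, taxes: TaxTable) -> float:
        """Compute federal withholding from the current table."""
        return gross_pay * taxes.federal_rate

    # When the rate changes, only the data changes (rates are hypothetical):
    TAXES_OLD = TaxTable(federal_rate=0.22)
    TAXES_NEW = TaxTable(federal_rate=0.24)

    print(withholding(5000.00, TAXES_OLD))  # 1100.0
    print(withholding(5000.00, TAXES_NEW))  # 1200.0

When the anticipated change arrives, the team edits one value rather than hunting for a constant scattered through the code.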

Conversely, there are changes to the business environment that are very difficult to anticipate. For example, IT organizations immersed in developing client/server systems in the mid-1990s had little inkling of the Internet boom that would rapidly overtake those efforts. Companies that spent hundreds of millions of dollars on enterprise resource planning systems during this same time were concerned with internal integration of applications, whereas a few years later integration across companies became critical. Today, anticipating the changes that the biotechnology explosion will bring would be impossible. Dealing with these unanticipated, and often unforeseeable, changes requires adaptation.

Simple design means valuing adapting over anticipating. (Anticipating or planning isn't unimportant; it's just less important than adapting.) This means designing for what we know today and then responding to what we learn in the future. If our objective is an adaptable product, we should be able to demonstrate its adaptability during development by responding to new information. However, the extent to which this approach is useful depends on the malleability of the medium we are working in; software is very malleable (with good design), while certain types of hardware are less so. The more malleable the medium, the lower the cost of change, the easier it will be to tip the balance of anticipation versus adaptation toward the latter.
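
As a contrast to the anticipation sketch above, here is a hedged illustration in Python (hypothetical code, not from the book) of designing for what we know today: the first version handles the one report format actually required, rather than a speculative plug-in framework for formats no one has asked for.

    def format_report(rows) -> str:
        """Simple design: the one format needed today (CSV lines).

        No abstract formatter hierarchy, no plug-in registry. If a
        second format is actually requested, refactor then, with real
        requirements in hand."""
        return "\n".join(",".join(str(field) for field in row) for row in rows)

    print(format_report([("widget", 4), ("gadget", 7)]))

The design stays easy to change precisely because there is less of it to change.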

Low-cost iteration, in turn, depends on malleability, and some components, even in software systems, are not very malleable. For those components the balance of anticipation and adaptation must swing back toward anticipation. Platform (and product line) architectural decisions, for example, are often expensive and time consuming to change, and thus they should be approached from an anticipation perspective.

But even with less-malleable hardware systems, the advent of highly sophisticated simulation and modeling technology provides hardware designers with nearly as malleable an environment as software designers; the design of the Boeing 777 is a good example. Of course, changes became more expensive the moment the 777 transitioned from design to construction, but until that point, Boeing employed simple design practices (to a certain degree) through its creative and extensive use of simulation.

Practicing simple design has the added advantage of pointing out bottlenecks in the development process. Say you are building an electronic test instrument. Every time you want to change the design, quality assurance complains that integration and regression testing is so expensive and time consuming that it cannot get the cycle time for this testing under four weeks. You've just discovered a bottleneck, one that resists change by making it too expensive and time consuming. Rather than accept the QA position and forgo design changes, the better strategy would be to examine how to bring QA cost and test time down.

The effectiveness of simple design and refactoring indicates to a product team how adaptable its development process can be. Barriers to these practices are barriers to reducing the cost of change. The key question isn't "How much does it cost to implement these practices?" but "Can you afford not to implement them?" Note that simple design doesn't mean simplistic design. Often coming up with understandable, adaptable, simple designs takes additional time. Doing less, by eschewing nonessentials and focusing on customer value, can free up the time required to do better: simple design.

Frequent Integration

The objective of frequent integration is to ensure that product features fit together into an integrated whole early and often during development in order to reduce both the high cost of late misalignment and the burden of testing. No matter what the product (from software to automobiles to industrial control systems), the less frequent the integration, the more susceptible the development effort will be to major problems late in the process and the more difficult, and expensive, it will be to find and fix them.
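
On the software side, the practice can be as simple as a script every developer runs before (or a build server runs after) each commit. The following Python sketch is illustrative only; the build and test commands are placeholders for whatever steps a real project uses.

    import subprocess
    import sys

    # Placeholder steps; substitute the project's real build and test commands.
    STEPS = [
        ("build", ["make", "all"]),
        ("unit tests", ["make", "test"]),
        ("integration tests", ["make", "integration-test"]),
    ]

    def integrate() -> int:
        """Run every step, stopping at the first failure so the
        misalignment is fixed now, not at the end of the release."""
        for name, cmd in STEPS:
            print(f"== {name}: {' '.join(cmd)}")
            if subprocess.run(cmd).returncode != 0:
                print(f"Integration broken at '{name}'; fix before proceeding.")
                return 1
        print("Integration clean.")
        return 0

    if __name__ == "__main__":
        sys.exit(integrate())

The point is frequency, not tooling: the shorter the interval between integrations, the smaller and cheaper each misalignment is when it surfaces.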

Consider some common problems with embedded software in industrial products. Hardware and software components never seem to be complete at the same time. Software engineers complain that hardware isn't available, while hardware engineers have the same complaint about the software. While software simulations and hardware prototypes can ease the situation for some products, they can both be expensive and oversimplify real-world situations. One company that was developing the embedded software for a cell phone ran into frustrating problems with hardware test equipment from a major vendor, slowing its testing efforts. An oil-field service firm found that simulations can't replicate all the variations of the real world, but "live" testing is also prohibitively expensive. Operating system and computer hardware developers seem to be constantly out of phase during development. At some level, integrating hardware and software will always be challenging and the problems only partially solvable. However, development teams need to strive for frequent integration to mitigate the problems.

Ken Delcol uses this approach at MDS Sciex in developing mass spectrometers. "We have just gone through this process. Our firmware group delivered firmware to the hardware group in iterations based on its testing schedule. Once sufficient functionality was confirmed, then the software group was brought in to add applications. With this approach we didn't need a fully populated digital board to begin firmware and hardware integration testing. We achieved a number of things (the best we have ever achieved): integration testing started sooner, hence issues were resolved more quickly (better schedule and cost); integration was continuous once minimal hardware was in place, hence no peak in resources; and communication was improved because all groups participated in the integration."

While companies establish cross-functional hardware teams, many software groups continue to operate separately. Again, if the development approach is a traditional, up-front, anticipatory one, this functional separation seems to make sense: the software group has its requirements and just needs to go do it. But this functional separation can be deadly to effective integration of software and hardware.

Changing to an agile development model can improve the flexibility of products containing both hardware and software. Figure 7.3 is an adaptation of a product development model from Harvard Business School professor Marco Iansiti. The diagram indicates that hardware development typically begins with concept development, which at some point becomes frozen (at least for most purposes) because the cost and time delays of further design changes would impact parts purchasing, manufacturing equipment acquisition, or downstream manufacturing set-up and hardware implementation.

Figure 7.3. Model of Product Development Cycles (Adapted from Marco Iansiti, Technology Integration)


Software development, on the other hand, can follow either a serial or an iterative approach. In the former, concept development (architecture, design, requirements) is frozen somewhat later than in hardware development, but the approach essentially mimics the hardware model. The agile approach, rather than attempting to limit (freeze) requirements, takes advantage of software's flexibility by overlapping the concept development and implementation activities, thereby extending the "software freeze" until much later in the product development process. This creates the distinct possibility that fixes for late-discovered hardware flaws (or new requirements) can be implemented in software. Furthermore, the flexibility of software, including often-inexpensive throwaway features, can be used to advantage in the testing of key hardware components.

Because of the constant pressure on new product development cycle time, hardware engineers have been forced into earlier and earlier purchase ordering for parts. Design changes that cause alterations to parts already in the purchasing pipeline can be very expensive and time consuming. How the product is ramped up to production and the size of the initial production run will be big factors in the team's ability to change that hardware. Since there are no physical inventory considerations with software (although there are testing, configuration management, and other issues), software engineers can often make late changes cost effectively.

Iansiti provides an example of how frequently hardware problems were solved by software during a Silicon Graphics workstation development project. When hardware problems were found, software workarounds were used 70% of the time, the problems became "features" 10% of the time, a combination of hardware and software changes was used 10% of the time, and pure hardware changes were required only 10% of the time.

In serial models, "a distinct separation exists between concept development and implementation," says Iansiti. "This model works well when technology, product features, and competitive requirements are predictable." In an agile approach (which Iansiti calls a flexible approach), "the key to the process is in the ability to gather and rapidly respond to new knowledge about technology and application context as a project evolves. Technology integration capability is central to meeting the challenges of such unpredictable change" (Iansiti 1998).

In an agile approach, control occurs not by conformance to concept-driven plans, but by constant integration and testing of the evolving feature sets as they emerge during the product development process. Having a product architecture is important, but having good technology integration is vital to success. For this reason, architects need to be heavily involved in product integration.

Ruthless Testing

The objective of ruthless testing is to ensure that product quality remains high throughout the development process. [2]

[2] The term "ruthless testing" comes from colleague Kevin Tate. While a growing number of software developers use "test-first development," ruthless testing is a better generic term that is widely applicable to all types of products.

The old adage that quality can't be added on but must be built into the development process remains true. Extensive test instrumentation assists development of everything from cell-phone chips to automobile engines. Ruthless testing contributes to the goal of creating adaptable products because finding faults early, when there is still time to correct them, reduces the cost of change. When product developers wait until late in the lifecycle to test, the testing process itself becomes cumbersome rather than streamlined. Furthermore, the lack of constant testing removes a necessary feedback loop into the development process. Wait too long, and designs solidify. Then when tests are finally run, the team is unwilling to make design changes; it's too expensive now! Constant ruthless testing, including acceptance testing, challenges the development team, no matter what the product, to face the reality of how its design actually works.

In software development, ruthless testing includes software engineers performing constant unit testing, integrating quality assurance and acceptance testing into each development iteration, and having a full range of those tests automated. The ultimate goal is to produce a deployable, limited-feature product at the end of each iteration.
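
As a small illustration of what "constant unit testing" looks like in practice, here is a hedged Python sketch using the standard library's unittest module; the withholding function under test is hypothetical, echoing the earlier payroll example.

    import unittest

    def withholding(gross_pay: float, federal_rate: float) -> float:
        """Hypothetical payroll calculation under test."""
        if gross_pay < 0:
            raise ValueError("gross pay cannot be negative")
        return round(gross_pay * federal_rate, 2)

    class WithholdingTests(unittest.TestCase):
        """Automated checks that run with every build, not at the end."""

        def test_typical_pay(self):
            self.assertEqual(withholding(5000.00, 0.22), 1100.00)

        def test_zero_pay(self):
            self.assertEqual(withholding(0.00, 0.22), 0.00)

        def test_negative_pay_rejected(self):
            with self.assertRaises(ValueError):
                withholding(-1.00, 0.22)

    if __name__ == "__main__":
        unittest.main()

Because the suite is automated, it costs almost nothing to run after every change, which is what makes the testing ruthless rather than merely thorough.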

Opportunistic Refactoring

The objective of opportunistic refactoring is to constantly and continuously improve the product design (make it more adaptable) to enable it to meet the twin goals of delivering customer value today and in the future.

A client was considering a multiyear, multimillion-dollar product redevelopment project for an existing software product (20+ years in its evolution) that contained several million lines of code. While the product had been instrumental to business success, it was also viewed as an anchor on future progress. Maintenance and enhancements were taking longer and longer to implement, and the costs of integration and regression testing had increased substantially. At the same time, the company's customers were increasingly asking for shorter response times. Replacement was seen as the solution to the problems of a creaky old system. My caution to them was that the new product would face similar problems within five years if they didn't build a systematic product-level refactoring discipline into their development process.

Refactoring involves updating a product's internal components (improving the design), without changing externally visible functionality, in order to make the product easier to enhance in the future. One unfortunate legacy of serial development is the idea that reducing the cost of change depends on getting correct architectural and design decisions in the beginning. Given the constancy of change and our inability to predict those changes with any accuracy, our designs should instead be based upon what we know today and a willingness to engage in redesign in the future. Since it is inevitable that product enhancements are sometimes "bolted on" without proper design considerations, a refactoring discipline encourages teams to revisit these decisions periodically and correct them.
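
A minimal sketch of the idea in Python (hypothetical code, not from the book): the externally visible result, the totals the functions report, is unchanged, but a duplicated business rule is pulled into one well-named function so the next enhancement has a clean seam to build on.

    # Before: a 5% membership discount "bolted on" in two places.
    def invoice_total_v1(items, member):
        total = sum(price * qty for price, qty in items)
        if member:
            total -= total * 0.05          # rule coded here...
        return total

    def shipping_estimate_v1(subtotal, member):
        if member:
            subtotal -= subtotal * 0.05    # ...and re-coded here
        return subtotal * 0.1

    # After: one named seam; external behavior is identical.
    def apply_member_discount(amount, member):
        """The single place the membership rule now lives."""
        return amount * 0.95 if member else amount

    def invoice_total(items, member):
        return apply_member_discount(sum(p * q for p, q in items), member)

    def shipping_estimate(subtotal, member):
        return apply_member_discount(subtotal, member) * 0.1

    # Behavior preserved: old and new versions agree.
    assert invoice_total_v1([(10.0, 2)], True) == invoice_total([(10.0, 2)], True)
    assert shipping_estimate_v1(20.0, True) == shipping_estimate(20.0, True)

Nothing a customer sees has changed, yet the next rule change now touches one function instead of two.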

Another client, who had implemented agile development, built a software product to coincide with another company's major platform announcement. Although they utilized refactoring and ruthless testing, the immutable delivery date caused them to "hurry" a little more than they would have liked. So upon the first release, rather than diving right back into the new feature enhancements marketing was clamoring for, they spent two months refactoring and getting their automated tests in shape. Only then, with a technically solid product in hand, did they resume developing features for the next release. I asked the product development manager if they would have taken those two months prior to implementing agile development, and his answer was "no." Living with the pain of another legacy product whose "technical debt" had gotten away from them, the team members were determined it would not happen on this new product.

System- or product-level refactoring can have enormous benefits over a product's life. Refactoring improves a product's design but does not add new functionality; it improves adaptability. Adding new functionality usually entails redesign, and refactoring should precede that redesign. The greater the rate of change in a product's features, the more quickly the product's architecture or design will degrade. A large redevelopment project, whose justification relies on reducing the cost of change, has that very justification undermined when development and support teams (if they are separate) fail to instill an ongoing refactoring discipline.

The old axiom was "Get it right the first time to keep development cost down." The new axiom, one that makes more sense as business and product changes occur with greater frequency, is "No matter how good it is the first time, it's going to change, so keep the cost of change low." Refactoring should not, however, be used as an excuse for sloppy design. The objective isn't refactoring; the objective is to maintain a viable, adaptable design. This requires good design practices at every step.

Decisions about refactoring are difficult, because on the surface they appear to be technical decisions. But in fact they are product management and executive decisions, and they therefore need to be examined, and funded, from that perspective. Without the support of product managers, digging one's way out of a degraded product design will be nearly impossible. On the upside, however, I've found product managers amenable to investing in refactoring once they understand the situation. With customers clamoring for enhancements and development cycles lengthening because of technical debt, their current situations are often untenable.

In order to refactor, two factors are paramount: testing and persistence. One barrier to redesign and refactoring is the risk of breaking something that is already working. We reduce that risk by thoroughly integrating testing into the development process (not tacking it on at the end) and by automating testing to the greatest extent possible. Automated testing reduces the fear of breaking something that already works.

Which brings up the second factor: persistence. For software, it means considering doing a little code refactoring every time a change is contemplated, always trying to leave the code slightly better than before. It means thinking about redesign during every development iteration and allocating some time to implement redesigns. It means planning some level of refactoring into every new product release. It means slowly, but surely, building up automated tests and integrating testing into the development process. For hardware, persistence means applying these practices to development as fully as possible, particularly for those parts of the development process that are accomplished by simulations.

Every investment requires an adequate return. Refactoring itself takes time and money. It can degenerate into endless technical arguments about "right" designs. But many product teams understand that the status quo isn't working any more. From product managers who lament the unresponsiveness of their products to customer feature requests, to developers who struggle to understand designs that no one wants to touch, to QA departments that are viewed as bottlenecks because their activities take months and months and months, to executives who watch their products slip in the marketplace, the incentives to rethink this topic are real. Persistence involves a constant (iteration after iteration, release after release) investment in refactoring, redesign, and testing to maintain a product that is as responsive to change as the marketplace demands.
