This chapter is dedicated to Andrew Marasco (1916-2004), who just missed seeing this book in print.

There's nothing so tragic as an elegant theory assaulted by a brutal gang of facts. My friend Bob Marshall recently sent me an article from Invention and Technology entitled "How America Chose Not to Beat Sputnik into Space." In this article[10] T. A. Heppenheimer asserts, "We could have launched an Earth-orbiting satellite more than a year before the Soviets, but we intentionally held back. And by handing them a propaganda triumph, we ensured their ultimate defeat in the Cold War." I'm just wondering if part of the defeat had to do with mobilizing a generation of scientists and engineers, and if this wasn't one of those "unintended consequences" which this time worked in the right direction.
What does all this background in engineering and computation have to do with software development and its management? Well, it has to do with an approach to problem-solving. We were taught to make estimates, then calculate, then compare. What happens when the results of your calculation turn out to be very far off your original estimate? There are three scenarios you need to examine, in the following order:

1. You made an error in the calculation itself.
2. Your model is wrong or doesn't apply.
3. Something is going on that you genuinely don't understand, and your fundamental assumptions need revisiting.
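The estimate-then-calculate-then-compare habit can be sketched in code. This is a minimal illustration only; the `sanity_check` helper and its figures are invented, not from the text. A back-of-envelope estimate is compared against the computed result, and an order-of-magnitude disagreement is the signal to start examining the three scenarios in order.

```python
def sanity_check(estimate: float, computed: float,
                 tolerance_factor: float = 10.0) -> bool:
    """Compare a back-of-envelope estimate to a computed result.

    Returns True when the two agree to within an order of magnitude
    (the tolerance_factor). A False result doesn't say *what* is wrong;
    it says it is time to recheck the calculation, then the model,
    then the underlying assumptions. (Hypothetical helper, for
    illustration only.)
    """
    if estimate == 0 or computed == 0:
        return estimate == computed
    ratio = abs(computed / estimate)
    return 1.0 / tolerance_factor <= ratio <= tolerance_factor

# Example: summing 1,000 values averaging ~50, we estimate ~50,000.
assert sanity_check(50_000, 49_873.2)   # agrees with estimate: proceed
assert not sanity_check(50_000, 4.9e9)  # wildly off: investigate
```

The point is not the arithmetic but the discipline: the comparison step is cheap, and it is the trigger for the ordered investigation.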
Now this approach turns out to be quite general, and as such there are applications to software. For example, I have frequently found that debugging a bizarre result in a program in this order leads to enlightenment. Before questioning fundamental assumptions, check to see that you did the math right: in the case of a program, did we use the right data at the right time? Then check your model, in this case the algorithm. If things still don't make sense, you now need to examine whether something is going on that you really didn't understand. This may lead to discovering that the basic programming approach was faulty to begin with.

More fundamentally, our computational grounding led us to question almost everything. As pointed out previously, we were the original "defensive programmers." Later on, as managers we tended to be quite hard on ourselves, and on our teams, in terms of using "solid engineering." That meant putting a lot of emphasis on basic architecture, on useful things like design reviews, and on testing our systems early and often.

We were sticklers for detail. We remembered that just because "things almost balance" doesn't mean the answer is right. Sometimes in draft financial statements you find that totals are off by only a little bit, and you are tempted to blame "round-off error" as the culprit. Too often you find, on closer examination, that two rather large mistakes in opposite directions have coincidentally almost cancelled each other out. When the gods of computation smile on us, they leave us this clue. Don't be too lazy to follow up, remembering at all times Richard Hamming's Golden Rule: "The purpose of computing is insight, not numbers."[11]
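The "things almost balance" trap can be made concrete with a toy example; all of the figures below are invented. Two large entry errors in opposite directions leave the grand total off by only ten cents, which looks like round-off but is nothing of the sort.

```python
# Hypothetical ledger figures, for illustration only.
ledger = [1200.00, 350.75, 980.50, 412.25]
correct_total = sum(ledger)                    # 2943.50

# Two large mistakes in opposite directions:
entered = [2100.00, 350.75, 980.50, 412.25]    # +900.00 typo on line 1
entered[2] -= 899.90                           # -899.90 slip on line 3
reported_total = sum(entered)

# The totals are off by only about ten cents, so "round-off error"
# is the tempting, and wrong, explanation.
discrepancy = reported_total - correct_total
print(f"discrepancy: {discrepancy:.2f}")
```

Following up on the small residual, rather than waving it away, is what exposes the two offsetting nine-hundred-dollar mistakes.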
Lastly, our engineering background taught us about complexity and scaling. I will always remember my senior engineering thesis project. A team of us had to design an entire petrochemical refinery from scratch. We had a few weeks to do it, working day and night. Here's what we learned: It is hard to coordinate multiple engineers simultaneously working on different parts of the project, because one person's output from his stage of the plant is another fellow's input. Everything changes at once! To get around this problem, we had to cleverly design interfaces that we could stabilize. Many years later, this notion proved invaluable in the construction of large, complex software systems. Other concepts of "building big things" were similarly borrowed and transplanted into software development soil.
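The "stabilize the interfaces" idea maps directly onto code. Below is a minimal sketch; the stage names and methods are invented for illustration, assuming nothing beyond the refinery metaphor. Each team programs against a frozen contract, so one stage's internals can change freely without invalidating the downstream team's work.

```python
from typing import Protocol


class StageOutput(Protocol):
    """The stabilized interface: agreed on early, then frozen."""

    def flow_rate(self) -> float: ...                 # units agreed up front
    def composition(self) -> dict[str, float]: ...    # mole fractions


class DistillationStage:
    """One team's implementation; its internals may change at will,
    as long as it honors the frozen contract above."""

    def flow_rate(self) -> float:
        return 120.5

    def composition(self) -> dict[str, float]:
        return {"ethylene": 0.62, "propane": 0.38}


def downstream_stage(feed: StageOutput) -> float:
    """The other team depends only on the contract, not the internals."""
    return feed.flow_rate() * feed.composition()["ethylene"]


print(downstream_stage(DistillationStage()))
```

The design choice is the same one the refinery forced on us: the contract, not the component, is the unit of coordination, so "everything changes at once" stops being a coordination problem.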