The other phases of software development were amenable to the structured and object-oriented approaches. Why wasn’t the same true of debugging?
There are at least three reasons:
1. There is a big difference between constructive and cognitive activities.
2. Debugging is widely confused with testing.
3. The supporting disciplines needed for structured and object-oriented debugging developed much later.
Coding, designing, analyzing, and testing are all constructive activities. They each produce a tangible result. Coding produces source code. Designing produces design documents. Analysis produces a variety of documents, depending on the methodology used. Testing produces test cases and reports on the success or failure of tests.
In contrast, debugging is primarily a cognitive activity. The end result is knowledge of why there is a problem and what must be done to correct it. There will be a source change, but it may only involve adding or deleting a single character or word. The constructive output of debugging is often disproportionate to the effort expended.
Most books and papers written in the 1960s and 1970s confuse debugging with testing. When they talk about debugging, they’re really discussing testing.
One of the earliest books to properly define testing was Glenford Myers’s [My79] The Art of Software Testing:
Testing is the process of executing a program with the intent of finding errors.
In his book, Myers was concerned with debunking definitions such as the following: “Testing is the process of demonstrating that errors are not present.”
This definition, and other similar ones, all define testing negatively, in terms of the absence of errors. It’s impossible to prove the absence of anything in general. In the particular case of software, it’s even more pointless because large useful software systems all contain defects.
Myers defines debugging as follows:
Debugging is a two-part process; it begins with some indication of the existence of an error … and it is the activity of (1) determining the exact nature and location of the suspected error within the program and (2) fixing or repairing the error.
Following Myers’s lead, we further refine the definition of software testing to contrast it with software debugging.
Testing is the process of determining whether a given set of inputs causes an unacceptable behavior in a program.
Debugging is the process of determining why a given set of inputs causes an unacceptable behavior in a program and what must be changed to cause the behavior to be acceptable.
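The contrast between these two definitions can be made concrete with a small, hypothetical example. The function and values below are invented for illustration; the point is only that testing establishes *whether* the behavior is unacceptable, while debugging establishes *why*.

```python
# A hypothetical function with a defect: it is meant to return the
# average of a list, but divides by the wrong count.
def average(values):
    return sum(values) / (len(values) + 1)  # bug: off-by-one divisor

# Testing: determine WHETHER this input causes unacceptable behavior.
result = average([2, 4, 6])
print(result == 4.0)  # prints False -- the behavior is unacceptable

# Debugging: determine WHY, and what must change. Inspecting the
# intermediate value shows the divisor is 4 rather than 3, so the
# fix is to divide by len(values).
print(len([2, 4, 6]) + 1)  # prints 4 -- the erroneous divisor
```

The test alone tells us only that the output 3.0 is wrong; it is the debugging step that identifies the divisor as the part of the program that must change.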
The lack of a clear definition of the goals of the testing and debugging processes prevented the development of a discipline of structured or object-oriented debugging.
The cognitive psychology of human error, induction, and deduction did not become widely understood until the early 1990s. The following are some key texts in each of these areas:
Human Error, James Reason, 1990.
Induction: Processes of Inference, Learning, and Discovery, J. H. Holland et al., 1989.
Deduction, P. N. Johnson-Laird and R. M. J. Byrne, 1989.
Books that documented the mathematical problem-solving process did not become widely available until the mid-1980s. The key text in this area is
Mathematical Problem Solving, Alan H. Schoenfeld, 1985.
These disciplines provide important insights into the debugging process. It wasn’t possible to develop a general theory of debugging prior to the publication of research in these areas.
If you’re an experienced programmer, some of what follows will seem like common sense to you. If you’re a novice, it may seem like great words of wisdom. The odd thing about common sense is that it’s so uncommon. If you have professional programming experience, we hope to articulate some principles that you’re already following.
Most people who read this book will already have had at least some success in computer programming, if only in passing several computer science courses. Unfortunately, plenty of people can’t pass an introductory programming course. Inability to debug programs is a major reason that people don’t pass such courses.
The inability to learn to debug software can be, and often is, simply due to a lack of aptitude for this type of work. In some cases, however, the difficulty students have in learning to debug is due to the lack of adequate instruction in how to do this task and the lack of any kind of systematic approach to debugging. This book can make the difference for some students who have some aptitude but aren’t receiving the instruction they need to succeed.
Where did the title for this book come from? Consider some of the other methods that people use for debugging.
Debugging by editing: This is often the first approach to debugging that programming students try. If their first effort doesn’t work, they make some changes and try executing the program again.
The student will probably achieve some success with this approach. Introductory programming courses usually assign trivial programs, and the number of possible changes to a partially correct program is relatively small.
Debugging by interacting: This is usually the next approach that students learn. The programming student will inevitably be told by an instructor or a fellow student that he or she should try using a debugger. This is reasonable, since interactive debuggers that work with high-level languages are almost universally available.
This is an improvement over the previous phase in at least one respect. The programming student will use the debugger to observe some aspect of the program’s behavior before making a change.
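A minimal sketch of what "debugging by interacting" looks like in practice, using Python's standard `pdb` debugger. The function and inputs here are hypothetical; the point is that the programmer observes state at a breakpoint before editing anything.

```python
# A hypothetical function whose behavior we want to observe.
def scale(values, factor):
    return [v * factor for v in values]

# Debugging by interacting: instead of editing and re-running blindly,
# run the call under the debugger and inspect state first. In an
# interactive session, this looks like:
#
#     import pdb
#     pdb.run('scale([1, 2, 3], 0)')
#     (Pdb) step
#     (Pdb) args        # show the parameter values at this point
#     (Pdb) continue
#
# Having observed that `factor` is 0, the programmer knows what to
# change before touching the editor.
print(scale([1, 2, 3], 10))  # prints [10, 20, 30]
```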
Debugging by repeating: As the programming student uses an editor and a debugger, he or she will inevitably hit upon certain sequences of actions that prove useful. Without fully understanding the assumptions or limitations of a particular method, the programmer will apply the action set to every problem. Sometimes the actions will prove useful, but at other times they won’t help. Given a sufficient number of trials, this approach will winnow down a large number of actions to a “bag of tricks” of manageable size.
This method of developing debugging skills is nearly universal. It’s probably the reason for magazine articles and books with titles like “The Black Art of Debugging.” The programmer can’t explain scientifically the assumptions, limitations, or relationships between the techniques he or she has learned to use, so they seem similar to a set of sorcerer’s incantations.
There are problems with this approach. It takes longer than necessary to produce a highly developed set of debugging skills. The time used by the trial-and-error evaluation of methods is determined by the random order in which the programmer encounters bugs. In addition, programmers often waste time when faced with debugging problems for which nothing in their bag of tricks is helpful.
Debugging by Thinking as a methodology has the following characteristics that distinguish it from the other ways of debugging described above:
Explicit methodology: It’s much easier to teach a skill to people when you make the steps of the methodology explicit. The alternative is to expose them to lots of examples and hope that they will discern the steps by induction. Debugging by thinking means using techniques whose steps are explicitly described.
Multidisciplinary approach: It’s much easier to understand debugging as primarily a cognitive process, rather than as a constructive process, like the other phases of the software life cycle. Once we recognize this, we can leverage the insights that people who work in other problem-solving disciplines have developed. Debugging by thinking means actively seeking methods from intellectual disciplines that solve analogous problems.
Self-awareness: It’s much easier to find problems in software if you know the kinds of problems that are possible and the ways in which people make mistakes. Debugging by thinking means understanding our assumptions, our methodologies, and our tendencies to make mistakes.