There is an interesting episode in one of the early Tintin comic books in which the eponymous hero travels through Russia in a car that eventually breaks down. After indulging in a frenzy of debugging, ripping out pieces of the engine and throwing them on the ground, Tintin finds that the actual problem is a flat tire. With the engine lying in pieces around him, Tintin randomly throws the parts back into the engine bay until it is completely full. Left with a full engine bay but several engine parts still lying on the ground, Tintin shrugs to himself and, of course, drives off without a problem.
As a freelance consultant who moves from corporation to corporation, I've lost count of the number of times that I've seen this scenario played out in the real world by corporate developers. For example, I was recently having a major problem when trying to use a component written by a colleague who sat next to me in the office. When I told him about the problem, I watched in amazement as he guessed the cause of the underlying bug, made a fix, recompiled the component and, without testing his fix, handed the component back to me for testing. Of course, it still didn't work. So he made another seemingly random fix and handed the component back to me, once again without doing any testing. Of course, the new fix didn't work either. This "Tintin" cycle of fix-and-failure repeated itself four or five times before my colleague eventually stopped guessing and started using a debugger to figure out what the problem actually was.
My colleague wasn't stupid; indeed, he was probably an above-average developer in a team that had several good coders. On the many occasions that I've seen this "random debugging" behavior, it rarely seems to be correlated with lack of intelligence or experience. So why is this behavior so common amongst developers?
My theory, after a couple of decades' worth of observations, is that many developers delude themselves about their own skill levels and their understanding of their own code. Most developers genuinely believe that they're well above average in debugging ability, and that their experience and brainpower allow them to make educated guesses during a debugging session.
The problem is that software development is mainly a solitary experience, which makes it very difficult to estimate exactly how good you are at understanding and debugging code, even your own code. Without the ability to compare your debugging abilities with those of other developers, it's therefore hard to judge your own skill level. It's also hard for a developer to criticize his own debugging abilities when he's devoted months and years to writing code for a living, and when his compensation is probably directly related to his ability to appear as a software expert.
One insight came from the world of professional chess, which I used to inhabit before I turned to writing software. Playing chess is somewhat similar to writing software. Both activities are mentally very intensive, both rely on a variety of mental skills, and both involve grandiose plans and difficult implementations. But there is one big difference between the two activities: in chess, you always know exactly how good you are.
Professional chess is a rather brutal sport because you're constantly matching yourself against other chess players in tournament games that are graded by computer. All of your chess knowledge, memory, judgment, and calculation are reduced to a single number called an Elo rating, and there's no shelter from this brutal truth. If you want to know how good or bad somebody is at chess, you simply have to look at his or her Elo rating.
Faced with this implacable reduction of chess skills to a single number, professional chess players tend to become very good at preventing mistakes. Either they learn not to deceive themselves or they simply fade away and don't make the grade. Developers don't often have to undergo this harsh judgment. Although a compiler will tell you when you've made a syntax mistake, your logic and semantic bugs usually aren't exposed until a later stage. Software developers are rarely beaten over the head with the results of their mistakes.
One way of simulating the feedback received by professional chess players is to make a bet with yourself every time that you fix a bug. If your bug fix fails, either immediately or at some time in the future, pay $10 (or some other nasty amount) into a "bug fund." Then see how much money the bug fund collects each week and whether the weekly amount paid into the fund goes up or down over time. What you do with the money after a year or so is up to you. Maybe give it to charity, or alternatively buy a good book on debugging.
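If you'd rather not shuffle real banknotes around, the same bookkeeping is easy to automate. The sketch below is a minimal, hypothetical implementation of the bug-fund idea: the class name `BugFund`, the $10 default penalty, and the weekly bucketing are all my assumptions, not anything prescribed above.

```python
from collections import defaultdict
from datetime import date

class BugFund:
    """Hypothetical tracker for the 'bug fund' bet: every failed
    bug fix pays a penalty into that week's bucket."""

    def __init__(self, penalty=10):
        self.penalty = penalty
        # (ISO year, ISO week) -> dollars owed that week
        self.weekly = defaultdict(int)

    def record_failed_fix(self, day: date):
        """A fix that didn't hold: pay the penalty into the week it failed."""
        year, week, _ = day.isocalendar()
        self.weekly[(year, week)] += self.penalty

    def weekly_totals(self):
        """Totals in chronological order, so you can watch the trend
        go up or down over time."""
        return [self.weekly[key] for key in sorted(self.weekly)]

fund = BugFund()
fund.record_failed_fix(date(2024, 1, 2))   # two failed fixes in week 1
fund.record_failed_fix(date(2024, 1, 4))
fund.record_failed_fix(date(2024, 1, 9))   # one failed fix in week 2
print(fund.weekly_totals())  # [20, 10]
```

A falling sequence of weekly totals is the feedback signal: it plays the role the Elo rating plays for a chess player, a number you can't argue with.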