Symptoms

Software development is a modern industry filled with complex problems that require complex solutions. Given that complex problems and their corresponding complex solutions are a fact of life, when should we consider a solution too complex for its problem? Let us look at some of the outward signs that can indicate an overly complex solution.

Poor Readability

One of the results of overly complex code is code that is difficult to read. This is only an initial indicator, and the reason for the poor readability must be investigated more fully, because poor readability can also be caused by other illnesses or by simply poor coding. On the other side of the coin, some readable code is still too complex, though this is less common. If you write overly complex code, you can expect that most of the time it will also be less readable.

 CD-ROM  For a good example of how increased complexity decreases the readability of the code, we return to two algorithms first presented in Chapter 1, “Premature Optimization.” They can also be found on the companion CD-ROM in Source/Examples/Chapter1/readability.cpp. First, the simple selection sort algorithm:

    void selection_sort(int *io_array, unsigned int i_size)
    {
        for(unsigned int l_indexToSwap = 0;
            l_indexToSwap < i_size; ++l_indexToSwap) {
            unsigned int l_indexOfMinimumValue = l_indexToSwap;
            for(unsigned int l_indexToTest = l_indexToSwap + 1;
                l_indexToTest < i_size; ++l_indexToTest) {
                if(io_array[l_indexToTest] <
                   io_array[l_indexOfMinimumValue]) {
                    l_indexOfMinimumValue = l_indexToTest;
                }
            }
            int l_minimumValue = io_array[l_indexOfMinimumValue];
            io_array[l_indexOfMinimumValue] = io_array[l_indexToSwap];
            io_array[l_indexToSwap] = l_minimumValue;
        }
    }

Now compare this to the heap sort algorithm, which is faster but also more complex:

    void sift_down(int *io_array, unsigned int i_size,
                   int i_value, unsigned int i_index1,
                   unsigned int i_index2)
    {
        while(i_index2 < i_size) {
            if((i_index2 < i_size - 1) &&
               (io_array[i_index2] < io_array[i_index2 + 1])) {
                ++i_index2;
            }
            if(i_value < io_array[i_index2]) {
                io_array[i_index1] = io_array[i_index2];
                i_index1 = i_index2;
                // Advance to the left child of the new position,
                // which is 2i + 1 with zero-based indexing.
                i_index2 = (2 * i_index2) + 1;
            } else {
                break;
            }
        }
        io_array[i_index1] = i_value;
    }

    void heap_sort(int *io_array, unsigned int i_size)
    {
        if(i_size < 2) {
            return;
        }
        for(unsigned int l_hire = i_size / 2;
            l_hire > 0; --l_hire) {
            sift_down(io_array, i_size,
                      io_array[l_hire - 1],
                      l_hire - 1,
                      (2 * l_hire) - 1);
        }
        for(unsigned int l_retire = i_size - 1;
            l_retire > 1; --l_retire) {
            int l_value = io_array[l_retire];
            io_array[l_retire] = io_array[0];
            sift_down(io_array, l_retire,
                      l_value, 0, 1);
        }
        int l_swap = io_array[1];
        io_array[1] = io_array[0];
        io_array[0] = l_swap;
    }

Not only is the second algorithm longer, but its flow is more difficult to follow. Even though heap sort is only a step or two above selection sort in complexity, it is already harder to read, and algorithms more complex still become correspondingly less readable.

To decide whether code that is difficult to read is a result of Complexification, the purpose of the code must first be determined. This illustrates an important reason both for writing minimal but complete code and for properly documenting code that must be more complex. Without these indicators, programmers unfamiliar with the code are forced to decipher its purpose by hand, which is a particular waste of time if the code requires no changes.
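When complex code genuinely is required, a short comment recording the requirement it satisfies spares the next programmer this decipherment. A minimal sketch, with purely illustrative wording:

    // Heap sort is used here instead of a simpler O(n^2) sort
    // because this collection is re-sorted every frame and
    // profiling showed the simpler sort dominating frame time.
    void heap_sort(int *io_array, unsigned int i_size);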

Part of the process of understanding the code’s purpose is evaluating the contexts in which the code is used. This knowledge will often reveal whether the complexity of the code comes from imposing unnecessary requirements. Unnecessary requirements are a primary indicator of overly complex code, and once you realize that they can be relaxed, it usually becomes obvious that the code can be simplified as well.

Suppose you discover that the heap sort algorithm is being used for a collection of items that only needs to be sorted when the application is first started. The primary motivation for the extra complexity is the increase in performance, but the extra speed is unlikely to matter for a one-time sort at startup. The algorithm used is therefore more complex than necessary.
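In a situation like that, a simpler approach is hard to beat. A minimal sketch, assuming a hypothetical item collection and startup hook, where the standard library’s std::sort replaces the hand-written heap sort:

    #include <algorithm>
    #include <vector>

    // Hypothetical item type sorted once when the application starts.
    struct Item
    {
        int m_sortKey;
    };

    bool compare_items(const Item &i_left, const Item &i_right)
    {
        return i_left.m_sortKey < i_right.m_sortKey;
    }

    // Hypothetical startup hook: a one-time sort here is rarely
    // performance critical, so the standard library sort is both
    // simpler and sufficient.
    void on_application_start(std::vector<Item> &io_items)
    {
        std::sort(io_items.begin(), io_items.end(), compare_items);
    }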

Once a complete understanding of the code and its requirements is reached, the other symptoms can be investigated to see if the requirements are overly strict. While the following symptoms are common, there are other possible reasons for requirements to be overly strict and code to be overly complex; Premature Optimization, for example, is a major cause of writing overly complex code. Symptoms also overlap: an optimization made prematurely may have so little performance impact that the result is invisible to the end user. If the complexity is caused by one of the more obvious symptoms, the diagnosis of Complexification is easy. If it at first appears to fall outside these symptoms, however, it is a good idea to take a closer look and make sure there are valid reasons for the code to be complex.

Invisible Functionality

The end goal of any software application is to provide the user with some specific functionality, whether that is balancing a checkbook or entertainment from a challenging game. To sell more products, the application must satisfy the user, and this satisfaction comes from the perceived experience of using the product and from completing whatever goals the user wants to accomplish.

The use of the word perceived is important, because it indicates the importance of the result over the internal implementation used to achieve it. Any internal functionality that the user does not perceive in any substantial way is a poor use of development resources. The aspects of an application most easily overlooked by users are minor performance enhancements and certain elements of the visual presentation. In particular, the effort wasted and problems caused by imperceptible performance improvements are substantial enough to merit their own major illness; see Chapter 1 for a more in-depth discussion of this problem.

start sidebar
Invisible Asteroids

Imperceptible elements of visual presentation are most often seen in games and other visual simulations where accuracy is not the end goal. It is all too easy to add computations for visual elements that the end user cannot perceive. To provide a better understanding of what this means, consider a project in which a large number of asteroids were being simulated and displayed. When the time came for optimization at the end of the project, it was discovered that a large amount of time was being spent in the function handling the physical simulation of each asteroid chunk. As it turned out, each chunk was being individually simulated to take into account such details as momentum and energy conservation. Watching the asteroid belt during execution, however, it was obvious that the movement of the individual chunks was barely noticeable. Both the work and the loss of performance could have been avoided with a simpler solution such as a series of random animations.

At that point in the project, it was too risky to implement a completely new solution. Not all was lost, however, as it was possible to remove some of the processing required by the complex solution. Because the asteroid chunks had no effect other than visual presentation, their movement only needed to be simulated while they were being displayed. While this recovered some of the performance cost, there was no way to recover the lost development time.
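The salvaged optimization amounted to gating the expensive update on visibility. A minimal sketch of the idea, with hypothetical names standing in for the project’s real code:

    // Hypothetical chunk data; the visibility flag stands in for
    // a real camera frustum test.
    struct AsteroidChunk
    {
        bool m_onScreen;
        float m_position;
        float m_velocity;
    };

    void update_chunk(AsteroidChunk &io_chunk, float i_deltaTime)
    {
        // The chunks affect nothing but the visual presentation,
        // so off-screen chunks skip the expensive simulation
        // entirely.
        if(!io_chunk.m_onScreen) {
            return;
        }
        // Stand-in for the full physical simulation step.
        io_chunk.m_position += io_chunk.m_velocity * i_deltaTime;
    }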

end sidebar

Return of the Nifty Algorithm

When we talked about Premature Optimization, we mentioned that one of the causes was the unnecessary use of algorithms that were just recently learned. There is always a temptation to use a recently learned algorithm, as it is fresh in our minds. This is particularly true when we read about the algorithm as written by the algorithm’s inventor, who will espouse the superior properties of their algorithm. Although we first mentioned it when talking about Premature Optimization, this behavior extends beyond that scope and is often a symptom of Complexification.

There are several reasons to be wary of this temptation. Whatever the reason for trying the new algorithm, be it Premature Optimization or simple curiosity, the result is less readable and less maintainable code. This tradeoff would be acceptable if the algorithm fulfilled requirements that no simpler algorithm could, but this is often not the case. The algorithm’s merits must be weighed against its complexity to determine whether it is overly complex for the situation at hand.

start sidebar
All Coded Up with Nowhere to Go

Choosing an overly complex algorithm usually causes problems such as poor performance or maintenance difficulties, but in some cases the results are even more disastrous. Just such a disaster nearly brought one project to its knees. The programmer responsible for the game’s artificial intelligence (AI) chose a fuzzy logic system for controlling the non-player units. This complex AI system was supposed to adapt to the player and offer an appropriate challenge level for all players.

Instead, with only a few months left before the ship date, not a single non-player unit was moving. Plenty of code had been written for the complex AI module, but none of it was doing anything useful, and there was no hope that the fuzzy logic system would be finished in time. Falling back to a much simpler system, however, allowed the product to ship on time and still provided a formidable opponent for the player.

end sidebar

This leads us to another point that must be considered carefully: does the algorithm provide the benefits its inventor implies? It is common for an algorithm to be made to sound much more useful than it is, particularly when the inventor spent a considerable amount of time researching it. Read the description of the algorithm carefully and separate the verifiable claims from conjecture and vague praise. Once the benefits have been narrowed down and compared against those of similar algorithms for the given situation, the algorithm can be adopted if it provides a clear advantage. Even then, it is important to test the result of using the algorithm to verify that the expected benefits are in fact achieved.
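Such a test does not need to be elaborate. Here is a rough timing sketch for the two sorts shown earlier, using an arbitrary array size that should be replaced with data representative of the actual situation:

    #include <cstdio>
    #include <cstdlib>
    #include <ctime>

    // Declarations for the two sort routines shown earlier.
    void selection_sort(int *io_array, unsigned int i_size);
    void heap_sort(int *io_array, unsigned int i_size);

    int main()
    {
        const unsigned int l_size = 10000;
        int *l_first = new int[l_size];
        int *l_second = new int[l_size];
        for(unsigned int l_index = 0; l_index < l_size; ++l_index) {
            l_first[l_index] = l_second[l_index] = std::rand();
        }

        // Time each sort on identical data; clock() resolution is
        // coarse, but it is enough to show whether the difference
        // matters at all at this data size.
        std::clock_t l_start = std::clock();
        selection_sort(l_first, l_size);
        std::clock_t l_middle = std::clock();
        heap_sort(l_second, l_size);
        std::clock_t l_end = std::clock();

        std::printf("selection sort: %ld ticks, heap sort: %ld ticks\n",
                    (long)(l_middle - l_start), (long)(l_end - l_middle));

        delete [] l_first;
        delete [] l_second;
        return 0;
    }

If the measured difference would be imperceptible to the user, the simpler algorithm wins by default.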

Emergent Bugs

Excessive complexity can also arise in the interactions between code units. The most common causes are Premature Optimization and the degradation of code under changing requirements. Whatever the cause, this complexity can give rise to bugs that are difficult to reproduce and therefore difficult to find.

The reason for the difficulty in reproducing and identifying these problems lies with the concept of emergent behavior. Even a set of extremely simple systems can give rise to a complex set of interactions, as shown by Reynolds’ boids [Reynolds87]. Reynolds simulated the complex behavior of a flock of birds, named boids in the simulation, by using a small set of simple rules to guide the behavior of each individual boid. When a group of boids was placed together, a complex flight pattern emerged similar to those seen in real flocks of birds. Although creating some form of emergent behavior is not very difficult, achieving desired behavior from an emergent system is much more difficult.
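The rules themselves fit in a few lines each. The following compressed sketch of a per-boid steering update uses hypothetical types and arbitrary example weights; a real implementation would also limit the rules to nearby neighbors:

    #include <cstddef>
    #include <vector>

    struct Vector2
    {
        float x;
        float y;
    };

    struct Boid
    {
        Vector2 m_position;
        Vector2 m_velocity;
    };

    // One steering update for a single boid from the three classic
    // rules: separation, alignment, and cohesion.
    Vector2 steer(std::size_t i_self, const std::vector<Boid> &i_flock)
    {
        const Boid &l_self = i_flock[i_self];
        Vector2 l_separation = { 0.0f, 0.0f };
        Vector2 l_heading = { 0.0f, 0.0f };
        Vector2 l_center = { 0.0f, 0.0f };
        float l_others = 0.0f;
        for(std::size_t l_index = 0; l_index < i_flock.size(); ++l_index) {
            if(l_index == i_self) {
                continue;
            }
            const Boid &l_other = i_flock[l_index];
            // Separation: steer away from each flockmate.
            l_separation.x += l_self.m_position.x - l_other.m_position.x;
            l_separation.y += l_self.m_position.y - l_other.m_position.y;
            // Alignment: accumulate flockmate headings.
            l_heading.x += l_other.m_velocity.x;
            l_heading.y += l_other.m_velocity.y;
            // Cohesion: accumulate flockmate positions.
            l_center.x += l_other.m_position.x;
            l_center.y += l_other.m_position.y;
            l_others += 1.0f;
        }
        Vector2 l_steering = { 0.0f, 0.0f };
        if(l_others > 0.0f) {
            // Arbitrary example weights for combining the rules.
            l_steering.x = 0.05f * l_separation.x
                + 0.3f * (l_heading.x / l_others - l_self.m_velocity.x)
                + 0.2f * (l_center.x / l_others - l_self.m_position.x);
            l_steering.y = 0.05f * l_separation.y
                + 0.3f * (l_heading.y / l_others - l_self.m_velocity.y)
                + 0.2f * (l_center.y / l_others - l_self.m_position.y);
        }
        return l_steering;
    }

Flock-level patterns emerge from nothing more than these per-boid sums; tuning the emergent pattern, however, is far harder than producing one.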

Thus, the more complex the interactions between the various code units become, the more likely it is that unwanted behavior will emerge in the form of bugs. Therefore, if a bug appears that is difficult to reproduce, one area to investigate is the complexity of the interactions around the location where the bug occurred. This can lead to a reproducible case or, at the very least, show that the interactions need refactoring regardless of whether they caused this particular bug.


