Algorithm Speed

In Estimating, we talked about estimating things such as how long it takes to walk across town, or how long a project will take to finish. However, there is another kind of estimating that Pragmatic Programmers use almost daily: estimating the resources that algorithms use (time, processor, memory, and so on).

This kind of estimating is often crucial. Given a choice between two ways of doing something, which do you pick? You know how long your program runs with 1,000 records, but how will it scale to 1,000,000? What parts of the code need optimizing?

It turns out that these questions can often be answered using common sense, some analysis, and a way of writing approximations called the "big O" notation.

What Do We Mean by Estimating Algorithms?

Most nontrivial algorithms handle some kind of variable input: sorting n strings, inverting an m × n matrix, or decrypting a message with an n-bit key. Normally, the size of this input will affect the algorithm: the larger the input, the longer the running time or the more memory used.

If the relationship were always linear (so that the time increased in direct proportion to the value of n), this section wouldn't be important. However, most significant algorithms are not linear. The good news is that many are sublinear. A binary search, for example, doesn't need to look at every candidate when finding a match. The bad news is that other algorithms are considerably worse than linear; runtimes or memory requirements increase far faster than n. An algorithm that takes a minute to process ten items may take a lifetime to process 100.

We find that whenever we write anything containing loops or recursive calls, we subconsciously check the runtime and memory requirements. This is rarely a formal process, but rather a quick confirmation that what we're doing is sensible in the circumstances. However, we sometimes do find ourselves performing a more detailed analysis. That's when the O() notation comes in useful.

The O() Notation

The O() notation is a mathematical way of dealing with approximations. When we write that a particular sort routine sorts n records in O(n²) time, we are simply saying that the worst-case time taken will vary as the square of n. Double the number of records, and the time will increase roughly fourfold. Think of the O as meaning on the order of.

The O() notation puts an upper bound on the value of the thing we're measuring (time, memory, and so on). If we say a function takes O(n²) time, then we know that the upper bound of the time it takes will not grow faster than n². Sometimes we come up with fairly complex O() functions, but because the highest-order term will dominate the value as n increases, the convention is to remove all low-order terms, and not to bother showing any constant multiplying factors. O(n²/2 + 3n) is the same as O(n²/2), which is equivalent to O(n²). This is actually a weakness of the O() notation: one O(n²) algorithm may be 1,000 times faster than another O(n²) algorithm, but you won't know it from the notation.
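
To see why dropping the low-order terms is safe, here is a minimal C sketch (our illustration, not part of the original text) that evaluates n²/2 + 3n against the dominant term n²/2 as n grows; the ratio between the two heads toward 1, which is why both expressions end up written as O(n²).

    #include <stdio.h>

    int main(void) {
        /* Compare n*n/2 + 3*n with n*n/2: the ratio tends to 1 as n grows,
           which is why O(n^2/2 + 3n) is treated as just O(n^2). */
        for (double n = 10; n <= 1e6; n *= 10) {
            double full     = n * n / 2.0 + 3.0 * n;
            double dominant = n * n / 2.0;
            printf("n = %8.0f   ratio = %.4f\n", n, full / dominant);
        }
        return 0;
    }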

Figure 6.1 shows several common O() notations you'll come across, along with a graph comparing running times of algorithms in each category. Clearly, things quickly start getting out of hand once we get over O(n²).

Figure 6.1. Runtimes of various algorithms

For example, suppose you've got a routine that takes 1 s to process 100 records. How long will it take to process 1,000? If your code is O(1), then it will still take 1 s. If it's O(lg(n)), then you'll probably be waiting about 3 s. O(n) will show a linear increase to 10 s, while an O(n lg(n)) will take some 33 s. If you're unlucky enough to have an O(n²) routine, then sit back for 100 s while it does its stuff. And if you're using an exponential algorithm O(2ⁿ), you might want to make a cup of coffee: your routine should finish in about 10²⁶³ years. Let us know how the universe ends.

The O() notation doesn't apply just to time; you can use it to represent any other resources used by an algorithm. For example, it is often useful to be able to model memory consumption (see Exercise 35).

Common Sense Estimation

You can estimate the order of many basic algorithms using common sense.

  • Simple loops.   If a simple loop runs from 1 to n, then the algorithm is likely to be O(n): time increases linearly with n. Examples include exhaustive searches, finding the maximum value in an array, and generating checksums. (The first sketch following this list shows a simple loop and a nested loop side by side.)

  • Nested loops.   If you nest a loop inside another, then your algorithm becomes O(m × n), where m and n are the two loops' limits. This commonly occurs in simple sorting algorithms, such as bubble sort, where the outer loop scans each element in the array in turn, and the inner loop works out where to place that element in the sorted result. Such sorting algorithms tend to be O(n²).

  • Binary chop.   If your algorithm halves the set of things it considers each time around the loop, then it is likely to be logarithmic, O(lg(n)) (see Exercise 37). A binary search of a sorted list, traversing a binary tree, and finding the first set bit in a machine word can all be O(lg(n)). (The second sketch following this list shows a binary chop.)

  • Divide and conquer.   Algorithms that partition their input, work on the two halves independently, and then combine the result can be O(n lg(n)). The classic example is quicksort, which works by partitioning the data into two halves and recursively sorting each. Although technically O(n²), because its behavior degrades when it is fed sorted input, the average runtime of quicksort is O(n lg(n)).

  • Combinatoric.   Whenever algorithms start looking at the permutations of things, their running times may get out of hand. This is because permutations involve factorials (there are 5! = 5 × 4 × 3 × 2 × 1 = 120 permutations of the digits from 1 to 5). Time a combinatoric algorithm for five elements: it will take six times longer to run it for six, and 42 times longer for seven. Examples include algorithms for many of the acknowledged hard problems: the traveling salesman problem, optimally packing things into a container, partitioning a set of numbers so that each set has the same total, and so on. Often, heuristics are used to reduce the running times of these types of algorithms in particular problem domains.
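
As a concrete illustration of the first two categories, here is a minimal C sketch (our own, assuming a plain array of ints): find_max makes one pass over the data, so its time grows as O(n), while bubble_sort nests one scan inside another, so it grows as O(n²).

    #include <stddef.h>

    /* Simple loop: one pass over n elements => O(n). Assumes n >= 1. */
    int find_max(const int *a, size_t n) {
        int max = a[0];
        for (size_t i = 1; i < n; i++)
            if (a[i] > max)
                max = a[i];
        return max;
    }

    /* Nested loops: an outer scan with an inner scan => O(n * n). */
    void bubble_sort(int *a, size_t n) {
        for (size_t i = 0; i + 1 < n; i++)
            for (size_t j = 0; j + 1 < n - i; j++)
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];
                    a[j]     = a[j + 1];
                    a[j + 1] = tmp;
                }
    }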
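
And a sketch of the binary chop itself (again our own illustration) on a sorted array of ints: each pass halves the range still under consideration, so the number of passes grows as O(lg(n)).

    #include <stddef.h>

    /* Binary chop: halves the search range each iteration => O(lg(n)).
       Returns the index of key in the sorted array a, or -1 if absent. */
    long binary_chop(const int *a, size_t n, int key) {
        size_t lo = 0, hi = n;          /* search the half-open range [lo, hi) */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (a[mid] == key)
                return (long)mid;
            else if (a[mid] < key)
                lo = mid + 1;
            else
                hi = mid;
        }
        return -1;
    }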

Algorithm Speed in Practice

It's unlikely that you'll spend much time during your career writing sort routines. The ones in the libraries available to you will probably outperform anything you may write without substantial effort. However, the basic kinds of algorithms we've described earlier pop up time and time again. Whenever you find yourself writing a simple loop, you know that you have an O(n) algorithm. If that loop contains an inner loop, then you're looking at O(m × n). You should be asking yourself how large these values can get. If the numbers are bounded, then you'll know how long the code will take to run. If the numbers depend on external factors (such as the number of records in an overnight batch run, or the number of names in a list of people), then you might want to stop and consider the effect that large values may have on your running time or memory consumption.

Tip 45

Estimate the Order of Your Algorithms



There are some approaches you can take to address potential problems. If you have an algorithm that is O(n²), try to find a divide and conquer approach that will take you down to O(n lg(n)).
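
For instance, where a nested-loop sort such as the bubble sort sketched earlier is O(n²), a merge sort divides the input, sorts each half recursively, and merges the two sorted halves in linear time, giving O(n lg(n)) overall. A minimal C sketch (our illustration; it allocates a scratch buffer and keeps error handling to a bare minimum):

    #include <stdlib.h>
    #include <string.h>

    /* Merge the sorted halves a[0..mid) and a[mid..n) using scratch space tmp. */
    static void merge(int *a, size_t mid, size_t n, int *tmp) {
        size_t i = 0, j = mid, k = 0;
        while (i < mid && j < n)
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < n)   tmp[k++] = a[j++];
        memcpy(a, tmp, n * sizeof *a);
    }

    /* Divide and conquer: sort each half, then merge => O(n lg(n)). */
    static void merge_sort_rec(int *a, size_t n, int *tmp) {
        if (n < 2)
            return;
        size_t mid = n / 2;
        merge_sort_rec(a, mid, tmp);
        merge_sort_rec(a + mid, n - mid, tmp);
        merge(a, mid, n, tmp);
    }

    void merge_sort(int *a, size_t n) {
        int *tmp = malloc(n * sizeof *tmp);
        if (!tmp)
            return;                     /* out of memory: give up quietly */
        merge_sort_rec(a, n, tmp);
        free(tmp);
    }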

If you're not sure how long your code will take, or how much memory it will use, try running it, varying the input record count or whatever is likely to impact the runtime. Then plot the results. You should soon get a good idea of the shape of the curve. Is it curving upward, a straight line, or flattening off as the input size increases? Three or four points should give you an idea.
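
One way to run that experiment: a small C harness (a sketch only; it times the library qsort here, but any routine you care about can be dropped in) that sorts random data at several input sizes and prints one size/time pair per line, ready to plot.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Comparator for qsort: ascending ints. */
    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void) {
        for (size_t n = 1000; n <= 1000000; n *= 10) {
            int *a = malloc(n * sizeof *a);
            if (!a)
                return 1;
            for (size_t i = 0; i < n; i++)
                a[i] = rand();              /* random input keys */

            clock_t start = clock();
            qsort(a, n, sizeof *a, cmp_int);
            double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

            printf("%zu %f\n", n, secs);    /* one point per line: plot these */
            free(a);
        }
        return 0;
    }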

Also consider just what you're doing in the code itself. A simple O(n²) loop may well perform better than a complex O(n lg(n)) one for smaller values of n, particularly if the O(n lg(n)) algorithm has an expensive inner loop.

In the middle of all this theory, don't forget that there are practical considerations as well. Runtime may look like it increases linearly for small input sets. But feed the code millions of records and suddenly the time degrades as the system starts to thrash. If you test a sort routine with random input keys, you may be surprised the first time it encounters ordered input. Pragmatic Programmers try to cover both the theoretical and practical bases. After all this estimating, the only timing that counts is the speed of your code, running in the production environment, with real data. [2] This leads to our next tip.

[2] In fact, while testing the sort algorithms used as an exercise for this section on a 64MB Pentium, the authors ran out of real memory while running the radix sort with more than seven million numbers. The sort started using swap space, and times degraded dramatically.

Tip 46

Test Your Estimates



If it's tricky getting accurate timings, use code profilers to count the number of times the different steps in your algorithm get executed, and plot these figures against the size of the input.
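
A hand-rolled version of the same idea (a sketch only; a real profiler would gather these counts for you): put a counter on the step you care about and print it against the input size. Here the comparison in a bubble sort's inner loop is counted, and the figures should grow roughly as the square of n.

    #include <stdio.h>
    #include <stdlib.h>

    static unsigned long comparisons;       /* incremented at the step being measured */

    static void bubble_sort_counted(int *a, size_t n) {
        for (size_t i = 0; i + 1 < n; i++)
            for (size_t j = 0; j + 1 < n - i; j++) {
                comparisons++;
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];
                    a[j]     = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
    }

    int main(void) {
        for (size_t n = 100; n <= 3200; n *= 2) {
            int *a = malloc(n * sizeof *a);
            if (!a)
                return 1;
            for (size_t i = 0; i < n; i++)
                a[i] = rand();

            comparisons = 0;
            bubble_sort_counted(a, n);
            printf("%zu %lu\n", n, comparisons);   /* count vs. input size, for plotting */
            free(a);
        }
        return 0;
    }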

Best Isn't Always Best

You also need to be pragmatic about choosing appropriate algorithms: the fastest one is not always the best for the job. Given a small input set, a straightforward insertion sort will perform just as well as a quicksort, and will take you less time to write and debug. You also need to be careful if the algorithm you choose has a high setup cost. For small input sets, this setup may dwarf the running time and make the algorithm inappropriate.

Also be wary of premature optimization. It's always a good idea to make sure an algorithm really is a bottleneck before investing your precious time trying to improve it.

Related sections include:
  • Estimating

Challenges
  • Every developer should have a feel for how algorithms are designed and analyzed. Robert Sedgewick has written a series of accessible books on the subject ([Sed83, SF96, Sed92] and others). We recommend adding one of his books to your collection, and making a point of reading it.

  • For those who like more detail than Sedgewick provides, read Donald Knuth's definitive Art of Computer Programming books, which analyze a wide range of algorithms [Knu97a, Knu97b, Knu98].

  • In Exercise 34, we look at sorting arrays of long integers. What is the impact if the keys are more complex, and the overhead of key comparison is high? Does the key structure affect the efficiency of the sort algorithms, or is the fastest sort always fastest?

Exercises

34.

We have coded a set of simple sort routines, which can be downloaded from our Web site (http://www.pragmaticprogrammer.com). Run them on various machines available to you. Do your figures follow the expected curves? What can you deduce about the relative speeds of your machines? What are the effects of various compiler optimization settings? Is the radix sort indeed linear?

35.

The routine below prints out the contents of a binary tree. Assuming the tree is balanced, roughly how much stack space will the routine use while printing a tree of 1,000,000 elements? (Assume that subroutine calls impose no significant stack overhead.)

    void printTree(const Node *node) {
        char buffer[1000];

        if (node) {
            printTree(node->left);
            getNodeAsString(node, buffer);
            puts(buffer);
            printTree(node->right);
        }
    }
36.

Can you see any way to reduce the stack requirements of the routine in Exercise 35 (apart from reducing the size of the buffer)?

37.

Earlier in this section, we claimed that a binary chop is O(lg(n)). Can you prove this?


