11.3. Efficiency and the O-Notation

In the last section, we talked about different implementations being "good" for different operations. A good algorithm is economical in its use of two resources: time and space. Implementations of collections usually use space proportional to the size of the collection, but they can vary greatly in the time required for access and update, so that will be our primary concern. It's very hard to say precisely how quickly a program will execute, as that depends on many factors, including some that are outside the province of the programmer, such as the quality of the compiled code and the speed of the hardware. Even if we ignore these and limit ourselves to thinking only about how the execution time for an algorithm depends on its data, detailed analysis can be complex. A relatively simple example is provided in Donald Knuth's classic book Sorting and Searching (Addison-Wesley), where the worst-case execution time for a multiple list insertion sort program on Knuth's notional MIX machine is derived as

    3.5N² + 24.5N + 4M + 2

where N is the number of elements being sorted and M is the number of lists.

As a shorthand way of describing algorithm efficiency, this isn't very convenient. Clearly we need a broader brush for general use. The one most commonly used is the O-notation (pronounced "big-oh notation"). The O-notation is a way of describing the performance of an algorithm in an abstract way, without the detail required to predict the precise performance of a particular program running on a particular machine. Our main reason for using it is that it gives us a way of describing how the execution time for an algorithm depends on the size of its data set, provided the data set is large enough. For example, in the previous expression the first two terms are comparable for low values of N; in fact, for N < 8, the second term is larger. But as N grows, the first term increasingly dominates the expression and, by the time N reaches 100, the first term is 15 times as large as the second one. Using a very broad brush, we say that the worst case for this algorithm takes time O(N²). We don't care too much about the coefficient because that doesn't make any difference to the single most important question we want to ask about any algorithm: what happens to the running time when the data size increases, say, when it doubles? For the worst-case insertion sort, the answer is that the running time goes up fourfold. That makes O(N²) pretty bad, worse than any we will meet in practical use in this book.
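To see this doubling behavior concretely, here is a minimal sketch (our own illustration; the class name and input sizes are arbitrary, and JVM warm-up makes such crude timings only indicative) that times bubble sort, a classic O(N²) algorithm, at size N and 2N. On most machines the second run takes roughly four times as long as the first:

```java
import java.util.Random;

public class QuadraticTiming {
    // Bubble sort: a classic O(N²) algorithm.
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++)
            for (int j = 0; j < a.length - 1 - i; j++)
                if (a[j] > a[j + 1]) {
                    int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                }
    }

    // Time one sort of n random elements, in nanoseconds.
    static long timeSort(int n) {
        Random random = new Random(42);
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = random.nextInt();
        long start = System.nanoTime();
        bubbleSort(a);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        timeSort(10000);              // warm-up run for the JIT compiler
        long t1 = timeSort(10000);    // N
        long t2 = timeSort(20000);    // 2N
        // Expect a ratio near 4: doubling N quadruples the work.
        System.out.printf("ratio: %.1f%n", (double) t2 / t1);
    }
}
```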

Table 11.1 shows some commonly found running times, together with examples of algorithms to which they apply. Of course, many other running times are possible, including some that are much worse than those in the table. Many important problems can be solved only by algorithms that take O(2ᴺ); for these, when N doubles, the running time is squared! For all but the smallest data sets, such algorithms are infeasibly slow.

Table 11.1. Some common running times

| Time       | Common name | Effect on the running time if N is doubled | Example algorithms                         |
|------------|-------------|--------------------------------------------|--------------------------------------------|
| O(1)       | Constant    | Unchanged                                  | Insertion into a hash table (Section 13.1) |
| O(log N)   | Logarithmic | Increased by a constant amount             | Insertion into a tree (Section 13.2.1)     |
| O(N)       | Linear      | Doubled                                    | Linear search                              |
| O(N log N) |             | Doubled plus an amount proportional to N   | Merge sort (Section 17.1.1)                |
| O(N²)      | Quadratic   | Increased fourfold                         | Bubble sort                                |
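These running times map directly onto operations of the Java Collections Framework discussed later in the book. As a rough illustration (a sketch; the complexities noted in the comments are the ones typically documented for these classes, not measurements):

```java
import java.util.*;

public class RunningTimeExamples {
    public static void main(String[] args) {
        // O(1) on average: insertion into a hash table
        Map<String, Integer> hashTable = new HashMap<String, Integer>();
        hashTable.put("key", 1);

        // O(log N): insertion into a balanced (red-black) tree
        Map<String, Integer> tree = new TreeMap<String, Integer>();
        tree.put("key", 1);

        // O(N): linear search scans element by element
        List<Integer> list = new ArrayList<Integer>(Arrays.asList(3, 1, 2));
        int index = list.indexOf(2);

        // O(N log N): Collections.sort uses a merge sort variant
        Collections.sort(list);

        System.out.println(index + " " + list);
    }
}
```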


Sometimes we have to think about situations in which the cost of an operation varies with the state of the data structure. For example, adding an element to the end of an ArrayList can normally be done in constant time, unless the ArrayList has reached its capacity. In that case, a new and larger array must be allocated, and the contents of the old array transferred into it. The cost of this operation is linear in the number of elements in the array, but it happens relatively rarely. In situations like this, we calculate the amortized cost of the operation, that is, the total cost of performing it n times divided by n, taken to the limit as n becomes arbitrarily large. In the case of adding an element to an ArrayList, the total cost for N elements is O(N), so the amortized cost is O(1).
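Where that amortized bound comes from is easiest to see in a simplified growable array (a sketch of the technique; java.util.ArrayList's actual growth policy and initial capacity are implementation details). Each expensive O(N) copy buys room for at least N further constant-time additions, so the average cost per add stays constant:

```java
// A minimal growable array illustrating amortized O(1) add.
public class GrowableArray<E> {
    private Object[] elements = new Object[10];  // illustrative initial capacity
    private int size = 0;

    public void add(E e) {
        if (size == elements.length) {
            // The rare O(N) step: copy everything into a larger array.
            // Doubling guarantees this happens only after N cheap adds,
            // so the cost averages out to O(1) per add.
            Object[] bigger = new Object[elements.length * 2];
            System.arraycopy(elements, 0, bigger, 0, size);
            elements = bigger;
        }
        elements[size++] = e;                    // the common O(1) step
    }

    @SuppressWarnings("unchecked")
    public E get(int index) {
        return (E) elements[index];              // safe: only E's are stored
    }

    public int size() { return size; }
}
```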



