
### 7.5. Amortized Analysis

A fourth kind of analysis, somewhere between average and worst case, is amortized analysis. The subtle differences between these three are in the questions they answer.

Average-case analysis answers the question, "How much time does this algorithm take on a typical run?"

Worst-case analysis answers the question, "How much time does this algorithm take on the worst possible run?"


Amortized analysis answers the question, "If this algorithm is run several times, what is the average time per run, given the worst possible sequence of runs?"

Unlike average-case analysis, amortized analysis does not have to make any assumptions about what a "typical" run looks like. Often, the amortized running time is the same as the worst-case running time, because the worst possible sequence of runs is just the worst possible individual run, over and over again. For some algorithms, though, it is not possible for the worst run to occur many times in a row.

As an example, consider the add() method from our ArrayList class, which is reproduced in Figure 7-19. This method takes constant time in the best case, but linear time in the worst case.

##### Figure 7-19. The add() method from our ArrayList class takes constant time unless the ArrayList is full. In that worst case, it takes time linear in the size of the ArrayList.
```java
public void add(E target) { // 1 step, once
    if (isFull()) {         // 1 step, once
        stretch();          // n steps, 0 or 1 times
    }
    data[size] = target;    // 1 step, once
    size++;                 // 1 step, once
}
```
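The figure shows only `add()` itself. A minimal sketch of the surrounding class is below, assuming (as the discussion that follows does) that the elements live in an array field `data`, that the initial capacity is 1, and that `stretch()` doubles the capacity. The `size()` accessor is added here for illustration and is not part of the figure.

```java
import java.util.Arrays;

// A minimal sketch, not the book's full class: elements are kept in an
// array, and stretch() doubles the capacity, copying all size elements.
class ArrayList<E> {
    private Object[] data = new Object[1]; // initial capacity 1
    private int size = 0;

    private boolean isFull() {
        return size == data.length;
    }

    private void stretch() {
        // Allocating a larger array and copying every element is the
        // linear-time worst case discussed in the text.
        data = Arrays.copyOf(data, data.length * 2);
    }

    public void add(E target) {
        if (isFull()) {
            stretch();
        }
        data[size] = target;
        size++;
    }

    public int size() { // accessor added for illustration
        return size;
    }
}
```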

To find the amortized time, we imagine the worst possible sequence of runs. This occurs when we start with a new, empty ArrayList (capacity 1) and invoke add() n times. What is the total time for all of these invocations? The time for operations other than stretching adds up to something in Θ(n). How much time do we spend stretching the ArrayList?

We can find the pattern by writing down a few examples (Figure 7-20). If we start at capacity 1 and double every time the ArrayList fills up, we need to stretch the ArrayList every time its size exceeds a power of 2.

##### Figure 7-20. Amount of time spent copying over a sequence of invocations of add() . For simplicity, we assume that n is one more than a power of 2, a move we will justify in Chapter 8.
| Invocation | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | ... | n |
|------------|---|---|---|---|---|---|---|---|---|-----|---|
| Time       |   | 1 | 2 |   | 4 |   |   |   | 8 | ... | n − 1 |
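The pattern in Figure 7-20 can be checked by simulation. The following tally is invented for illustration (the class and method names are not from the text): it counts the elements copied by n calls to add() when the capacity starts at 1 and doubles whenever the list fills up.

```java
// Count the elements copied by n calls to add() when the capacity
// starts at 1 and doubles each time the ArrayList fills up.
// (Illustrative check of Figure 7-20, not code from the text.)
public class DoublingCost {
    public static long copiesForDoubling(int n) {
        long copies = 0;
        int capacity = 1;
        int size = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                copies += size;   // stretch() copies every element
                capacity *= 2;
            }
            size++;
        }
        return copies;
    }

    public static void main(String[] args) {
        // For n = 9, the copies are 1 + 2 + 4 + 8 = 15,
        // which is less than 2 * (9 - 1) = 16.
        System.out.println(copiesForDoubling(9)); // prints 15
    }
}
```

For any n that is one more than a power of 2, the tally stays below 2(n − 1), matching the bound derived next.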

The total time for this sequence of runs is therefore:

1 + 2 + 4 + 8 + ... + (n − 1)

It's not immediately clear what this adds up to, or even how many terms it has. As luck would have it, we don't need to know how many terms there are. If we simply assume there are t terms, numbered 0 through t − 1, we find that this sum is:

2^0 + 2^1 + 2^2 + ... + 2^(t − 1) = 2^t − 1 < 2 · 2^(t − 1) = 2(n − 1)

(The largest term, 2^(t − 1), equals n − 1, the number of elements copied by the final stretch.)

The worst possible sequence of add() invocations therefore requires that we copy fewer than 2(n − 1) elements. The amortized copying time per operation is:

2(n − 1) / n < 2

which is in O(1).

Since we need Ω(1) time per operation just to add the new element, we conclude that the amortized running time of add() is in Θ(1). Since the average-case time is always at least as good as the amortized time, it follows that add() takes constant time on average.

Now suppose we rewrite the stretch() method so that, instead of doubling the capacity of the ArrayList, it merely increases the capacity by 1. This seems like a reasonable idea: why allocate a bunch of extra memory that we might never use?

What effect does this change have on the amortized running time of add() ? Again, we consider when and how much we have to copy (Figure 7-21).

##### Figure 7-21. Amount of time spent copying over a sequence of invocations of add() , when stretch() is modified to increase the ArrayList's capacity by only 1.
| Invocation | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | ... | n |
|------------|---|---|---|---|---|---|---|---|---|-----|---|
| Time       |   | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | ... | n − 1 |
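The pattern in Figure 7-21 can also be checked by simulation. This tally is again invented for illustration, not taken from the text: when stretch() grows the capacity by only 1, every invocation after the first must copy all of the existing elements.

```java
// Count the elements copied by n calls to add() when stretch()
// increases the capacity by only 1: every add() after the first
// triggers a full copy, so the total is 1 + 2 + ... + (n - 1).
// (Illustrative check of Figure 7-21, not code from the text.)
public class IncrementCost {
    public static long copiesForIncrement(int n) {
        long copies = 0;
        int capacity = 1;
        int size = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                copies += size;   // copy all existing elements
                capacity += 1;
            }
            size++;
        }
        return copies;
    }

    public static void main(String[] args) {
        // For n = 9 the total is 1 + 2 + ... + 8 = 36 = 9 * 8 / 2.
        System.out.println(copiesForIncrement(9)); // prints 36
    }
}
```

The total copying is now n(n − 1)/2, so dividing by n gives roughly (n − 1)/2 copies per operation: linear, rather than constant, amortized time.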