5.1 Recursive Algorithms

   

A recursive algorithm is one that solves a problem by solving one or more smaller instances of the same problem. To implement recursive algorithms in Java, we use recursive methods: a recursive method is one that calls itself. Recursive methods in Java correspond to recursive definitions of mathematical functions. We begin our study of recursion by examining programs that directly evaluate mathematical functions. The basic mechanisms extend to provide a general-purpose programming paradigm, as we shall see.

Program 5.1 Factorial function (recursive implementation)

This recursive method computes the function N!, using the standard recursive definition. It returns the correct value when called with N nonnegative and sufficiently small that N! can be represented as an int.

 static int factorial(int N)
   {
     if (N == 0) return 1;
     return N*factorial(N-1);
   }

Recurrence relations (see Section 2.5) are recursively defined functions. A recurrence relation defines a function whose domain is the nonnegative integers either by some initial values or (recursively) in terms of its own values on smaller integers. Perhaps the most familiar such function is the factorial function, which is defined by the recurrence relation

   N! = N × (N − 1)!,  for N ≥ 1,  with 0! = 1.


This definition corresponds directly to the recursive Java method in Program 5.1.

Program 5.1 is equivalent to a simple loop. For example, the following for loop performs the same computation:

 int t = 1;
 for (int i = 1; i <= N; i++) t *= i;

As we shall see, it is always possible to transform a recursive program into a nonrecursive one that performs the same computation. Conversely, we can also use recursion to express without loops any computation that involves loops.
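To make this correspondence concrete, here is a small sketch (class and method names are ours, not the book's) with both versions of the factorial computation side by side; the main method checks that they agree for small arguments:

```java
public class FactorialBoth {
    // Iterative version, as in the for loop above.
    static int factorialLoop(int N) {
        int t = 1;
        for (int i = 1; i <= N; i++) t *= i;
        return t;
    }

    // Recursive version, as in Program 5.1.
    static int factorialRec(int N) {
        if (N == 0) return 1;
        return N * factorialRec(N - 1);
    }

    public static void main(String[] args) {
        // The two computations must agree wherever N! fits in an int.
        for (int N = 0; N <= 10; N++)
            if (factorialLoop(N) != factorialRec(N))
                throw new AssertionError("mismatch at N = " + N);
        System.out.println(factorialRec(10));  // prints 3628800
    }
}
```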

We use recursion because it often allows us to express complex algorithms in a compact form, without sacrificing efficiency. For example, the recursive implementation of the factorial function obviates the need for local variables. The cost of the recursive implementation is borne by the mechanisms in the programming systems that support method invocations, which use the equivalent of a built-in pushdown stack. Most modern programming systems have carefully engineered mechanisms for this task. Despite this advantage, as we shall see, it is all too easy to write a simple recursive method that is extremely inefficient, and we need to exercise care to avoid being burdened with intractable implementations.

Program 5.2 A questionable recursive program

If the argument N is odd, this method calls itself with 3N + 1 as an argument; if N is even, it calls itself with N/2 as an argument. We cannot use induction to prove that this program terminates, because not every recursive call uses an argument smaller than the one given.

 static int puzzle(int N)
   {
     if (N == 1) return 1;
     if (N % 2 == 0)
       return puzzle(N/2);
     else return puzzle(3*N+1);
   }

Program 5.1 illustrates the basic features of a recursive program: it calls itself (with a smaller value of its argument), and it has a termination condition in which it directly computes its result. We can use mathematical induction to convince ourselves that the program works as intended:

  • It computes 0! (basis).

  • Under the assumption that it computes k! for k < N (inductive hypothesis), it computes N!.

Reasoning like this can provide us with a quick path to developing algorithms that solve complex problems, as we shall see.

In a programming language such as Java, there are few restrictions on the kinds of programs that we write, but we strive to limit ourselves in our use of recursive methods to those that embody inductive proofs of correctness like the one outlined in the previous paragraph. Although we do not consider formal correctness proofs in this book, we are interested in putting together complicated programs for difficult tasks, and we need to have some assurance that the tasks will be completed properly. Mechanisms such as recursive methods can provide such assurances while giving us compact implementations. Practically speaking, the connection to mathematical induction tells us that we should ensure that all of our recursive methods satisfy two basic properties:

Program 5.3 Euclid's algorithm

One of the oldest known algorithms, dating back over 2000 years, is this recursive method for finding the greatest common divisor of two integers.

 static int gcd(int M, int N)
   {
     if (N == 0) return M;
     return gcd(N, M % N);
   }
  • They must explicitly solve a basis case.

  • Each recursive call must involve smaller values of the arguments.

These points are vague; they amount to saying that we should have a valid inductive proof for each recursive method that we write. Still, they provide useful guidance as we develop implementations.

Program 5.2 is an example that illustrates the need for an inductive argument. It is a recursive method that violates the rule that each recursive call must involve smaller values of the arguments, so we cannot use mathematical induction to understand it. Indeed, it is not known whether or not this computation terminates for every N if there are no bounds on the size of N. For small integers that can be represented as ints, we can check that the program terminates (see Figure 5.1 and Exercise 5.4), but for large integers (64-bit words, say), we do not know whether or not this program goes into an infinite loop.
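Along the lines of Exercise 5.4, we can check termination for small arguments with a loop rather than recursion, so that a long chain of calls cannot overflow the stack. The sketch below (class and method names are ours) counts the invocations that puzzle(N) would make; it uses long arithmetic because 3N + 1 can exceed the int range:

```java
public class PuzzleCheck {
    // Counts how many invocations puzzle(N) would make before reaching 1.
    // A loop replaces the recursion, and long arithmetic avoids int overflow
    // in the 3N + 1 step.
    static int puzzleCalls(long N) {
        int calls = 1;
        while (N != 1) {
            N = (N % 2 == 0) ? N / 2 : 3 * N + 1;
            calls++;
        }
        return calls;
    }

    public static void main(String[] args) {
        // Sequence lengths for 1 through 9 (cf. Exercise 5.3).
        for (int N = 1; N <= 9; N++)
            System.out.println(N + ": " + puzzleCalls(N));
    }
}
```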

Figure 5.1. Example of a recursive method invocation chain

This nested sequence of method invocations eventually terminates, but we cannot prove that the recursive method in Program 5.2 does not have arbitrarily deep nesting for some argument. We prefer recursive programs that always invoke themselves with smaller arguments.

graphics/05fig01.gif

Program 5.3 is a compact implementation of Euclid's algorithm for finding the greatest common divisor of two integers. It is based on the observation that the greatest common divisor of two integers x and y with x > y is the same as the greatest common divisor of y and x mod y (the remainder when x is divided by y). A number t divides both x and y if and only if t divides both y and x mod y, because x is equal to x mod y plus a multiple of y. The recursive calls made for an example invocation of this program are shown in Figure 5.2. For Euclid's algorithm, the depth of the recursion depends on arithmetic properties of the arguments (it is known to be logarithmic).

Figure 5.2. Example of Euclid's algorithm

This nested sequence of method invocations illustrates the operation of Euclid's algorithm in discovering that 314159 and 271828 are relatively prime.

graphics/05fig02.gif

Program 5.4 is an example with multiple recursive calls. It is another expression evaluator that performs essentially the same computations as Program 4.5 but on prefix (rather than postfix) expressions and lets recursion take the place of the explicit pushdown stack. In this chapter, we shall see many other examples of recursive programs and equivalent programs that use pushdown stacks. We shall examine the specific relationship between several pairs of such programs in detail.

Program 5.4 Recursive program to evaluate prefix expressions

To evaluate a prefix expression, we either convert a number from ASCII to binary (in the while loop at the end) or perform the operation indicated by the first character in the expression on the two operands, evaluated recursively. This method is recursive, but it uses a global array containing the expression and an index to the current character in the expression. The index is advanced past each subexpression evaluated.

 static char[] a;
 static int i;
 static int eval()
   { int x = 0;
     while (a[i] == ' ') i++;
     if (a[i] == '+')
       { i++; return eval() + eval(); }
     if (a[i] == '*')
       { i++; return eval() * eval(); }
     while ((a[i] >= '0') && (a[i] <= '9'))
       x = 10*x + (a[i++]-'0');
     return x;
   }

Figure 5.3 shows the operation of Program 5.4 on a sample prefix expression. The multiple recursive calls mask a complex series of computations. Like most recursive programs, this program is best understood inductively: Assuming that it works properly for simple expressions, we can convince ourselves that it works properly for complex ones. This program is a simple example of a recursive descent parser; we can use the same process to convert Java programs into machine code.

Figure 5.3. Prefix expression evaluation example

This nested sequence of method invocations illustrates the operation of the recursive prefix-expression evaluation algorithm on a sample expression. For simplicity, the expression arguments are shown here. The algorithm itself never explicitly decides the extent of its argument string: rather, it takes what it needs from the front of the string.

graphics/05fig03.gif

A precise inductive proof that Program 5.4 evaluates the expression properly is certainly much more challenging to write than are the proofs for methods with integer arguments that we have been discussing, and we shall encounter recursive programs and data structures that are even more complicated than this one throughout this book. Accordingly, we do not pursue the idealistic goal of providing complete inductive proofs of correctness for every recursive program that we write. In this case, the ability of the program to "know" how to separate the operands corresponding to a given operator seems mysterious at first (perhaps because we cannot immediately see how to do this separation at the top level), but it is actually a straightforward calculation (because the path to pursue at each method invocation is unambiguously determined by the first character in the expression).
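For experimentation, Program 5.4 can be packaged as a self-contained class. The sketch below adds a convenience wrapper that loads the global array, plus a bounds check in the digit-scanning loop so that a number at the very end of the input does not run past the array; both are our additions, not part of Program 5.4:

```java
public class PrefixEval {
    static char[] a;   // the expression
    static int i;      // index of the current character

    // Program 5.4, with a bounds check added for a trailing number.
    static int eval() {
        int x = 0;
        while (a[i] == ' ') i++;
        if (a[i] == '+') { i++; return eval() + eval(); }
        if (a[i] == '*') { i++; return eval() * eval(); }
        while (i < a.length && a[i] >= '0' && a[i] <= '9')
            x = 10 * x + (a[i++] - '0');
        return x;
    }

    // Convenience wrapper (our addition): load the globals, then evaluate.
    static int eval(String s) {
        a = s.toCharArray();
        i = 0;
        return eval();
    }

    public static void main(String[] args) {
        // The expression from Exercise 5.8: (12*12*12) + 144 = 1872.
        System.out.println(eval("+ * * 12 12 12 144"));  // prints 1872
    }
}
```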

In principle, we can replace any for loop by an equivalent recursive program. Often, the recursive program is a more natural way to express the computation than the for loop, so we may as well take advantage of the mechanism provided by the programming system that supports recursion. There is one hidden cost, however, that we need to bear in mind. As is plain from the examples that we examined in Figures 5.1 through 5.3, when we execute a recursive program, we are nesting method invocations until we reach a point where we do not do a recursive invocation, and we return instead. In most programming environments, such nested method invocations are implemented using the equivalent of built-in pushdown stacks. We shall examine the nature of such implementations throughout this chapter. The depth of the recursion is the maximum degree of nesting of the method invocations over the course of the computation. Generally, the depth will depend on the input. For example, the depths of the recursions for the examples depicted in Figures 5.2 and 5.3 are 9 and 4, respectively. When using a recursive program, we need to take into account that the programming environment has to maintain a pushdown stack of size proportional to the depth of the recursion. For huge problems, the space needed for this stack might prevent us from using a recursive solution.

Data structures built from objects with fields that are references to objects of the same type are inherently recursive. For example, our definition of linked lists in Chapter 3 (Definition 3.3) is recursive. Therefore, recursive programs provide natural implementations of many commonly used functions for manipulating such data structures. Program 5.5 comprises four examples. We use such implementations frequently throughout the book, primarily because they are so much easier to understand than are their nonrecursive counterparts. However, we must exercise caution in using programs such as those in Program 5.5 when processing huge lists: the depth of the recursion for those methods can be proportional to the length of the lists, so the space required for the recursive stack might become prohibitive.

Some programming environments automatically detect and eliminate tail recursion (when the last action of a method is a recursive invocation) because it is not strictly necessary to add to the depth of the recursion in such a case. This improvement would effectively transform the count, traversal, and deletion methods in Program 5.5 into loops, but it does not apply to the reverse-order traversal method.
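For example, count in Program 5.5 can be put into tail-recursive form by passing the running total along in an accumulator, after which the transformation to a loop is mechanical. A sketch, with a minimal Node class of our own for illustration (note that typical Java virtual machines do not perform tail-call elimination, so the loop form is the safe choice for huge lists):

```java
public class TailCount {
    // Minimal stand-in for the book's list node, holding int items.
    static class Node {
        int item; Node next;
        Node(int item, Node next) { this.item = item; this.next = next; }
    }

    // Tail-recursive count: the recursive call is the last action,
    // with the running total carried in the accumulator acc.
    static int count(Node h, int acc) {
        if (h == null) return acc;
        return count(h.next, acc + 1);
    }

    // The loop that a tail-call-eliminating compiler would effectively produce.
    static int countLoop(Node h) {
        int acc = 0;
        for (; h != null; h = h.next) acc++;
        return acc;
    }

    public static void main(String[] args) {
        Node list = new Node(1, new Node(2, new Node(3, null)));
        System.out.println(count(list, 0) + " " + countLoop(list));  // prints 3 3
    }
}
```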

In Sections 5.2 and 5.3, we consider two families of recursive algorithms that represent essential computational paradigms. Then, in Sections 5.4 through 5.7, we consider recursive data structures that serve as the basis for a very large fraction of the algorithms that we consider.

Exercises

5.1 Write a recursive program to compute lg(N!).

5.2 Modify Program 5.1 to compute N! mod M, so that overflow is no longer an issue. Try running your program for M = 997 and N = 10^3, 10^4, 10^5, and 10^6 in order to get an indication of how your programming system handles deeply nested recursive calls.

5.3 Give the sequences of argument values that result when Program 5.2 is invoked for each of the integers 1 through 9.

5.4 Find the value of N < 10^6 for which Program 5.2 makes the maximum number of recursive calls.

5.5 Provide a nonrecursive implementation of Euclid's algorithm.

5.6 Give the figure corresponding to Figure 5.2 for the result of running Euclid's algorithm for the inputs 89 and 55.

F_N and F_{N+1}).

5.8 Give the figure corresponding to Figure 5.3 for the result of recursive prefix-expression evaluation for the input + * * 12 12 12 144.

5.9 Write a recursive program to evaluate postfix expressions.

5.10 Write a recursive program to evaluate infix expressions. You may assume that operands are always enclosed in parentheses.

Program 5.5 Examples of recursive functions for linked lists

These recursive methods for simple list-processing tasks are easy to express but may not be useful for huge lists because the depth of the recursion may be proportional to the length of the list.

The first method, count, counts the number of nodes on the list. The second, traverse, calls visit for each node on the list, from beginning to end. These two methods are both also easy to implement with a for or while loop. The third method, traverseR, does not have a simple iterative counterpart. It calls visit for every node on the list, but in reverse order.

The fourth method, remove, removes all the nodes having a given item value from a list. The structural changes for each invocation are the same as diagrammed in Figure 3.5.

 int count(Node h)
   {
     if (h == null) return 0;
     return 1 + count(h.next);
   }
 void traverse(Node h)
   {
     if (h == null) return;
     h.item.visit();
     traverse(h.next);
   }
 void traverseR(Node h)
   {
     if (h == null) return;
     traverseR(h.next);
     h.item.visit();
   }
 Node remove(Node h, Item v)
   {
     if (h == null) return null;
     if (eq(h.item, v)) return remove(h.next, v);
     h.next = remove(h.next, v);
     return h;
   }
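As a concrete check, the following sketch adapts count, traverseR, and remove from Program 5.5 to a minimal list of int items; the Node class is our own stand-in, eq becomes ==, and traverseR appends to a StringBuilder in place of visit:

```java
public class ListRec {
    // Minimal stand-in for the book's list node, holding int items.
    static class Node {
        int item; Node next;
        Node(int item, Node next) { this.item = item; this.next = next; }
    }

    // Count the nodes on the list, as in Program 5.5.
    static int count(Node h) {
        if (h == null) return 0;
        return 1 + count(h.next);
    }

    // Remove all nodes whose item equals v, as in Program 5.5 (eq -> ==).
    static Node remove(Node h, int v) {
        if (h == null) return null;
        if (h.item == v) return remove(h.next, v);
        h.next = remove(h.next, v);
        return h;
    }

    // Reverse-order traversal: visit the rest of the list first.
    static void traverseR(Node h, StringBuilder out) {
        if (h == null) return;
        traverseR(h.next, out);
        out.append(h.item).append(' ');
    }

    public static void main(String[] args) {
        Node h = new Node(1, new Node(2, new Node(1, new Node(3, null))));
        h = remove(h, 1);              // drops both nodes holding 1
        System.out.println(count(h));  // prints 2
        StringBuilder sb = new StringBuilder();
        traverseR(h, sb);
        System.out.println(sb.toString().trim());  // prints 3 2
    }
}
```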

5.13 Write a recursive program for solving the Josephus problem (see Section 3.3).

5.14 Write a recursive program that deletes the final element of a linked list.

Hint: Use a global variable.


   


Algorithms in Java, Parts 1-4 (3rd Edition)
ISBN: 0201361205
Year: 2002
Pages: 158