
15.2. What Makes Programs Fast?

Where does the speed go? You buy a really fast computer, and an image editor like Photoshop or a video editor like iMovie seems really fast on it. Colors change as quickly as you move the slider. But larger pictures don't seem to process as quickly in Java as in Photoshop. Why?


15.2.1. What Computers Really Understand

In reality, computers do not understand Java, C, Visual Basic, Python, or any other programming language. The basic computer understands only one kind of language: machine language. Machine language instructions are just values in the bytes in memory, and they tell the computer to do very low-level activities. In a real sense, the computer doesn't even "understand" machine language. The computer is just a machine with lots of switches that make data flow this way or that. Machine language is just a bunch of switch settings that make other switches in the computer change. We interpret those data switchings as addition, subtraction, loading data, and storing data.

Each kind of computer has its own machine language. Apple computers and computers that run Windows can't run one another's programs, not because of any philosophical or marketing differences, but because each kind of computer has its own processor (core of the computer that actually executes the machine language). They literally don't understand one another. That's why an .exe program from Windows won't run on a Macintosh, and a Macintosh application won't run on a Windows computer. Those executable files are (almost always) machine language programs.

Machine language looks like a bunch of numbers; it's not particularly user-friendly. Assembler language is a set of words (or near-words) that corresponds one-to-one with machine language. Assembler instructions tell the computer to do things like store numbers into particular memory locations or into special locations (variables or registers) in the computer, test numbers for equality or comparison, or add numbers together or subtract them.

An assembler program (and the corresponding machine language generated by an assembler) to add two numbers together and store them somewhere might look like this:

LOAD #10,R0    ; Load special variable R0 with 10
LOAD #12,R1    ; Load special variable R1 with 12
SUM  R0,R1     ; Add special variables R0 and R1
STOR R1,#45    ; Store the result into memory location #45

01 00 10 01 01 12 02 00 01 03 01 45

An assembler program that might make a decision could look like this:

LOAD R1,#65536   ; Get a character from keyboard
TEST R1,#13      ; Is it an ASCII 13 (Enter)?
JUMPTRUE #32768  ; If true, go to another part of the program
CALL #16384      ; If false, call func. to process the new line

05 01 255 255 10 01 13 20 127 255 122 63 255

Input and output devices are often just memory locations to the computer. Maybe when you store a 255 to location 65,542, suddenly the red component of the pixel at (101,345) is set to maximum intensity. Maybe each time that the computer reads from memory location 897,784, it's a new sample just read from the microphone. In this way, these simple loads and stores handle multimedia, too.


Machine language is executed very quickly. The computer on which this chapter is being typed has a 900-megahertz (MHz) processor. What that means exactly is hard to define, but roughly, it means that this computer processes 900 million machine language instructions per second. A 2-gigahertz (GHz) processor handles 2 billion instructions per second. A 12-byte machine language program corresponding to something like a = b + c executes on this mid-range computer in something like 12/900,000,000 of a second.

15.2.2. Compilers and Interpreters

Applications like Adobe Photoshop and Microsoft Word are typically compiled. That means that they were written in a computer language like C or C++ and then translated into machine language using a program called a compiler. Those programs then execute at the speed of that base processor.

However, programming languages like Python, Scheme, Squeak, Director, and Flash are actually (in most cases) interpreted. Java can be interpreted, too, but in a subtly different way that is explained later (Section 15.2.3). Interpreted programs execute at a slower speed. It's the difference between translating and then doing instructions versus simply doing the instructions.

A detailed example might help. Consider this exercise from an earlier chapter:

Write a class GraphicsInterpreter that reads in a file of graphics commands. GraphicsInterpreter should know a method interpretCommands that takes a filename as input (a String), reads the graphics commands from the file, and then returns a Picture with the graphics command executed on it. The method interpretCommands starts out by creating a 640 x 480 blank picture, then draws on that, and returns it.

There are two kinds of commands:

  • "line 10 20 300 400" should draw a line from (10,20) to (300,400). You can assume that single spaces separate the coordinates.

  • "circle 100 200 10" draws a circle where the enclosing rectangle has an upper-left-hand corner at (100,200) and a diameter of 10.

An input graphics command might look like:

circle 20 20 100
circle 300 20 100
line 210 120 210 320
line 210 320 310 320
line 20 350 400 350

Here's a solution to the exercise. The implementation reads the file a line at a time into a string. Each line is checked to see if it starts with "line" or "circle". Using split, the line gets chopped into pieces; then each of the little strings holding the coordinate numbers is converted to an integer using Integer.parseInt().

Program 142. Interpret Graphics Commands in a File

import*;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;

/**
 * Class that reads in a file of graphics instructions, and
 * executes them, showing the result. Default picture size
 * is 640x480.
 *
 * Format of the file is a bunch of lines of the form:
 * Command X Y <parameters>
 * Commands are:
 * "line" with parameters of start and end X and Y and
 * "circle" with the upper-left corner of the enclosing
 * rectangle and the diameter of the circle
 *
 * For example:
 * line 10 10 50 70
 * circle 10 20 30
 *
 * Which draws a line from (10,10) to (50,70) and a
 * circle at (10,20) with a diameter of 30.
 *
 * @author Barb Ericson
 * @author Mark Guzdial
 */
public class GraphicsInterpreter
{
  /**
   * Method to interpret the commands in the given file
   */
  public Picture interpretCommands(String fileName)
  {
    String line = null;
    Picture frame = new Picture(640,480);
    String[] params = null;
    int x1, y1, x2, y2, diameter;
    Graphics g = frame.getGraphics();
    g.setColor(;

    // try the following
    try {
      // read from the file
      BufferedReader reader =
        new BufferedReader(new FileReader(fileName));

      // loop till end of file
      while ((line = reader.readLine()) != null)
      {
        // what command is this?
        if (line.startsWith("line"))
        {
          // Get the parameters for drawing the line
          params = line.split(" ");
          // params[0] should be "line"
          x1 = Integer.parseInt(params[1]);
          y1 = Integer.parseInt(params[2]);
          x2 = Integer.parseInt(params[3]);
          y2 = Integer.parseInt(params[4]);
          // Now, draw the line in
          g.drawLine(x1,y1,x2,y2);
        }
        else if (line.startsWith("circle"))
        {
          // Get the parameters for drawing the circle
          params = line.split(" ");
          // params[0] should be "circle"
          x1 = Integer.parseInt(params[1]);
          y1 = Integer.parseInt(params[2]);
          diameter = Integer.parseInt(params[3]);
          // Now, draw the circle in
          g.drawOval(x1,y1,diameter,diameter);
        }
        else {
          System.out.println("Uh-oh! Invalid command! " + line);
          return frame;
        }
      }
      reader.close();
    } catch (FileNotFoundException ex) {
      System.out.println("Couldn't find file " + fileName);
      fileName = FileChooser.pickAFile();
      return interpretCommands(fileName);
    } catch (Exception ex) {
      System.out.println("Error during read or write");
      ex.printStackTrace();
    }
    return frame;
  }

  public static void main(String[] args)
  {
    GraphicsInterpreter interpreter = new GraphicsInterpreter();
    String fileName =
      FileChooser.getMediaPath("graphics-commands.txt");
    Picture p = interpreter.interpretCommands(fileName);;
  }
}

This solution works; see Figure 15.1, which results from executing this program with the graphics-commands.txt file containing:

circle 20 20 100
circle 300 20 100
line 210 120 210 320
line 210 320 310 320
line 20 350 400 350

Figure 15.1. Results of running the GraphicsInterpreter.

How it Works

The graphics commands are assumed to be in the file whose name is passed to the interpretCommands method. We open a blank 640 x 480 picture, then get the graphics context for drawing on it. For each string line in the input file, we check whether it starts with "line" or "circle". If it's a "line", we chop out the starting x and y coordinates and the ending x and y coordinates by using split on the string. Then we draw the line. If the command is a "circle", we get the two coordinates and the diameter, and draw the circle as an oval whose height and width are both the diameter. At the end, we return the resulting Picture object.
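The split-and-parse step can be seen in isolation with a short sketch (the class name and sample command here are my own, not part of the book's program):

```java
public class ParseDemo {
    public static void main(String[] args) {
        // A sample command in our graphics language
        String command = "line 10 20 300 400";
        // split chops the string at each single space
        String[] params = command.split(" ");
        // params[0] is the command word; the rest are coordinate strings
        int x1 = Integer.parseInt(params[1]);
        int y1 = Integer.parseInt(params[2]);
        int x2 = Integer.parseInt(params[3]);
        int y2 = Integer.parseInt(params[4]);
        System.out.println(params[0] + ": (" + x1 + "," + y1 +
                           ") to (" + x2 + "," + y2 + ")");
        // prints: line: (10,20) to (300,400)
    }
}
```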

What we've just done is implement a new language for graphics. We have even created an interpreter that reads the instructions of our new language and creates the picture that goes along with them. In principle, this is just what PostScript, PDF, Flash, and AutoCAD are doing. Their file formats specify pictures in just the way that our graphics language does. When they draw (render) the image to the screen, they are interpreting the commands in that file.

While we probably can't tell from such a small example, this is a relatively slow language. Consider the program below. Imagine that we compiled it and ran it: would it run faster than reading the list of commands and interpreting them? Both this program and the list of commands that produced Figure 15.1 generate the exact same picture.

Program 143. Main with Drawing Commands

import java.awt.*;

public class GeneratedDrawing{
  public static void main(String args[]){
    Picture frame = new Picture(640,480);
    Graphics g = frame.getGraphics();
    g.setColor(;
    g.drawOval(20,20,100,100);
    g.drawOval(300,20,100,100);
    g.drawLine(210,120,210,320);
    g.drawLine(210,320,310,320);
    g.drawLine(20,350,400,350);;
  } // end main()
} // end class

In general, we'd probably guess (correctly) that the direct instructions above will run faster than reading the list and interpreting it. Here's an analogy that might help. Mark took French in college, but he says that he is really bad at it. Let's say that someone gave Mark a list of instructions in French. He could meticulously look up each word and figure out the instructions, and do them. What if he was asked to do the instructions again? He would have to look up each word again. What if they asked him to do it 10 times? He would do 10 lookups of all the words.

Now, let's imagine that he wrote down the English (his native language) translation of the French instructions. He can then repeat the list of instructions as often as we like, very quickly. It takes him no time to look up any words (though it probably depends on what he is being asked to do; brain surgery is out). In general, figuring out the language takes some time that is just overhead; just doing the instructions (or drawing the graphics) will always be faster.

Here's an idea: Could we generate the preceding program? Could we write a program that takes as input the graphics language we invented, then writes a Java program that draws the same pictures? This turns out not to be that hard. This would be a compiler for the graphics language.

Program 144. Compiler for New Graphics Language

import*;

/**
 * Class that reads in a file of graphics instructions, and
 * then generates a NEW Java program that
 * does the same thing as the instructions. The default picture
 * size is 640x480.
 *
 * Format of the file is a bunch of lines of the form:
 * Command X Y <parameters>
 *
 * Commands are:
 * "line" with parameters of start and end X and Y and
 * "circle" with the upper-left corner of the enclosing
 * rectangle and the diameter of the circle
 *
 * For example:
 * line 10 10 50 70
 * circle 10 20 30
 *
 * Which draws a line from (10,10) to (50,70) and a
 * circle at (10,20) with a diameter of 30.
 *
 * @author Barb Ericson
 * @author Mark Guzdial
 */
public class GraphicsCompiler
{
  /** Method to write out the prologue for the new program:
   * All the imports, the class definition, main, etc.
   * @param file BufferedWriter to write the prologue to
   **/
  public void writePrologue(BufferedWriter file)
  {
    try {
      // Write out the prologue lines
      file.write("import java.awt.*;"); file.newLine();
      file.write("public class GeneratedDrawing{"); file.newLine();
      file.write(" public static void main(String args[]){"); file.newLine();
      file.write("  Picture frame = new Picture(640,480);"); file.newLine();
      file.write("  Graphics g = frame.getGraphics();"); file.newLine();
      file.write("  g.setColor(;"); file.newLine();
    } catch (Exception ex) {
      System.out.println("Error during write of prologue");
    }
  }

  /** Method to write out the epilogue for the new program:
   * Show the picture. Close the main and the class.
   * @param file BufferedWriter to write the epilogue to
   **/
  public void writeEpilogue(BufferedWriter file)
  {
    try {
      // Write out the epilogue lines
      file.write(";"); file.newLine();
      file.write(" } // end main()"); file.newLine();
      file.write("} // end class"); file.newLine();
    } catch (Exception ex) {
      System.out.println("Error during write of epilogue");
    }
  }

  /**
   * Method to compile the commands in the given file
   * @param fileName the file to read from
   */
  public void compileCommands(String fileName)
  {
    String line = null;
    String[] params = null;
    int x1, y1, x2, y2, diameter;

    // try the following
    try {
      // read from the file
      BufferedReader reader =
        new BufferedReader(new FileReader(fileName));
      BufferedWriter writer =
        new BufferedWriter(new FileWriter(
          FileChooser.getMediaPath("")));
      writePrologue(writer);

      // loop till end of file
      while ((line = reader.readLine()) != null)
      {
        // what command is this?
        if (line.startsWith("line"))
        {
          // Get the parameters for drawing the line
          params = line.split(" ");
          // params[0] should be "line"
          x1 = Integer.parseInt(params[1]);
          y1 = Integer.parseInt(params[2]);
          x2 = Integer.parseInt(params[3]);
          y2 = Integer.parseInt(params[4]);
          // Now, write the line that will LATER
          // draw the line
          writer.write("  g.drawLine("+x1+","+y1+", "+x2+","+y2+");");
          writer.newLine();
        }
        else if (line.startsWith("circle"))
        {
          // Get the parameters for drawing the circle
          params = line.split(" ");
          // params[0] should be "circle"
          x1 = Integer.parseInt(params[1]);
          y1 = Integer.parseInt(params[2]);
          diameter = Integer.parseInt(params[3]);
          // Now, write the line that will LATER
          // draw the circle
          writer.write("  g.drawOval("+x1+","+y1+", "+diameter+","+diameter+");");
          writer.newLine();
        }
        else {
          System.out.println("Uh-oh! Invalid command! " + line);
          return;
        }
      }
      writeEpilogue(writer);
      writer.close();
    } catch (FileNotFoundException ex) {
      System.out.println("Couldn't find file " + fileName);
      fileName = FileChooser.pickAFile();
      compileCommands(fileName);
    } catch (Exception ex) {
      System.out.println("Error during read or write");
      ex.printStackTrace();
    }
  }

  public static void main(String[] args)
  {
    GraphicsCompiler compiler = new GraphicsCompiler();
    String fileName =
      FileChooser.getMediaPath("graphics-commands.txt");
    compiler.compileCommands(fileName);
  }
}

How it Works

The compiler accepts the same input as the interpreter (the name of a file that contains our graphics commands), but instead of opening a Picture to draw on, we open a file named "" in the current mediasources directory. Using the writePrologue method, we write to that file the start of a class and a main method: the public class GeneratedDrawing, and so on. We also write out the code to create a Picture and a graphics context. Note that we're not really making the Picture here; we're simply writing out the Java commands that will make the Picture. Those commands will be executed later, when the class GeneratedDrawing is compiled and its main method is executed. Then, just like the interpreter, we figure out which graphics command we have ("line" or "circle") and extract the coordinates from the input string. Then we write out to the code file "" the commands to do the drawing. Notice that we're reading the commands while executing the class GraphicsCompiler, and the result is that we're writing the class GeneratedDrawing, which will be compiled and executed later. At the end of the method compileCommands, we write out the commands to show the frame. Finally, we close the file.


Now the compiler has a bunch of overhead, too. We still have to do the looking up of what the graphics commands mean. If we have only a small graphics program to run, and we need it only once, we might as well just run the interpreter. But what if we needed to draw the picture 10 times, or 100 times? Then we pay the overhead of compiling the program once, and the next nine or 99 times, we run it as fast as we possibly can. That will almost certainly be faster than paying the interpretation overhead 100 times.

This is what compilers are all about. Applications like Photoshop and Word are written in languages like C or C++ and then are compiled to equivalent machine language programs. The machine language program does the same thing that the C language says to do, just as the graphics programs created from our compiler do the same things as our graphics language says to do. But the machine language program runs much faster than we could interpret the C or C++.

Computer Science Idea: Compilers are Actually Programs

Compilers are one of the most magical things in computer science. Look again at the list of graphics commands that generated Figure 15.1. That's a program. Now look again at the Java program that GraphicsCompiler generated. Those are two completely different programs, but they do the same thing. A compiler writes an entirely new program in one language, given input in a different language. It's a program that writes programs.
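The idea can be shrunk to a toy sketch: a program that reads a one-command "source program" and writes out an equivalent Java program. The tiny language, file name, and class names here are all invented for illustration:

```java
import*;

public class TinyCompiler {
    public static void main(String[] args) throws IOException {
        // A one-line "source program" in a made-up language
        String source = "print 42";
        String number = source.split(" ")[1];
        // Emit an equivalent Java program to
        PrintWriter out = new PrintWriter(new FileWriter(""));
        out.println("public class Generated {");
        out.println("  public static void main(String[] args) {");
        out.println("    System.out.println(" + number + ");");
        out.println("  }");
        out.println("}");
        out.close();
        System.out.println("Wrote");
    }
}
```

Compiling and running the emitted would print 42. The translation cost is paid once; after that, the generated program runs directly, as many times as we like.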

15.2.3. The Special Case of Java

Originally, Java programs were designed to be interpreted. Java programs didn't originally compile to machine language for whatever computer they were run on. Instead, Java programs compiled to the machine language for a make-believe processor: a virtual machine. The Java Virtual Machine (JVM) doesn't really exist as a physical processor. It's a definition of a processor. What good is that? It turns out that since machine language is very simple, building a machine language interpreter is pretty easy. It's just like our GraphicsInterpreter, except that it reads in the bytes of a machine language program for the JVM, then just does what they say.

The result is that a JVM interpreter can be very easily made to run on just about any processor. That means that a program in Java is compiled once and then runs everywhere. Devices as small as wristwatches can run the same Java programs that run on large computers, because a JVM interpreter can run even on processors that live on one's wristwatch. There's also an economic argument for virtual machines. Imagine that you're writing software for a programmable toaster oven. If the manufacturer decides to change the processor in the toaster oven, you have to recompile your traditional C or C++ programs to run on the new processor. But if both the old and new processor have JVM interpreters, then your Java programs will run on both without change or recompilation. Thus, a virtual machine can mean that you're less bound to a given processor, and the manufacturer has more flexibility to buy the least-expensive processor available.


On most computers today, Java does execute as machine language; Java can be compiled to machine language. But even when Java is compiled only to JVM machine language, modern JVM interpreters are actually JVM compilers. When you tell Java on a Windows or Macintosh computer today "Go run this JVM machine language program," what it actually does is pause a moment, compile the JVM machine language to native machine language, and then run the native machine language. Computers are so fast today that you don't really notice the pause while it's compiling.

That's the first part of the answer to the question "Why is Photoshop faster than Java for large pictures?" Photoshop is running in native machine code, while our Java programs are running on a JVM interpreter, which, even if it does compile to native machine language first, is still slightly slower than straight machine language.

Then why have an interpreter at all? There are many good reasons. Here are three:

  • Do you like the Interactions Pane? Did you even once ever type in some example code just to try it? That kind of interactive, exploratory, trying-things-out programming is available with interpreters. Compilers don't let you easily try things out line-by-line and print out results. Interpreters are good for learners.

  • Once a program is compiled to Java machine language, it can be used anywhere, from huge computers to programmable toaster ovens, as is! That's a big savings for software developers. They ship only one program, and it runs on anything.

  • Virtual machines are safer than running machine language. A program running in machine language might do all kinds of non-secure things. A virtual machine can carefully keep track of the programs that it is interpreting to make sure that they only do safe things, like use only valid indices in arrays.

15.2.4. How Fast Can We Really Go?

The raw power of compiled vs. interpreted programs is only part of the answer to why Photoshop is faster. The deeper part, one which can actually make interpreted programs faster than compiled programs, is in the design of the algorithms. There's a temptation to think, "Oh, it's okay if it's slow now. Wait 18 months; we'll get double the processor speed, and then it will be fine." But there are some algorithms so slow that they will never end in your lifetime, and others that can't be written at all. Rewriting the algorithm to be smarter about what we ask the computer to do can make a dramatic impact on performance.


An algorithm is a description of behavior for solving a problem. A program (classes and methods, in Java) is an executable interpretation of an algorithm. The same algorithm can be implemented in many different languages, and there is always more than one algorithm to solve the same problem. Some computer scientists study algorithms and come up with ways to compare them and decide which ones are better than others.

We've seen several algorithms that appear in different ways but are really doing the same things:

  • Sampling to scale up or down a picture or to lower or raise the frequency of a sound.

  • Blending to merge two pictures or two sounds.

  • Mirroring of sounds and pictures.

All of these process data in the same way; it's just the data that changes: pixels for pictures, samples for sounds. We say that these are the same algorithms.

We can compare algorithms based on several criteria. One is how much space the algorithm needs to run. How much memory does the algorithm require? That can become a significant issue for media computation because so much memory is required to hold all that data. Think about how bad (unusable in normal situations) an algorithm would be that needed to hold all the frames of a movie in a list in memory at the same time.

The most common criterion used to compare algorithms is time. How much time does the algorithm take? We don't literally mean clock time, but rather how many steps the algorithm requires. Computer scientists use Big-Oh notation, or O(), to refer to the magnitude of the running time of an algorithm. The idea of Big-Oh is to express how much slower the program gets as the input data get larger. If the data get twice as large, an O(n) algorithm takes twice as long to run, but an O(n²) algorithm takes four times longer to run. Big-Oh notation tries to ignore differences between languages, even between compiled versus interpreted ones, and focus on the number of steps to be executed.

Think about our basic picture and sound processing examples like increaseRed or increaseVolume. Some of the complexity of these programs is hidden in provided methods like getPixels() and getSamples(). In general, though, we refer to these as being O(n): the amount of time that the program takes to run is linearly proportional to the size of the input data. If the picture or sound doubled in size, we'd expect the program to take twice as long to run.
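A minimal sketch of what O(n) means, using a plain int array as a stand-in for the pixels returned by getPixels() (the array, its size, and the class name are invented for illustration):

```java
public class LinearWork {
    public static void main(String[] args) {
        int[] pixels = new int[1000];  // stand-in for a picture's pixels
        long steps = 0;
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = pixels[i] + 1;  // the whole loop body counts as one step
            steps++;
        }
        // One step per pixel: double the pixels, double the steps
        System.out.println(steps);  // prints 1000
    }
}
```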

When we figure out Big-Oh, we typically clump the body of the loop into one step. We think about those functions as processing each sample or pixel once, so the real time spent in those programs is the main loop, and it doesn't really matter how many statements are in that loop.


Unless there is another loop in that loop body, that is. Loops are multiplicative in terms of time. Nested loops multiply the amount of time that is needed to run the body. Think about this simple example:

> int count = 0;
> for (int x=0; x<5; x++)
    for (int y=0; y<3; y++)
    {
      count = count + 1;
      System.out.println("Ran "+count+" times: x="+x+" y="+y);
    }

When we run it, we see that it actually executes 15 times: five for the x's, three for the y's, and 5 * 3 = 15.

Ran 1 times: x=0 y=0
Ran 2 times: x=0 y=1
Ran 3 times: x=0 y=2
Ran 4 times: x=1 y=0
Ran 5 times: x=1 y=1
Ran 6 times: x=1 y=2
Ran 7 times: x=2 y=0
Ran 8 times: x=2 y=1
Ran 9 times: x=2 y=2
Ran 10 times: x=3 y=0
Ran 11 times: x=3 y=1
Ran 12 times: x=3 y=2
Ran 13 times: x=4 y=0
Ran 14 times: x=4 y=1
Ran 15 times: x=4 y=2

How about movie code? Since it takes so long to process, is it actually a more complex algorithm? No, not really. Movie code is just processing each pixel once, so it's still O(n). It's just that the n is really, REALLY big!

Not all algorithms are O(n). There is a group of algorithms, called sorting algorithms, that are used to order data alphabetically or numerically. The simplest of these (like the bubble sort or insertion sort) have complexity O(n²). If a list has 100 elements, it'll take on the order of 10,000 steps to sort them with that kind of sort. However, there are smarter algorithms (like the quicksort) that have complexity O(n log n). That same list of 100 elements would take only about 460 steps to process. Those kinds of differences start to have huge real clock-time consequences when you're talking about processing 10,000 customers to put them in order for reports...
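Those step counts can be replayed with a little arithmetic. (The figure of about 460 matches n times the natural logarithm of n; Big-Oh ignores the base of the logarithm, so any base tells the same story.)

```java
public class SortSteps {
    public static void main(String[] args) {
        int n = 100;
        // An O(n^2) sort needs on the order of n*n steps
        System.out.println(n * n);            // prints 10000
        // An O(n log n) sort needs on the order of n*log(n) steps
        System.out.println(n * Math.log(n));  // roughly 460
    }
}
```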

15.2.5. Making Searching Faster

Consider how you might look up a word in the dictionary. One way is to check the first page, then the next page, then the next page, and so on. That's called a linear search, and it's O(n). It's not very efficient. The best case (the fastest the algorithm could possibly be) is that the problem is solved in one step: the word is on the first page. The worst case is n steps, where n is the number of pages: the word could be on the last page. The average case is n/2 steps: the word is halfway through.


We can implement this algorithm as a linear search of an array of strings.

Program 145. Linear Search of an Array

/**
 * Class that demonstrates search algorithms
 * @author Mark Guzdial
 * @author Barb Ericson
 **/
public class Searcher
{
  /**
   * Implement a linear search through the list
   **/
  public static String linearFind(String target, String[] list)
  {
    for (int index = 0; index < list.length; index++)
    {
      if (target.compareTo(list[index]) == 0)
      { return("Found it!"); }
    }
    return("Not found");
  }

  /** main for testing linearFind */
  public static void main(String[] args)
  {
    String[] searchMe = {"apple","bear","cat","dog","elephant"};
    System.out.println(linearFind("apple",searchMe));
    System.out.println(linearFind("cat",searchMe));
    System.out.println(linearFind("giraffe",searchMe));
  }
}

When we run this, we get what we would expect:

> java Searcher
Found it!
Found it!
Not found

But let's use the fact that dictionaries are already in sorted order. We can be smarter about how we search for a word and do it in O(log n) time (log n = x where 2^x = n). Split the dictionary in the middle. Is the word before or after the page you're looking at? If after, look from the middle to the end (i.e., split the book again, but between the middle and the end). If before, look from the start to the middle (split halfway between the start and the middle). Keep repeating until you find the word or it couldn't possibly be there. This is a more efficient algorithm. In the best case, the word is in the first place you look. In the average and worst cases, it's log n steps: keep dividing the n pages in half, and you'll have at most log n splits.
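A quick sketch of the "at most log n splits" claim: repeatedly halving a pile of pages until one page remains takes log₂ n halvings (the page count and class name here are invented):

```java
public class Splits {
    public static void main(String[] args) {
        int pages = 1024;  // a dictionary of 1024 pages
        int splits = 0;
        while (pages > 1) {
            pages = pages / 2;  // each split discards half the pages
            splits++;
        }
        System.out.println(splits);  // prints 10, since 2^10 = 1024
    }
}
```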


Here's a simple (i.e., not the best possible, but illustrative) implementation of this kind of search, called a binary search. Add it to the Searcher class, and modify the main method as shown below.

Program 146. Simple Binary Search

  /**
   * Method to use a binary search to find a target string in a
   * sorted array of strings
   */
  public static String binaryFind(String target, String[] list)
  {
    int start = 0;
    int end = list.length - 1;
    int checkpoint = 0;

    while (start <= end)
    { // While there are more to search
      // find the middle
      checkpoint = (int)((start + end) / 2.0);
      if (target.compareTo(list[checkpoint]) == 0)
      {
        return "Found it!";
      }
      else if (target.compareTo(list[checkpoint]) > 0)
      {
        start = checkpoint + 1;
      }
      else if (target.compareTo(list[checkpoint]) < 0)
      {
        end = checkpoint - 1;
      }
    }
    return "Not found";
  }

  /**
   * Main for testing binaryFind
   */
  public static void main(String[] args)
  {
    String[] searchMe = {"apple","bear","cat","dog","elephant"};
    System.out.println(binaryFind("apple",searchMe));
    System.out.println(binaryFind("cat",searchMe));
    System.out.println(binaryFind("giraffe",searchMe));
  }
}

How it Works

We start with the low-end marker start at the beginning of the list and end at the last index of the list (the length of the list minus one). As long as there is something between start and end, we continue to search. We compute checkpoint as halfway between start and end. We then check to see if we found the target. If so, we're done and we return. If not, we figure out whether we have to move start up past checkpoint or end down below checkpoint, and we continue searching. If we ever exit the loop, we never took the "Found it!" return, so we return that we didn't find it.

To test this, we stuck in a line after assigning checkpoint:

System.out.println("Checking at: " +
   checkpoint + " start=" + start + " end=" + end);

Here's the same main running. With this additional statement, we can see how the code narrows in on "apple" then "bear" and then never finds "giraffe."

Welcome to DrJava.
> java Searcher
Checking at: 2 start=0 end=4
Checking at: 0 start=0 end=1
Found it!
Checking at: 2 start=0 end=4
Found it!
Checking at: 2 start=0 end=4
Checking at: 3 start=3 end=4
Checking at: 4 start=4 end=4
Not found

15.2.6. Algorithms that Never Finish or Can't Be Written

Here's a thought experiment: imagine that you want to write a program that will generate hit songs for you. Your program will recombine bits of sounds that are some of the best riffs you've ever heard on various instruments, some 60 of them. You want to generate every combination of these 60 bits (some in, some out; some earlier in the song, some later). You want to find the combination that is less than 2 minutes 30 seconds long (for optimal radio play time) and has the right amount of high and low volume combinations (and you've got a checkSound() function to test for that).

How many combinations are there? Let's ignore order for right now. Let's say that you've got three sounds: a, b, and c. Your possible songs are a, b, c, ab, ac, bc, and abc. Try it with two sounds or four sounds, and you'll see that the pattern is the same one we saw earlier with bits: for n things, the number of ways to include-or-exclude each one is 2^n. (If we ignore the fact that one of those combinations is the empty song, it's 2^n - 1.)
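The include-or-exclude pattern is easy to check with a short sketch (the class and method names here are ours, not from the book's code): each bit of a counting mask decides whether one sound is in the song, so counting the mask from 1 up to 2^n - 1 lists every non-empty combination.

```java
import java.util.ArrayList;
import java.util.List;

public class Combinations {
  // Return every non-empty combination of the given sounds.
  // Bit i of the mask decides whether sounds[i] is included.
  public static List<String> allCombinations(String[] sounds) {
    List<String> result = new ArrayList<String>();
    int total = 1 << sounds.length; // 2^n masks, counting the empty one
    for (int mask = 1; mask < total; mask++) {
      StringBuilder song = new StringBuilder();
      for (int i = 0; i < sounds.length; i++) {
        if ((mask & (1 << i)) != 0) {
          song.append(sounds[i]);
        }
      }
      result.add(song.toString());
    }
    return result;
  }

  public static void main(String[] args) {
    // Three sounds give 2^3 - 1 = 7 non-empty songs
    System.out.println(allCombinations(new String[] {"a", "b", "c"}));
  }
}
```

Running it lists seven songs for three sounds, which matches the 2^3 - 1 count above.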

Therefore, our 60 sounds will result in 2^60 combinations to run through our length and sound checks. That's 1,152,921,504,606,846,976 combinations. Let's imagine that we can do the checks in only a single instruction each (unbelievable, of course, but we're pretending). On a 1.5-gigahertz computer, we can handle that many combinations in 768,614,336 seconds. Spell that out: That's 12,810,238 minutes, which is 213,504 hours, which is 8,896 days. That's 24 YEARS to run that program. Now, since Moore's Law doubles processor speeds roughly every 18 months, we can soon run that program in much less time. Only 12 YEARS! If we cared about order, too (e.g., abc vs. cba vs. bac), the number of combinations has 63 zeroes in it.

[Page 521]

Finding the absolute optimal combination of just about anything is expensive in time. A running time of O(2^n) like this is not uncommon for these kinds of algorithms. But there are other problems that seem like they should be doable in reasonable time, and aren't.

One of these is the famous Traveling Salesman Problem. Imagine that you're a salesperson, and you're responsible for a bunch of different clients, let's say 30, half the size of the song-optimization problem. To be efficient, you want to find the shortest path on the map that will let you visit each client exactly once.

The best-known algorithm that gives an optimal solution for the Traveling Salesman Problem is O(n!). That's n factorial. To calculate the factorial of a number n, you multiply n by (n - 1), then by (n - 2), and so on all the way down to 1. The factorial of 5 is 5 * 4 * 3 * 2 * 1 = 120.
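Factorials grow so fast that a long overflows past 20!, so this sketch (our own FactorialDemo class) uses BigInteger to compute the values used here.

```java
import java.math.BigInteger;

public class FactorialDemo {
  // Multiply n * (n-1) * ... * 1. BigInteger avoids the overflow
  // that a long would hit for n >= 21.
  public static BigInteger factorial(int n) {
    BigInteger result = BigInteger.ONE;
    for (int i = 2; i <= n; i++) {
      result = result.multiply(BigInteger.valueOf(i));
    }
    return result;
  }

  public static void main(String[] args) {
    System.out.println("5!  = " + factorial(5));
    System.out.println("30! = " + factorial(30));
  }
}
```

It prints 120 for 5! and the 33-digit number for 30! quoted in the next paragraph.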

There are algorithms that take less time to run and give a good path, but that path isn't guaranteed to be the shortest. For 30 cities, the number of steps to execute an O(n!) algorithm that finds the shortest path is 30!, or 265,252,859,812,191,058,636,308,480,000,000. Go ahead and run that on a 1.5-gigahertz processor; it won't get done in your lifetime.

The really aggravating part is that the Traveling Salesman Problem isn't some made-up, toy problem. There really are people who have to plan shortest routes in the world. There are similar problems that are basically the same algorithmically, like planning the route of a robot on a factory floor. This is a big, hard problem.

Computer scientists classify problems into three piles:

  • Many problems (like sorting) can be solved with an algorithm whose running time has a complexity that's a polynomial, like O(n^2). We call these class P (P for Polynomial) problems.

  • Other problems, like optimization, have known algorithms (solutions to those problems) but the solutions are so hard and big that we know we just can't solve them in a reasonable amount of time even for reasonable amounts of data. We call these problems intractable.

  • Still other problems, like Traveling Salesman, seem intractable, but maybe there's a solution in class P that we just haven't found yet. We call these class NP.

One of the biggest unsolved problems in theoretical computer science is either proving that class NP and class P are completely distinct (i.e., we'll never solve Traveling Salesman optimally in polynomial time), or that class NP is within class P.

You might wonder, "How can we prove anything about algorithms?" There are so many different languages, and different ways of writing the same algorithm. How can we positively prove something is doable or not doable? We can, it turns out. In fact, Alan Turing proved that there are even algorithms that can't be written!

[Page 522]

The most famous algorithm that can't be written is the solution to the Halting Problem. We've already written programs that can read other programs and write out other programs. We can imagine a program that can read one program and tell us things about it (e.g., how many print statements are in it). Can we write a program that will input another program (e.g., from a file) then tell us if the program will ever stop? Think about the input program having some complex while loops where it's hard to tell if the expression in the while loop is ever false. Now imagine a bunch of these, all nested within one another.

Alan Turing proved that such a program can never be written. He used proof by contradiction. He showed that if such a program (call it H) could ever be written, you could feed any program to it as input, including H itself. Now modify H into a new program (call it H2) so that whenever H would say "This one halts!", H2 instead loops forever (e.g., while (true)), and whenever H would say "This one loops forever," H2 halts. Now feed H2 to itself. H2 halts only if H says that H2 loops forever, and H2 loops forever only if H says that H2 halts. No program can behave that way, so H can't exist.

The really amazing thing is that Turing came up with this proof in 1936, almost ten years before the first computers were ever built! He defined a mathematical concept of a computer, called a Turing machine, that let him make such proofs before physical computers existed.

Here's another thought experiment for you: Is human intelligence computable? Our brains are executing some process that enables us to think, right? Can we write down that process as an algorithm? And if a computer executes that algorithm, is it thinking? Is a human reducible to a computer? This is one of the big questions in the field of artificial intelligence.

15.2.7. Why Is Photoshop Faster than Our Programs in Java?

We can now answer the question of why Photoshop is faster than our programs in Java. First, Photoshop is compiled, so it runs at raw machine language speeds.

But the other part is that Photoshop has algorithms that are smarter than what we're doing. For example, think about the programs where we searched for colors, like in chromakey or in making hair red. We know that the background color and the hair color were clumped next to one another. What if, instead of linearly searching all pixels, you searched only from where the color was what you were looking for, until you didn't find that color anymore, because you'd reached the boundary? That would be a smarter search. That's the kind of thing that Photoshop does.
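That smarter search can be sketched in Java. This is not Photoshop's actual code, just the idea on a single row of pixel colors (the class and findRun method are hypothetical names of ours): start at a seed pixel that matches the target color and spread outward until the color stops matching, touching only the clump and its boundary instead of every pixel.

```java
public class RegionSearch {
  // Starting from seed, spread left and right while pixels match target;
  // return {first, last} indexes of the matching run, or null if the
  // seed pixel doesn't match at all.
  public static int[] findRun(int[] pixels, int seed, int target) {
    if (pixels[seed] != target) {
      return null; // seed isn't inside the region
    }
    int first = seed;
    int last = seed;
    while (first > 0 && pixels[first - 1] == target) {
      first--;
    }
    while (last < pixels.length - 1 && pixels[last + 1] == target) {
      last++;
    }
    return new int[] {first, last};
  }

  public static void main(String[] args) {
    // A clump of color 3 sits in the middle of a row of color 7
    int[] row = {7, 7, 3, 3, 3, 3, 7, 7, 7};
    int[] run = findRun(row, 4, 3);
    System.out.println("Run from " + run[0] + " to " + run[1]);
  }
}
```

The same idea in two dimensions is a flood fill: spread from the seed to neighboring matching pixels until you hit the boundary on every side.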

Introduction to Computing & Programming Algebra in Java(c) A Multimedia Approach
Year: 2007