A Short History of Programming


To understand why object-oriented programming is of such great benefit to you as a programmer, it's useful to look at the history of programming as a technology.

In the early days of computing, programming was an extremely laborious process. Each step the computer needed to take had to be meticulously (and flawlessly) programmed. The earliest languages were known as machine languages, which later evolved into assembly languages. Machine language programming required programmers to code CPU instructions that manipulated individual memory locations to achieve a desired result. Assembly language programming provided a minimal level of abstraction by combining commonly used sequences of instructions into higher-level instructions that could be referred to by name.

The painstaking detail required to program this way can be seen in an example of evaluating a simple expression. Consider a case in which you need to evaluate (4 * 5) + 7 and assign the result to a variable. In a higher-level language, this requires only a statement of the form "c = 4 * 5 + 7". However, to achieve the same result using machine code, you would have to write individual instructions for steps such as: set register A to 4, set register B to 5, multiply register A by B and store the result in register C, set register A to 7, add registers A and C, and store the sum in memory location 15234.
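In a modern language such as Java, the entire register-level sequence collapses into a single statement. The sketch below (the class name is illustrative) echoes the machine-level steps as comments:

```java
public class Expression {
    public static void main(String[] args) {
        // One statement replaces the machine-code sequence:
        //   set register A to 4, set register B to 5,
        //   multiply A by B into C, set A to 7,
        //   add A and C, store the sum in a memory location.
        int c = 4 * 5 + 7;
        System.out.println(c); // prints 27
    }
}
```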

Considering the number of instructions needed to perform this simple computation, you can probably get a good idea of what machine and assembly language programming were like for a typical programmer. The instructions were far more cryptic than those you are accustomed to in modern languages, and meticulous specification of detail was a must (computers are not at all forgiving when presented with imprecise directions).

Procedural Languages

Assembly languages were easier to work with than machine languages, but programmers soon saw the need to move beyond CPU instructions and work at a higher level of abstraction. This abstraction was achieved through procedural languages, which provided the programmer with functions and data types that could be manipulated without so much concern for the underlying machine instructions. These functions, or procedures, acted like black boxes that each implemented some useful task. For instance, you might create a procedure to write something to the screen, such as writeln in Pascal or printf in C. The initial purists of this type of programming believed that you could always write these functions without modifying any data that existed external to them. As an example, you clearly would not expect a call to printf or writeln to modify the string you pass in after printing it to the screen. In essence, the perceived ideal was not only to build a black box that hid implementation details, but also one that had no side effects related to the parameters passed into it.
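The procedural ideal described above can be sketched in Java. The method and variable names here are illustrative; the point is a procedure that reads its input, returns a result, and leaves the caller's data untouched:

```java
public class PureProcedures {
    // A "black box" in the procedural ideal: it produces a result
    // without modifying any data that exists outside of it.
    static String banner(String message) {
        return "*** " + message + " ***";
    }

    public static void main(String[] args) {
        String greeting = "Hello";
        System.out.println(banner(greeting));
        // The argument is unchanged, just as a call to printf or
        // writeln leaves the string it prints intact.
        System.out.println(greeting); // still "Hello"
    }
}
```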

For procedures that perform simple tasks, such as outputting information to the screen, this principle of not modifying external data is easy to satisfy. However, as you can imagine, applying this constraint when more complex operations are involved is difficult at best. Designing a system so that functions only introduce data changes through their return values is a significant restriction, so this goal was, for the most part, abandoned. This brought about increased flexibility, but with tradeoffs. As functions began changing data outside their scope (for example, by declaring C functions that accept pointers as arguments), problems with coupling began to surface. Because the functions were now changing data outside of their scope, testing became increasingly difficult. Coupling between a function and the code that called it meant that each function had to be tested not only individually, but also within the context of its usage to make sure that variable changes it introduced through its parameters were not corrupting other parts of a program. Individual black boxes weren't so black anymore because changes to their implementations required other functions that used them to be retested as well to make sure data updates were still handled correctly. This complexity grew dramatically with increasing program size and added to the need for the automated software testing industry of today.
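The coupling problem described above is easy to reproduce in Java, where object and array arguments are references to the caller's data. This sketch (the names are hypothetical) shows a procedure that quietly alters the array it receives, so every caller must be retested whenever its implementation changes:

```java
import java.util.Arrays;

public class HiddenCoupling {
    // This procedure modifies the array it receives as a side
    // effect, coupling it to every piece of code that calls it.
    static int sumAndClear(int[] values) {
        int sum = 0;
        for (int i = 0; i < values.length; i++) {
            sum += values[i];
            values[i] = 0; // side effect: the caller's data changes
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        System.out.println(sumAndClear(data)); // prints 6
        // The caller's array has been altered behind its back:
        System.out.println(Arrays.toString(data)); // prints [0, 0, 0]
    }
}
```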

Structured Development

Most early programming efforts were judged solely on whether they worked for their originally intended use. Not until software applications began to increase in size and complexity was attention focused on how well code was actually written and how well it could be maintained. This led to the implementation of structured development practices. Structured development didn't necessarily change the languages that were being used, but rather provided a new process for designing and writing software. Under a structured development philosophy, programmers were expected to plan 100% of a program before writing a single line of code. When a program was planned for development, huge schematics and flow charts were produced showing the interaction of each function with every other and how each piece of data would eventually flow through the program. This heavy precode work proved to be effective in some cases, but limiting for most. The shortcomings here might have resulted in large part from an emphasis on good documentation and not necessarily great design.

In addition, when programmers were pushed to predesign all their code before actually writing any of it, some flexibility and support for creative solutions were lost. Programming became institutionalized. Good programs tend to result from experimentation in some areas built upon a foundation of a solid underlying design. Structured development limited this by requiring complete specification of implementation details up front.

Even with that said, you should not think that current development approaches overlook the importance of the design phase. The opposite is, in fact, true. The difference is that most current methods stress that the design and construction phases of a software project should be iterative. Unlike structured development, current methods specifically allow developers to refine a design as requirements solidify and new solutions (or problems) are uncovered.

Object-Oriented Programming

Object-oriented analysis, design, and programming have now come into prominence as a successor to structured development. This did require some language changes to support the associated constructs, but the more significant impact has been to change the way developers think about the problems they must solve with software and the associated systems they must design. The resulting programming technique recalls the black-box emphasis of procedural development, builds on the advancements made through structured development, and, most importantly, encourages creative programming design.

Using an OOP paradigm, the objects associated with a problem and its software solution are represented as true entities in a system, not just corresponding data structures. Objects aren't just data values, like integers and characters; they also contain the functions, or methods in Java terminology, that relate and manipulate that data. In OOP, rather than passing data around a system openly (such as to a globally accessible function), messages are passed to and from objects via method calls that instruct an object to perform a certain task using the data it is provided.
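A minimal Java sketch of this idea follows (the `Account` class and its methods are illustrative, not from the text). The data and the methods that manipulate it travel together, and callers send messages rather than touching the data directly:

```java
public class Account {
    // The data is private; only the object's own methods touch it.
    private int balance;

    public Account(int openingBalance) {
        this.balance = openingBalance;
    }

    // Callers send a "message" via a method call instead of
    // manipulating the balance field themselves.
    public void deposit(int amount) {
        balance += amount;
    }

    public int getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        Account account = new Account(100);
        account.deposit(50); // message: "add 50 to yourself"
        System.out.println(account.getBalance()); // prints 150
    }
}
```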

As stated earlier, object-oriented programming really isn't all that new; it was developed in the 1970s by the same group of researchers at Xerox PARC that brought the world GUI (graphical user interface) technology, Ethernet, and a host of other products that are commonplace today. Why did OOP take so long to gain wide acceptance? For one thing, OOP requires a paradigm shift in development, and the inertia found in development organizations and their existing systems is a challenge for any new technology to overcome. In addition, the hardware available when OOP was introduced was not up to the job. For programming languages, less abstraction tends to correspond to less memory and CPU usage. Hardware capabilities have since grown in leaps and bounds, but while limited capabilities were still a pressing concern, procedural languages remained an attractive option. Now, increases in available (and affordable) memory and CPU horsepower have made development cost and maintainability much more significant drivers in architecture and design choices than the hardware requirements for a system.

The question now is where to start. Perhaps the first, and most significant, concept each programmer who wants to do OOP design and development must understand is the object itself. An object is a robust bundle that contains both data and the methods that operate on that data. This bundling provides significant advantages, such as code isolation, over alternate approaches. Instead of worrying about innumerable potential uses, a programmer can define an object's methods with complete knowledge of the data upon which it will work. Besides this careful control over method use, the nature of OOP allows methods to be reused and selectively replaced as object hierarchies are built up to satisfy increasingly complex requirements.
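The idea of methods being reused and selectively replaced as hierarchies are built up can be sketched in Java with inheritance and overriding (the `Shape` and `Square` classes here are illustrative):

```java
public class Shapes {
    // A base class defines behavior once...
    static class Shape {
        String describe() {
            return "a shape with area " + area();
        }
        double area() {
            return 0.0;
        }
    }

    // ...and a subclass reuses describe() unchanged while
    // selectively replacing area() to suit its own data.
    static class Square extends Shape {
        final double side;
        Square(double side) { this.side = side; }
        @Override
        double area() { return side * side; }
    }

    public static void main(String[] args) {
        Shape s = new Square(3.0);
        System.out.println(s.describe()); // prints "a shape with area 9.0"
    }
}
```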



Special Edition Using Java 2 Standard Edition
ISBN: 0789724685
Year: 1999
Pages: 353
