Chapter 2: Personal Communication with a Digital Computer

 

Overview

Even in the earliest days of the computer revolution, our computer forefathers were quick to recognize that the human-to-computer interface was going to be the really tough nut to crack. Each group had its own agenda for how this interface should be constructed. They knew that the lack of standards and the strong desires of dominant groups in the software world could result in the proliferation of bad ideas. This had happened many times in the hardware world, with predictably bad (and expensive) repercussions.

For example, when the electric lightbulb emerged in the 1900s as the most desirable means for illuminating a room, there were over 160 configurations proposed for how the metallic end of the bulb should look (and be placed into a receiver that provided the electrical power). It took General Electric and Westinghouse 20 years to outflank all their competitors and enforce a standard for lightbulbs that screwed into a porcelain and metallic receptacle to produce light.

In the computer world IBM took the lead and called most of the hardware shots early in the game. By 1980 Microsoft began to call all the shots in the software game. It didn't take an IBM or a Microsoft to see that the principal means of communicating with a computer should be the keyboard.

There were some disagreements on the layout of the keyboard. After all, the original typewriter was created and patented to provide a means for blind people to communicate with the outside world one keystroke at a time. Once these blind people learned the layout of the keyboard they could write letters to their heart's content, so long as the recipient of the letters was a sighted person. The notion of a more modern keyboard layout lost most of its momentum when the United States Congress refused to intervene in the free-economy process and dictate a newer, standard layout for the keys on a keyboard.

In the 1970s only one in four high school students learned to type on a typewriter; they were the people destined for the business world. Today all high school students who are sighted and have two hands learn how to type or keyboard. It is a necessary skill for employment in the modern world.

Loading a program into a small computer and running it in the 1970s was a real adventure. For example, at the U.S. Air Force Academy Aeronautics Department a small Honeywell computer was purchased to handle data taken from one low-speed wind tunnel and one supersonic tunnel that the cadets used in the aeronautical laboratory. The computer would reduce the data from the pressure transducers in the wind tunnels and produce data that showed how a particular aerodynamic shape performed inside the tunnel. To write the program that reduced the data, a keyboard was used to produce a stream of characters that were written to a paper-tape punch, which produced a roll of paper tape full of small holes. The compiler was loaded onto the Honeywell computer from a data tape, and the paper tape was fed into the computer to convert source code to machine code. Once the machine code was created (and placed onto a second paper-tape roll), the compiler was removed from the computer.

The second paper tape was read into the Honeywell machine to load it with machine code. Any area in memory that was not used by the machine code version of the data reduction program was available for data storage. All of this loading and unloading was necessary because memory in the Honeywell was too small to contain a compiler, machine code, and space for data at the same time. Once all this was done the machine was ready to crunch numbers. Once we got the program to work we never changed its source code, and we kept the second paper tape (the one with the machine code) in a secure place in the lab; the thought of losing it and having to redo the whole process was more than we could stand.

With the advent of a digital computer processor on a machine with sufficient memory (enough to store a compiler, source code, and space for the machine code that was produced by the compiler), most of the agony of data processing went away.

Meanwhile a new use for small computers was devised in the business and scientific communities: a class of software called word processors. Since there was no windowing on the screen and no mouse, the best that a digital computer could do was to present one line of text at a time (or several lines that wrapped to the next lower line when the upper line was full of text). The keyboard was fitted with a series of function keys across the top to aid the word-processing tasks. The smart operator learned a myriad of combinations of function keys plus the Alt, Shift, or Ctrl keys. A learned operator could move about the text with lightning speed, using function-key commands to move to the next paragraph or the next page, change text fonts, correct misspelled words, insert new text, or delete text. It was primitive, but it worked.

The secret to efficient word processing at that time was to not make a mistake when you typed in the text in the first place. It was the word processor's needs that finally brought the issue of alternative inputs to a digital computer to the fore. The standard computer keyboard with all the function keys was simply not enough. But what to do?

It was the Macintosh computer that first demonstrated the capability to produce a screen with a white background and black letters in the foreground. Apple did not use the term windows, which Microsoft had already trademarked. But both competitors went at it hot and heavy: the Macintosh had great graphics (and continues to excel in that arena) and the PC had great compiler support in its FORTRAN language. Then Apple brought the mouse to the Macintosh, and for the first time in mainstream computing the user could traverse freely across the screen with a mouse cursor and accomplish all kinds of wonderful computer tasks. Microsoft followed suit quickly. The concept of clicking on a particular spot on a window to cause some task to execute was almost magical!

Once the combination of windows with a mouse cursor was perfected, word processors became dramatically more efficient, and software manufacturers predicted that there were more efficiencies to follow.

It is undetermined who invented the first control; it was probably a forerunner of the push button that allowed the user to call up another screen temporarily. It is also undetermined who invented the main menu that is now displayed across the top of every commercial software program in existence today. Word processor users wanted as large a screen as possible to type their words, and the placement of help tools around the edges of the screen decreased the size of the word-processing space. The main menu with its pull-down submenus eliminated this dilemma. But the main menu with pull-downs had to await the birth of the mouse to be effective.

History will record that the software gurus of that era were champing at the bit for increased capability for their software while the architects of operating systems were far behind in their pursuit of computer excellence. The architects of operating systems will in turn point to the hardware gurus who were not cooperating: the hardware systems were incapable of addressing sufficient computer memory to accomplish the great feats that the software applications teams envisioned. Recall that, as late as 1998, a PC or Macintosh owner who replaced the small, original-equipment hard drive in his machine with a much larger hard drive was forced to create multiple partitions on that hard drive (drive C:, drive D:, drive E:, etc.).

The PCs and Macs originally provided only 16 wires for addressing memory, and 16 wires can only address 2^16 addresses (65,536 addresses, commonly called 64 K). This is the reason why computer programming prior to the 1990s was limited to a 64 K region: the computer could not process a program whose source code occupied more than 64 K. Some Macs limited the source code region to 32 K. If this limit of 32 K or 64 K was reached, then the programmer had to break the program into smaller pieces to compile and run it.
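The address arithmetic above can be checked with a short calculation. The sketch below is illustrative only; the specific wire counts are examples, not tied to any particular machine:

```python
# Each address line (wire) carries one bit, so n lines can select
# among 2**n distinct memory addresses.
def addressable(n_lines: int) -> int:
    """Number of distinct addresses reachable with n address lines."""
    return 2 ** n_lines

# 16 address lines: the 64 K limit described in the text.
print(addressable(16))   # 65536 addresses (64 K)

# Every extra line doubles the reachable memory; 32 lines give 4 G addresses.
print(addressable(32))   # 4294967296 addresses (4 G)
```

Doubling per added wire is why address-bus width, not raw clock speed, set the hard ceiling on program size in that era.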

In 2000 both PCs and Macs confronted the issue of addressing large areas of memory by creating operating systems that could address hundreds of millions of addresses. No longer was it necessary, for example, to divide a large hard drive into multiple partitions like drives C: through ZZ:.

The creation of cheap random access memory (RAM) and read-only memory (ROM) made life even better for computer users because their computers could load large amounts of code at start-up and refrain from reading or writing to the hard drive most of the time. This made programs run faster because accessing the hard drive takes orders of magnitude longer than accessing RAM or ROM.
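The payoff from avoiding the disk can be illustrated with a toy weighted-average latency model. Both latency figures below are assumed round numbers chosen for illustration, not measurements of any real hardware:

```python
# Toy model: average access time when some fraction of memory
# operations must go to the hard drive instead of RAM.
RAM_NS = 100           # assumed RAM access latency, nanoseconds
DISK_NS = 10_000_000   # assumed disk access latency, nanoseconds

def avg_access_ns(disk_fraction: float) -> float:
    """Weighted-average access time for a given fraction of disk hits."""
    return disk_fraction * DISK_NS + (1 - disk_fraction) * RAM_NS

# Keeping code resident in RAM cuts the disk fraction sharply:
print(avg_access_ns(0.01))    # roughly 100,099 ns when 1% of accesses hit disk
print(avg_access_ns(0.0001))  # roughly 1,100 ns when only 0.01% hit disk
```

Because the disk term dominates the average, even a small reduction in how often a program touches the drive yields a large overall speedup, which is exactly the benefit of loading code into cheap RAM at start-up.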

Meanwhile, the clocking speeds of CPUs increased dramatically, from a nominal 10,000 cycles per second in the early 1960s to hundreds of millions of cycles per second by the late 1990s.

 


Unlocking Microsoft C# V 2.0 Programming Secrets (Wordware Applications Library)
ISBN: 1556220979
Year: 2005
Pages: 129