Chapter 1: A Quick Dash through the Second Half of the Twentieth Century

 

By the early 1960s several research laboratories and electronic firms around the world had demonstrated the ability to fabricate high-speed digital central processing units (CPUs) that could step through a sequential set of commands and do errorless arithmetic. A lot has changed since that time, but digital computers continue to step through sequential series of commands and produce arithmetic, linguistic, and visual results that are reported to the world as hard copy (printed pages), data tapes, screen presentations, files, speaker outputs, music, light shows, CDs, etc.

The early computers were surrounded by an elite group of scientists and engineers who learned to create machine code, the sequential set of commands that a CPU deciphers to produce outputs. Each word of machine code started out at 8 bits per word (a bit is a 0 or a 1 for visual reference, but on the hardware it is most commonly represented by a 0 DC voltage or a +4 DC voltage). So the first machine code word consisted of 8 bits, and each bit had two states. The 8-bit word was quickly replaced by the 16-bit word, and 32 bits followed in the late 1980s. Today the 32-bit machine code word is the most common, but the 64-bit word exists in some computers (especially number-crunching computers, like the Cray scientific machines). Some of the most sophisticated 64-bit CPUs crafted in the 1990s were placed in game platforms like the PlayStation 2. The larger the machine code word size, the easier it is for programmers to program the machine (because one 64-bit word can command so many variations within the CPU).
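To make the bits-and-bytes arithmetic concrete, here is a minimal C# sketch (C# being the language this book is about) that prints the width of several built-in word sizes. An n-bit word can hold 2 to the power n distinct patterns.

using System;

class WordSizes
{
    static void Main()
    {
        // A byte is 8 bits; wider machine words are built from several bytes.
        Console.WriteLine("byte:  {0} bits", sizeof(byte) * 8);   // 8-bit word
        Console.WriteLine("short: {0} bits", sizeof(short) * 8);  // 16-bit word
        Console.WriteLine("int:   {0} bits", sizeof(int) * 8);    // 32-bit word
        Console.WriteLine("long:  {0} bits", sizeof(long) * 8);   // 64-bit word

        // Each bit has two states, so an n-bit word has 2^n possible patterns.
        Console.WriteLine("A 16-bit word has {0} distinct states.", Math.Pow(2, 16));
    }
}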

Those early-generation scientists and engineers knew that a more sophisticated means had to be devised for creating machine code, because it took a group of them weeks or months to create machine code that the CPU could process in under one minute. With such primitive code generation techniques there wouldn't be enough people in the entire world to service these digital machines; everyone would have to become a programmer.

Computer hardware companies sprang up everywhere, with most of their employees focused on producing faster and more reliable CPUs and ancillary machinery (later known as peripherals) that could feed input data to the CPU in a predictable fashion. A much smaller group investigated better techniques for communicating with their CPUs (the human-to-machine interface), and they became the software gurus.

IBM (International Business Machines) quickly became a leader in this new industry, and thanks to some innovative scientists at the Massachusetts Institute of Technology (MIT) it became proficient at building storage devices for data. (Remember the pictures of a tiny ferrite ring, a magnetic core, suspended in a matrix of wires about the thickness of a human hair? That was magnetic core memory, an early data storage device. Each core stored one bit of information.) These storage devices soon became known as computer memory. Next, IBM constructed a mini-software device known as an assembler that allowed people besides the elite to communicate with a digital computer: each simple assembly statement was translated directly into machine code. This was a worthy initial effort at easing computer programming woes.

Soon every manufacturer of computer machinery advertised its own operating system and its own assembler. A person who wanted to specialize in software to command a specific brand of computer had to immerse himself in the details of the operating system and its assembly language.

The notion of a computer language that was not fixed at the machine code level (an intermediate language, one that received inputs from human beings and produced machine code for several types of digital computers) piqued the interest of scientists all over the world. In the United States it was left to IBM to announce the first general-purpose intermediate language system, a system named Formula Translation, which was shortened to FORTRAN. The huge code package that transformed programmers' entries into IBM machine code was known as a compiler. The compiler read each line of instruction that a programmer had created on an 80-column punched card. At this time the entry point to the computer was the card reader, a mechanical device that regularly ripped and tore at one's thick packet of punched cards and created bedlam at the computer center. Each punched card contained one FORTRAN statement. A typical aircraft structures program created at a university engineering department was 2,000 to 3,000 cards long, and the owner of the card deck always kept his fingers crossed that the reader would not fault during the reading, which took 2 to 3 minutes to complete. At the first faulty read the device stopped its processes and several red lights began blinking. It was time to start the read process all over again.

This collection of punched cards became known as the source code, another term that has survived from that era. The programmer was the source of the source code, and his only means of creating source code was the keypunch machine built by IBM.

In England and Europe a consortium of scientists, engineers, and professors who cared about intermediate languages gathered their thoughts around a language named Pascal (named after Blaise Pascal, the 17th-century French mathematician who built one of the first mechanical adding machines). When the European and North American groups combined their interests, standards bodies such as the IEEE (Institute of Electrical and Electronics Engineers) laid the groundwork for committees of interested persons to prepare standards that these intermediate languages should follow in the machine code creation process. It was the Europeans who devised the standard method of ending each computer statement with a semicolon (;). This convention is used today and recognized by most modern compilers. When a FORTRAN program was placed on magnetic tape, the separator between statements was the character count: every 80 characters marked the start of a new card image. This is where the term 80-80 listing came from. The computer would read a wad of punched cards and produce this listing, the contents of 80 punched cards (each 80 columns wide) printed on one computer page, tightly cramped together to save paper.

The international standards committees were quick to agree on what source code should look like. For example, it was agreed that the name of the variable that was to receive data should always be placed on the left side of an equal sign (or some alternative sign), and the mathematical formula that provided an answer for that variable should appear to the right of the equal sign.

Mathematical expressions were always enclosed in standard parentheses, and groups of source code statements (other than math) were enclosed in curly brackets or square brackets ({} or []). The standards came just in the nick of time: in the U.S. there were thousands of would-be programmers anxious to learn how to program with IBM, Burroughs, HP, and Honeywell compilers, assemblers, and computing machinery.
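Those conventions are exactly what a modern C# statement still looks like. A minimal sketch: the variable receiving data sits to the left of the equal sign, the formula to the right, each statement ends with a semicolon, and curly brackets group the statements.

using System;

class AreaExample
{
    static void Main()
    {
        // Curly brackets group the statements; each statement ends with a semicolon.
        double width = 4.0;
        double height = 2.5;

        // The variable receiving data sits to the left of the equal sign;
        // the mathematical formula, in parentheses, sits to the right.
        double area = (width * height);

        Console.WriteLine("Area = " + area);
    }
}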

The FORTRAN language had some notable competitors in the 1960s, like Algol, the committee-designed language that the Burroughs Corporation famously adopted for its machines. The interesting thing about Algol was its predisposition to require the grouping of statements in packets that could only be entered at the topmost statement (one entry point, multiple exit points). Later we called this technique structured programming. All modern languages encourage structured programming.

And then there was COBOL (Common Business-Oriented Language). This language used statements that were more like the English language than any other, and people loved it. It also had one other quality that the creators of FORTRAN wished they had included: the ability to bring character strings out of the mass of source code and deposit those strings at a convenient place outside of the code. What did this do for a corporation that used COBOL? Several variations of a basic program could be compiled and presented to the customer by making small changes to the list of character strings on an off-line computer. Even today you can incorporate a new bank anywhere in the world, call a corporation in Cleveland, Ohio, and get yourself a complete digital computer operating system for your bank by answering a long list of questions that define your bank precisely (like its name, etc.). The code used to produce this operating system is COBOL, which has been in place for 40 years.
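That trick of keeping the character strings outside the compiled code survives in every modern language. Below is a minimal C# analogue of the COBOL approach, assuming a hypothetical text file named bank.txt that holds the customer-specific strings; rebranding the product means editing the file, never the program.

using System;
using System.IO;

class BankBranding
{
    static void Main()
    {
        // All customer-specific wording lives outside the compiled code,
        // one string per line in a plain text file (hypothetical file name).
        string[] strings = File.ReadAllLines("bank.txt");

        string bankName = strings[0];   // e.g. "First National Bank"
        string greeting = strings[1];   // e.g. "Welcome, valued customer"

        // The program logic never changes; only the string file does.
        Console.WriteLine(greeting + " of " + bankName + ".");
    }
}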

The most widely used programming languages of the 1970s and 80s all had one common attribute: their keepers were always willing to steal any new idea for better programming techniques from their competitors.

During this time IBM had committed to developing a personal computer, or desktop computer, that would give the individual employee some limited computing capability at his own desk. IBM found itself without a suitable operating system for this tiny machine, and a man named Bill Gates stepped up to the challenge and delivered the first Microsoft Corporation PC operating system in less than a year. When the PC became popular it sounded the death knell of the huge mainframe machines that IBM had built for two generations. Some saw this revolution coming, but most did not. There were no servers to connect groups of office PCs in those days; the communication links between the PCs and a central server did not exist.

While the hardware gurus were inventing new products every year, the software gurus were hard at work also. It was generally accepted that digital computers would become faster and more reliable every year, so the software gurus were able to look into the future and create compiler architectures that would serve these greatly upgraded computing machines.

Some of the laboratories in the U.S. and Europe began experimenting with languages more powerful than FORTRAN. They wanted more versatility in programming statements, easier ways to convey source code from place to place (other than shipping boxes of punched cards across the country), and programming methods that produced fewer errors in the final product delivered to the customer. In the 1960s and 70s more time was spent in testing and proofing major programs than was spent in developing and coding the program.

In the early 1970s, a Bell Laboratories language called the C language began to appear in the computer software technical journals. There are a ton of funny stories about the birth of the C language; we will repeat only one of them here:

The Bell Laboratories on the East Coast had an in-house programming language that they called the B language. It claimed a lot of advantages over the stiffly formulated IBM FORTRAN language, and the scientists at Bell were constantly proposing new and wonderful ways to make the B language more useful. For example, it eliminated all punched cards and read each source code statement directly from a file listing; three cheers for the semicolon! Then one day Bell Labs got a new leader. When he discovered how many people were spending time on the B language, he put out a sharply worded notice to the entire staff: "There will be no more work accomplished on the B language; this is a business environment, not a university." Or words to that effect.

This announcement came out on a Friday, and according to a reporter at the New York Times, "On the following Monday morning the C language was born at Bell Labs."

There is no reason to enumerate the programming advances that the C language offered to the software world in the 1980s. But it is notable that every one of these advances was incorporated into various forms of the FORTRAN language by the end of the 1990s (by every major computer hardware manufacturer, most of whom had their own brand of FORTRAN available for their machines). The FORTRAN language also borrowed ideas from COBOL and Pascal.

Then along came Ada, the language that borrowed the best from FORTRAN, COBOL, Algol, Pascal, and C (not to mention a whole raft of lesser-known computer languages that made their debut in Europe or the Pacific Rim countries). But Ada was doomed. Most federal and state government contracts that required computer programs as a deliverable mandated that the development be done in Ada, so the American software industry attached large surcharges to those contracts: the language was new, untested, and required considerable retraining of existing programming staff. Ada is a great language; it enforces strong typing of all its variables to eliminate a host of nagging errors that always persisted in the simpler FORTRAN deliverables, and the Ada compiler that processed the individual statements to form an executable program was light-years ahead of any other compiler because of its error-checking capabilities during code development. But the C language quickly stole Ada's thunder.

By this time structured programming had become a software industry hallmark; no more would undisciplined programmers write spaghetti code that sent its tentacles into all parts of the operating system and the data storage devices of a digital computer. This was the same structure that Algol hinted at decades earlier and Ada demanded.

The C language achieved wide acceptance around the world for many reasons; one possible reason was the fact that it was invented at an American laboratory and not by the federal government and Department of Defense in Washington, D.C. Soon the C language evolved into the C++ ("C plus plus") language, developed by Bjarne Stroustrup of Bell Labs. C++ is an object-oriented programming (OOP) language. OOP compilers were the first to create packets of executable software and all the data to support that software in one place; the packets were movable, or floatable, in memory. At first it was not clear why C++ was necessary, but most C programmers immediately jumped into the new language so they would be able to program in the latest and greatest computer language. One advantage was that C++ added greatly to the programmer's ability to manipulate strings of characters, something the original C language never supported as a built-in type.

The object-oriented programming wave of influence began with a language named Smalltalk, which was invented at the Xerox Palo Alto Research Center. It would have made its mark much earlier in the 20th century if there had been PC and Macintosh operating systems that could float hundreds of millions of storage locations inside one machine, but this was not to be until Windows 2000 and the Macintosh 21st-century compilers appeared at the local computer stores. With object-oriented programming the programmer had to learn one important word: new. Every time a code group was exercised within a program, that group was generated anew from a declared class of code packets, and the group was used only one time before it was set loose to float around in memory. Could such an action fill up memory on the worst of occasions? You bet! So what should the operating system provider do to combat this evil? Provide a garbage collector to grab up those loose pieces of code and data, cancel them out, and return the space used by the floaters to useful memory available to the program being executed. It sounds a little goofy, but the operating system gurus made it work.
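Here is a minimal C# sketch of that life cycle, using an invented class named Packet: each pass through the loop generates a fresh object with new, the previous object becomes an unreachable floater, and the garbage collector reclaims it. (The forced GC.Collect() call is only to make the point; production code normally leaves the collector alone.)

using System;

class Packet
{
    // A declared class of code and data; instances float in memory.
    public int[] Data = new int[10000];
}

class GarbageDemo
{
    static void Main()
    {
        for (int i = 0; i < 1000; i++)
        {
            // 'new' generates a fresh object from the class each time...
            Packet p = new Packet();
            // ...and when 'p' is reassigned on the next pass, the old
            // object is set loose in memory with no reference to it.
        }

        // The garbage collector sweeps up the floaters and returns
        // their space to the program. (Forced here purely for show.)
        GC.Collect();
        Console.WriteLine("Unreachable packets reclaimed.");
    }
}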

How do the modern operating systems accomplish this sort of garbage collection task? I have no idea, but you will note (several chapters later) that every modern Microsoft compiler pairs its executables with a resident garbage collector, and you had better not try to circumvent it if you know what's good for you!

When the 21st century began, Microsoft hired some of the finest computer minds in Europe and the U.S. and launched into the C# (pronounced "C sharp") era. The group was led by Anders Hejlsberg. A worldwide committee was appointed to propose international standards for this language that Microsoft hoped would become the new world standard.

Microsoft C# is not totally unique. For one thing, another American corporation, Sun Microsystems, had released a computer language named Java that created intermediate code rather than machine code, and it achieved phenomenal acceptance by the business world almost immediately. Intermediate code is not machine code, so those who distributed Java programs had to include the Java cookie (a run-time package, better known as the Java Virtual Machine) to interpret the intermediate language product and produce executable (machine) code. Programmers who write in Java create source code that is compiled to intermediate code one time for all machines (PC, Macintosh, Sun, etc.). Then there is a separate Java cookie for each type of computer.

The Microsoft C# compiler also needs a cookie, a very large cookie. Today, if you own a PC or a Mac or any other desktop computer and you have ever hooked up to the Internet, then you have a considerable number of such run-time cookies on your machine that interpret incoming Internet code to produce images on your screen.

The success of Java hastened the introduction of the Microsoft C# compiler in 2000.
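You can see this pipeline for yourself with nothing more than the command-line compiler (csc) that ships with the .NET Framework. The program below compiles to intermediate language (IL), not machine code; the CLR cookie turns the IL into native instructions when the program runs.

using System;

class Hello
{
    static void Main()
    {
        // "csc Hello.cs" produces Hello.exe containing IL, not machine code.
        // At run time the CLR just-in-time compiles the IL for this CPU.
        Console.WriteLine("Compiled once to IL, executed anywhere a CLR lives.");
    }
}

Compiling with csc Hello.cs produces Hello.exe, and the ildasm tool shipped with the .NET SDK will show you the IL the compiler placed inside it.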

What does Microsoft C# promise to its users? These are some of the reported features:

  • The intermediate language code that is generated by the Microsoft C# compiler will match up to a cookie on most desktop machines to produce executable code on that machine. The cookie is called the Common Language Runtime (CLR) package and is about 32 MB in size. The CLR package is part of the Visual Studio .NET Framework, which is designed to accept multiple languages in its integrated development environment (IDE), including Visual Basic, C++, J#, and C#.

  • C# programs run slower than C or C++ programs because they include environmental checks that C and C++ do not make (for example, no writing to portions of the computer reserved for the operating system, etc.). Slower but safer.

  • C# accommodates object-oriented programming by including a built-in garbage collector (to keep memory clear of unused code and data).

  • C# never allows a variable to be used in the program until it has been initialized in some way (demonstrated in the sketch following this list).

  • The Microsoft C# compiler is the most exhaustive compiler ever written by Microsoft. It checks the type of every variable at every use, similar to the advances made by the Ada compiler a decade earlier. When a program is compiled and run in the IDE, it runs in Debug mode by default. This means that anytime program execution stops, the line of source code where the fault occurred is highlighted to show the programmer what went wrong. The error messages that the Microsoft C# compiler can produce at compile time seem to be uncountable.

  • During project development the IDE's background compiler checks every keystroke made by the programmer, and error messages appear immediately. This is a feature found a decade earlier in the Ada compilers.

  • Future releases of the Microsoft C# compiler will include methods for immediate transfer of code and data to HTML format to enable program operability on the Internet.

  • The capability to handle multimedia is promised in future releases of Microsoft C#.

  • Because Microsoft C# is so tightly structured, it opens the way for third-party software corporations to produce ancillary software that performs checks on compiled code (to an even deeper level than the original Microsoft C# compiler) and prebuilt code that serves a particular industry (such as financial, communication, entertainment, computer-aided design, and transportation). In Chapter 3 you will be exposed to a toolbox of about 60 prebuilt code packets (each represented by an icon). Third-party software groups may expand this selection to 500 to 600 packets.
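Two of the claims in the list above, mandatory initialization and strict type checking, are easy to demonstrate. In the minimal sketch below, either commented-out line would stop the compile cold (the quoted error numbers are the compiler's own):

using System;

class SafetyChecks
{
    static void Main()
    {
        int balance;                    // declared but not yet initialized

        // Console.WriteLine(balance); // compile error CS0165:
        //                             // use of unassigned local variable

        balance = 100;                  // initialize first...
        Console.WriteLine(balance);     // ...then the variable may be used

        // string s = balance;         // compile error CS0029: the compiler
        //                             // refuses to treat an int as a string
    }
}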

If you are a new programmer just entering the field, you will deal primarily with multimedia in the future. The day will come shortly when most corporations own little or no software. They will lease all their software from a central distributor (like Microsoft) who guarantees that the software downloaded by corporations will be the perfect software for its tasks. Once corrections to the software are made at the central repository, all users will have access to that corrected software that afternoon.

In the next chapter we will discuss how a user communicates with a digital computer.

 

