Creating Form2 and Form3

 

Overview

When the battle lines were drawn in the 1980s between the groups who wanted computer programming to remain more of an art than a science and those who wanted hard-and-fast mandatory programming standards, both sides proclaimed the same goal: the software products delivered to customers should be as useful and inexpensive as possible. They simply intended to reach that goal in totally different ways.

The mandatory standards group had a lot more ammunition to support its claim that the artistic group was doing far more harm than good to the fledgling software industry. For example, there was the case in Springfield, Ohio, where one brilliant IBM assembler language specialist managed to create the software for most of the banks in and around his hometown. His techniques for laying out the assembler code in the source listing were so mystifying that no one else could figure out what he had done; his deftness at producing spaghetti code was unsurpassed. One summer he announced that he was fed up with the paltry pay he was receiving from the banks and left for a long vacation.

Within a month he had brought every bank that used his assembler software to its knees. He had hard-wired into the code every interest rate the banks used, every fee they charged, and the format of every form they generated on their computers. The dispute between the banks and the programmer could not be settled in the courts because there were no published standards on how such software should be prepared and documented so that other programmers could maintain the code when the originator retired, left, or died.

The only reason assembler code thrived for a time in the 1970s and 1980s was that the memory region allocated in the digital computers of the day to house both a program and all its data was only 64 K (the number of memory locations that can be addressed with 16 wires). Assembler code was significantly smaller than the code produced by a general-purpose language like FORTRAN, so it was greatly desired by corporations that needed to squeeze large amounts of computation into that region.

But assembler code was extremely expensive to maintain because it was often poorly documented. So management made the move toward programming exclusively in COBOL or FORTRAN, and wrote minimum documentation requirements into software contracts. For example, if a program became too large to fit into a 64 K region, it was cut up into pieces and placed in multiple regions. When the computation in one region was complete, the intermediate output data was written to a file, and that file became the input data for the next region's work. This was not easy to do, but resourceful programmers made it work. For example, most payroll programs consisted of two parts that were loaded in sequence: part one computed each employee's pay and wrote the final numbers to a file, and part two picked up the file and printed checks. In January of each year a third part was loaded: the part that issued W-2 and 1099 IRS forms to the employees.

To overcome the spaghetti code tendencies of some programmers, computer science schools began to teach structured programming, a concept in which there was only one way into a block of code (at the top) but possibly more than one way out. Once a block of code was exited, the only way back in was at the top.

The concept of a local variable versus a global variable came to the fore at this time. A local variable was declared inside a single structure and supported only that structure. A global variable was declared at the top of a FORTRAN or COBOL program and, because of its location and unique declaration in the source code, was available to every structure within the region. But what if a local variable happened to be given the same name as a global variable at the top of the source listing? Could the compiler sort out whether the variable was local in one part of the region and global in another?

No, not really. And it was this ambiguity that terrorized large software programs for several generations. At Wright-Patterson Air Force Base there was great concern in one of the flight simulation laboratories because a simulation of a Russian surface-to-air missile always broke off the engagement when the missile came within 100 meters of the target and gyrated wildly away from the point where the kill was supposed to occur. What was the problem? Finally, one programmer ended the mystery when she changed ever so slightly the name of one FORTRAN variable in the final assault package that commanded the last 100 meters of missile flight. Suddenly the simulation worked exactly as it should.
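
The same kind of name collision is easy to reproduce in a modern language. The short C# sketch below (the class and member names are hypothetical, not taken from the simulation) shows how a local variable that shares its name with a class-level field silently hides the field:

    using System;

    public class Simulation
    {
        // Class-level field: the modern analog of the FORTRAN global variable,
        // visible to every method in the class.
        private double range = 1000.0;

        public void UpdateRange()
        {
            // A local variable with the same name hides (shadows) the field,
            // so nothing below ever touches this.range.
            double range = 100.0;

            Console.WriteLine(range);       // 100  -- the local wins
            Console.WriteLine(this.range);  // 1000 -- the field is untouched
        }
    }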

With rampant error generation in large software programs, management made its move to compartmentalize each block of code and tag every variable inside with either a public or a private label, and the compiler writers promised to keep everything straight inside with a new generation of compilers. Most computer programmers were furious! There wasn't even a hint of art left in the programming business any more; it was all predetermined, rote source code sequences that everyone had to use. As one programmer explained, "It's like every structure inside the code was outfitted with an impenetrable Berlin Wall, and there was no way to defeat it."

The final blow came with the advent of object-oriented programming. The object-oriented architects wanted every sequence of code, along with all the data flowing into or out of that sequence, to be located in one place. Around that place the architects built a wall, and if someone wanted access to a variable or procedure inside that wall, she had to ask permission to do so.
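
In C# terms, that wall typically looks like a private field fronted by a public property. The sketch below (the class and member names are illustrative, not taken from any particular listing) shows how outside code must ask permission through the property rather than touching the data directly:

    using System;

    public class InterestCalculator
    {
        // Private: invisible outside the wall.
        private double rate;

        // Public property: the single gate through the wall, where the
        // class can inspect and validate every request before granting it.
        public double Rate
        {
            get { return rate; }
            set
            {
                if (value < 0.0)
                    throw new ArgumentOutOfRangeException("value");
                rate = value;
            }
        }
    }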

Was this supreme acrimony or what? Well, as it has turned out, all the baby steps toward standardization of the computer software industry have indeed saved us from ourselves, and today you have reasonable assurance of reaching your stated goals for large software programs. They might even work! With all this structure built into modern compilers, there is also reasonable assurance that, once the compiler accepts the form of the code you are writing, the code will likely function properly when it is compiled and run. This was never true in the old FORTRAN and COBOL days.

But the walls built around each structure make it more difficult for an individual programmer to access variables or procedures inside another structure (one written by someone else). Some compiler architects, Borland for example, made it relatively easy to reach from the outside a variable located inside a form despite the walls between the structures. For example, if a variable named MeToo in Form1 needed the contents of a variable named BigRed that was calculated in Form2, this simple statement did the trick:

    MeToo = Form2->BigRed;

But allowing this transfer to occur was a violation of the strict object-oriented discipline agreed to around the world. The architects of Microsoft Visual Studio C# would not allow this method to be used to pass information between Form2 and Form1. Instead, they configured the Visual Studio C# compiler so that the programmer must plan ahead and pass a handle to an adjacent form at the time that adjacent form is created, a handle identifying the form that will receive data later. Passing a handle between forms in this way meets all the requirements imposed by structured programming, object-oriented design, and so forth.
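
A minimal sketch of that handle-passing pattern appears below. The names Form1, Form2, MeToo, and BigRed come from the discussion above; the constructor parameter and method names are illustrative assumptions, not the book's exact listing. Form1 hands a reference to itself to Form2 when Form2 is created, and Form2 later uses that handle to deliver its result:

    using System;
    using System.Windows.Forms;

    public class Form1 : Form
    {
        // Public so Form2 can reach it through the handle it was given.
        public double MeToo;

        public Form1()
        {
            // Pass a handle to this instance of Form1 when Form2 is created.
            Form2 secondForm = new Form2(this);
            secondForm.Show();
        }
    }

    public class Form2 : Form
    {
        private Form1 receiver;        // the handle received at creation time
        private double BigRed = 2.75;  // a value calculated in Form2

        public Form2(Form1 callingForm)
        {
            receiver = callingForm;    // store the handle for later use
        }

        private void SendResult()
        {
            // The equivalent of the Borland one-liner, performed from inside
            // Form2 through the handle that was passed at creation time.
            receiver.MeToo = BigRed;
        }
    }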

 

