What a Long, Strange Trip It's Been

In the dim, dark past of the early 1980s, shrouded in the mists of time, systems and applications developers used the C language to program the various single-tasking, small-computer operating systems that populated the landscape. MS-DOS emerged as the premier operating system. It was fast and lightweight because it was written in assembler to take advantage of the limited memory and speed of these small systems. The application programming interface (API) for DOS was really a small set of software interrupts. Some of you might fondly remember building interrupt handlers for communications. If you wanted to run a communications program, you would interrogate the communications port to see whether a byte had arrived down the wire. Because DOS programs were single-threaded, DOS kept command of the computer at all times. A communications program would sit in a loop and constantly peek into the communications buffer to check for the arrival of a new byte. This sort of program would be considered less than optimal given today's software development challenges.

The Windows operating system API was designed in the early 1980s, and programmers still used C to program it. This API was cryptic, to be generous. It contained hundreds of functions, each with a long and sometimes counterintuitive name. Moving to Windows was a sea change—a monumentally different way of approaching computer programming. Programmers now had to deal with features like a graphical user interface and multitasking. This move was initially resisted by many developers, who said that Windows programs were slow and difficult to write. Windows would never catch on.

note

For you purists, multiple programs were not really run simultaneously. However, with early cooperative multitasking and, later, preemptive multitasking, the user was given the impression that many programs were running at the same time. The Windows operating system simply rotated CPU time slices so fast—faster than humans could discern—that several programs appeared to run simultaneously. But because the standard desktop computer has only one CPU, it can, of course, run only one program at any one time. Essentially Windows provided a good implementation of smoke and mirrors.

As more and more programs had to cohabit on the same machine, a mechanism was needed that let the system route input to the correct program. For example, I might be writing a document in Word and want to add a number in Excel. How in the world would the operating system handle this? The ingenious answer was events. In the event-driven model, the operating system sends messages to specific programs when the user does something. The individual program intercepts the message and fires an event, and code within that program is responsible for responding to the event.

If I click on Excel after working in Word to enter a number, the operating system gives Excel the focus. Excel now becomes the active program. When I enter a number from the keyboard, Windows sends the number (bundled up in a message) to Excel. An event in Excel fires, and any code within the event handler is responsible for doing something with the number.

Windows fundamentally changed the way we program. Instead of a program sitting in a loop to check whether something had happened, Windows would notify us by sending our program an event such as a keystroke or a click of the mouse. In the meantime, our program could go about its business and perform other tasks instead of mindlessly waiting for a specific thing to occur. A good analogy is if I want to call you on the phone and the line is busy. I could stand by the phone and keep dialing every five minutes to see whether you've hung up, but that's not a good use of my time. Alternatively, I could subscribe to a service that will call me the moment you hang up. Now I can do something useful, like cut the grass while waiting for you to finish chatting, and be notified when I can finally get through.
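The event-driven model described above can be sketched in a few lines of Visual Basic .NET. This is a minimal illustration, not code from this book's later chapters; the form, button, and handler names are my own:

```vb
Public Class MainForm
    Inherits System.Windows.Forms.Form

    ' WithEvents lets this field's events be wired up with Handles.
    Private WithEvents btnSend As New System.Windows.Forms.Button()

    Public Sub New()
        btnSend.Text = "Send"
        Me.Controls.Add(btnSend)
    End Sub

    ' Windows delivers the mouse-click message to this program; the
    ' runtime raises the Click event, and this handler responds to it.
    ' No polling loop is needed -- the program sits idle until notified.
    Private Sub btnSend_Click(ByVal sender As Object, _
            ByVal e As System.EventArgs) Handles btnSend.Click
        System.Windows.Forms.MessageBox.Show("Button clicked")
    End Sub
End Class
```

Contrast this with the DOS communications program described earlier, which had to spin in a loop peeking at a buffer: here the operating system does the watching, and our code runs only when there is something to respond to.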

Even with all of these changes, user productivity soared, and Windows was soon running on the vast majority of desktop computers. However, in the mid-1980s, most computers were still running on Intel's 16-bit 286 chips. Dr. Bjarne Stroustrup's influential book, The C++ Programming Language, was published in 1985 and officially kicked off the object-oriented programming (OOP) movement. At about the same time, the 32-bit 386 chip was introduced and started gaining popularity. C++ gained a foothold among developers in corporate America, and class libraries were built to help speed development with interacting objects.

Although C++ is nearly a superset of C, the language is a completely different animal because it is object oriented. Making the move to C++ from any non–object-oriented, procedural language required a large jump in thinking about the way programs are written. Programmers were now faced with thinking about programs in terms of self-contained objects and events instead of linearly executed code. They were forced to learn strange and terrifying new concepts with intimidating names, such as polymorphism, inheritance, encapsulation, and overloading. C++ was abstract, and most programmers found the learning curve steep; an average programmer required about six months just to become familiar with the new language. Because Visual Basic was far less complex, most programmers who could not make the conceptual leap to C++, or who simply wanted an easier and more productive language for Windows 3.0 development, moved successfully to Visual Basic when version 1 was introduced in 1991.

Visual Basic has a long and productive history. The Visual Basic language has not stood still as technological challenges have grown and changed. As the needs of corporate end users evolved, Visual Basic has kept pace nicely, becoming faster and providing better functionality with each new version. Visual Basic 3.0 added DAO (Data Access Objects) capability to seamlessly access databases and other data sources. Visual Basic 4.0 offered two separate and distinct compilers, one for 16-bit and the other for 32-bit development. Version 4.0 also let developers build programs based on the Component Object Model (COM) by providing the capability to create dynamic-link libraries (DLLs). Class-based programming also made its debut in this version. In Visual Basic 5.0, the language added the capability to build and distribute ActiveX controls. And Visual Basic 6.0, introduced in late 1998, was rewritten entirely and provided new Web controls and interface inheritance for classes.

Why am I telling you all of this? Because the programming world is changing once again, and this time the Internet has taken center stage. Andy Grove, CEO of Intel Corporation and, ironically, Time magazine's "Man of the Year" for 1997, would probably call this juncture in software development an "inflection point." In his excellent book, Only the Paranoid Survive, he describes an inflection point as a crossroads where the right path to take might not be clear at the time. If you make the correct choice at the inflection point, you not only survive, you thrive. Make the wrong choice, and you join the ranks of those who said DOS would live forever.

The movement from traditional programming languages such as Visual Basic to Visual Basic .NET is considered by many, including Bill Gates, to be more dramatic than the shift from DOS to Windows. What was that Chinese curse? "May you live in interesting times"? More appropriate to programmers might be the old observation, usually attributed to Heraclitus: "The only constant is change."

Visual Basic .NET is not only the next version of the language; it is the next revolution of the language, and the language has fundamentally changed for the better.

From COM to .NET

As you probably know, Visual Basic and Visual C++ have separate run times, each with its own distinct behaviors. C++ revolutionized software development by bringing object-oriented programming into wide use. However, C++ objects could be used only by C++ code, which didn't benefit most programmers; the majority didn't use C++ because of the steep learning curve required to master it. To solve the problem of interlanguage communication, Microsoft developed COM.

COM is really a contract, a set of laws that determine how to build a COM component. If your component follows the COM rules, it can work with other COM components no matter which language they are written in. When you added a new tool to the Visual Basic 6.0 toolbox, that .ocx file was probably written in C++, but you didn't need to know or care. You just wanted the functionality the control provided.

COM deals primarily with interfaces. Not graphical user interfaces, but application programming interfaces. In fact, a COM object is primarily composed of interfaces. If a component adhered to the COM blueprint for how to lay itself out in memory and provided a set of standard interfaces, other COM-compliant software could reuse the component.
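The interface idea at the heart of COM carries straight into Visual Basic .NET. The sketch below is illustrative only (the interface and class names are my own, not part of COM or the .NET Framework), but it shows the same principle: callers program against an interface, so any class that implements it can be swapped in, regardless of who wrote it:

```vb
' An interface is a contract: it names members but contains no code.
Public Interface ISpellChecker
    Function CheckWord(ByVal word As String) As Boolean
End Interface

' Any class that implements the interface can stand in for any other.
Public Class EnglishSpellChecker
    Implements ISpellChecker

    Public Function CheckWord(ByVal word As String) As Boolean _
            Implements ISpellChecker.CheckWord
        ' A real implementation would consult a dictionary;
        ' this stub simply accepts any nonempty word.
        Return word.Length > 0
    End Function
End Class
```

A caller holding an ISpellChecker reference neither knows nor cares which concrete class, or even which language, supplied the implementation, which is exactly the kind of reuse COM's standard interfaces made possible at the binary level.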

COM acts as a binary object–interoperability standard. A COM software component is a piece of reusable software in binary form—a self-contained block of functionality such as a grid control or a text box. COM components register themselves in the Windows registry, and any program that knows how to find them can use them. While this approach has, for the most part, worked well, inadvertent version or interface changes by an unwary programmer have caused occasional problems. And when you wanted to spawn a process on a remote machine across the Internet, things rapidly became tricky.

Distributed COM, or DCOM, attempted to solve this problem. Unfortunately, DCOM is difficult to configure and set up, and it works only on Windows machines. To accept marshaled DCOM data, a port in the firewall has to be opened and is therefore exposed to hackers. This requirement made more than one corporate IT manager raise an eyebrow. CORBA was developed as a competing but incompatible approach to interoperability in the UNIX world.

As the corporate world becomes more comfortable with the Internet and relies on it for mission-critical system needs, a better solution than DCOM is needed. Businesses all over the globe are setting up shop on the Internet at a breakneck pace. Electronic Data Interchange (EDI), the scheme born in 1985 that's used for ordering, invoicing, payments, and updating back-office systems, is experiencing a renaissance of sorts with the Internet. If business-to-business e-commerce is going to reach its full potential, a platform-independent and language-independent standard is needed. The .NET Framework provides the pieces to make this level of interoperability a reality.

The .NET World

The raison d'être of .NET is to provide users with access to their information, files, or programs anywhere, anytime, and on any platform or device. Users should not have to know where the information is located or the details of how to retrieve it. For example, over the next several years Microsoft and other companies will phase out delivering software on CDs. Instead, they will deliver the functionality that users get today from software installed on their desktops via Web Services delivered over the Internet. Consumers of those services will no longer buy software, install it on a machine, and then maintain it. Instead, they will license the functionality as an on-demand service. The software bits will be downloaded, installed, and maintained by a Web Service. Updates and patches will happen automatically via the Internet. If you need to use a particular piece of software for a project, such as an expensive CAD/CAM program, but don't want to purchase it, you'll be able to use it via a Web Service and be charged by use.

As you can see, this is a large vision that considers the technology and business horizon in the not-too-distant future. Learning Visual Basic .NET not only puts you at the forefront of this exciting and revolutionary vision, but it also permits you to be more productive with today's applications. Bill Gates summed things up nicely when he recently told a group of developers, "Today, we have a world of applications and Web sites. In the .NET world, everything that was an application becomes a Web Service." With Web Services, the whole becomes greater than the sum of its parts.

A .NET Example

I always find understanding a new concept easier with a concrete example. Let's say you're charged with developing a program that moves funds across different currencies for a financial institution. Designing this program requires that currency conversion rates be calculated in real time, which allows a customer to send a dollar-denominated wire transfer and have it converted to deutsche marks (DM) for payment to your branch in Berlin. The customer of your service would have to know, in real time, how many dollars are required to pay the bill of DM10,000 the moment the wire was sent. You could try to figure out how to create a real-time currency conversion program, but where would you start? How do you get the rates? What is the risk of getting it wrong? How timely is the information? And on and on.

Programmers might spend months or more just writing the specifications and the code for a program like this. Then there is testing and debugging, which will take another few months. My company, however, offers a Web Service that provides real-time currency conversion rates. My Web site provides a service, and your Web site consumes it. When a customer logs into your secure Web site, behind the scenes Visual Basic .NET code makes a request for conversion rates from my site using a protocol called SOAP (for Simple Object Access Protocol; more on SOAP later—for now, trust me). You don't have to know or care whether my site runs on Windows 2000, Solaris, UNIX, or whatever.
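From the consumer's side, the whole exchange collapses into an ordinary method call. The fragment below is a hedged sketch: CurrencyService and its GetRate method are hypothetical stand-ins for the proxy class that Visual Studio .NET generates from the provider's WSDL when you add a Web reference:

```vb
' Hypothetical proxy class generated by "Add Web Reference" from the
' currency provider's WSDL; the class name and method are illustrative.
Dim svc As New CurrencyService()

' The SOAP request and response happen behind this ordinary call --
' no DCOM, no knowledge of the provider's platform.
Dim rate As Decimal = svc.GetRate("USD", "DEM")

' How many dollars cover the DM10,000 invoice at this moment?
Dim dollarsNeeded As Decimal = 10000D * rate
Console.WriteLine("Dollars needed to pay DM10,000: " & dollarsNeeded)
```

The point is that the proxy hides every SOAP and HTTP detail: your code reads as if the conversion rate were computed locally, even though it came from a server you know nothing about.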

The .NET strategy permits us to do away with DCOM and get information from another computer platform anywhere on the globe in a secure and efficient manner. Your Web Service consumes my Web Service in a manner that is completely secure and transparent to your customer. You might also sign up with another Web Service provider that offers the best arbitrage rates on currency and yet another provider that validates and stores customer security information. When your Web Service consumes several other Web Services to provide a customized product or service, that is known as a federation of Web Services. Your customer does not know that you are using a federation of Web Services, built and maintained by others, to provide your end product. By using various Web Services (secure, transparent providers of functionality), you not only outsource the development by using their code but also outsource the risk, because the service providers have domain experts on staff and carry the various government certifications and insurance. You simply consume their secure Web Services and add the results to your own service. This scenario is the ultimate in code reusability because it operates on a global scale. Your development team is free to spend precious time and resources on developing applications that meet the needs of your customers by consuming Web Services from experts in various domains and assembling them in customized ways.

But let me stress that if you write any Windows software, Visual Basic .NET will continue to provide current and additional functionality. Your software will also be able to run on non-Windows platforms, very much like Java.

note

Most people know the Java language mantra "write once, run everywhere" from a few years back, referring to the Java Virtual Machine. After working with early renditions of Java, some developers used to joke that the phrase should really be "write once, debug everywhere." The designers of the .NET Framework certainly got the "write once, run everywhere" concept right.



Coding Techniques for Microsoft Visual Basic .NET
ISBN: 0735612544
Year: 2002
Pages: 123
Authors: John Connell
