Objective 1.01: Introducing .NET


Microsoft .NET is a set of next-generation software technologies that allows people, computer systems, and devices to communicate with one another in ways that were not possible before. In many ways, .NET is more of a revolution than an evolution. In one giant step, Microsoft has changed the way software development teams think about how applications are supposed to work.

Microsoft formally released the .NET platform in January 2002, primarily as a way to develop applications that can easily communicate over the Internet. In particular, .NET was designed to make it easier to develop a new type of application called a web service.

Author's Note

Microsoft maintains an extensive collection of .NET-related resources and articles on its web site at http://www.microsoft.com/net. The site contains links to both business and technical resources. Microsoft also maintains a site for the .NET developer community at http://www.gotdotnet.com.

The .NET platform is made up of a collection of new and enhanced Microsoft technologies:

  • Visual Studio .NET An integrated development environment (IDE) for creating .NET applications

  • .NET Framework and .NET Compact Framework The runtime environments for building, deploying, and running .NET applications

  • .NET Enterprise Servers The Windows family of server operating systems and server products such as Exchange and SQL Server

  • Smart clients A range of devices, running operating systems such as Windows XP or Windows CE, that can consume web services and run .NET applications

In this section, we will examine why designing applications for .NET is different from designing applications under the Win32 API and COM programming models, and how XML web services represent a new way of designing distributed applications.

The .NET Paradigm

The dictionary defines a paradigm as a set of concepts, assumptions, and practices that constitutes a way of viewing a particular topic. Before the introduction of .NET, the common paradigm for developing Windows applications involved using technologies such as Component Object Model (COM) for software components, Distributed COM (DCOM) for distributed applications, Microsoft Transaction Server/COM+ (MTS/COM+) for managing transactions, Active Server Pages (ASP) for web-based applications, and ActiveX Data Objects (ADO) for database access.

This old paradigm for developing distributed applications was called Windows DNA (Distributed Internet Applications Architecture). Windows DNA was originally introduced as a scalable architecture for developing distributed web and n-tier applications, but over the years the needs of distributed applications have outgrown the limits of the original DCOM model, which was defined more than five years earlier.

The new paradigm for developing applications involves class libraries for software components, web services and .NET remoting for distributed applications, COM+ for managing transactions, ASP.NET for web-based applications, and ADO.NET for database access. (.NET remoting is a powerful set of .NET Framework classes that allows developers to access remote applications through a variety of different methods. .NET remoting is briefly discussed in this chapter’s “From the Classroom” section.) Some of the technology may sound the same, but developers will see significant changes in the way these technologies are used.

On The Job

Over the past seven or eight years, billions of dollars have been invested worldwide in developing applications for the old Windows DNA infrastructure (COM and MTS). .NET has been available in beta form since late 2000 and as a final product since early 2002, but conversion to .NET has been understandably slow. This is primarily due to the amount of work and cost involved in upgrading existing systems and applications for benefits that are difficult to quantify in the short term. In fact, many companies are still developing new applications under the old model because their .NET infrastructure is not yet in place.

The .NET platform solves many of the problems software developers face every day and should make developing software easier and more enjoyable. Figure 1-1 shows how applications connect to the operating system (and to each other) through the .NET platform.

Figure 1-1: .NET acts as a middleware component between the application and Windows.

You will still find traces of the old DNA paradigm in .NET. In fact, .NET applications have the ability to connect and interact with COM components, which makes transitioning to .NET easier. But the fact remains that the COM/DCOM model of programming has definitely been replaced in the .NET world.

What Was Wrong with COM/DCOM?

The old paradigm was based on using Microsoft’s COM object model for interprocess communication. There was nothing inherently wrong with using COM objects to create distributable components. In fact, they were quite easy to create using a language that did most of the work for you, such as Visual Basic (creating them in C++ was another story). They were also quite easy to use.

Of course, there were some circumstances where COM was not particularly well suited. For instance, calling an object that resided on a different server across a network was not easy to set up. And if the network was slow or unreliable (such as the Internet), it was all but impossible.

Also, despite COM’s popularity, it remained a Microsoft-only technology; an attempt to port DCOM to Unix did not go very far. Because more than 70 percent of the web servers on the Internet run operating systems other than Windows (such as Unix), another solution had to be found.

Why Is .NET Better than COM/DCOM?

In some ways, comparing the Windows DNA architecture to .NET is like comparing apples to apple trees. Microsoft has spent considerable time and effort attempting to improve the good things about the old platform and fix some of the fundamental problems. For example, developers working in .NET can still develop COM components, but they are not limited to COM. That allows you to choose better ways of developing components, such as class libraries and web services, depending on the circumstances.

One of the biggest problems with the DCOM method for distributed application development is that the client computer needs to be specially configured to connect to the server component. This involves setting up proxies, stubs, and registry settings on the client machine, and the configuration is required not just for the first remote component but for every remote component the client uses. In short, the complex deployment model for DCOM applications is this method’s biggest weakness.

Using an XML web service, on the other hand, requires no special client configuration. Because web services are generally accessed over the Internet using standard technologies such as Hypertext Transfer Protocol (HTTP), Extensible Markup Language (XML), and Simple Object Access Protocol (SOAP), most clients already have everything required to connect to a web service, and consuming further web services requires no extra configuration on the client side.
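As a hedged sketch of what this looks like from the client's point of view, the following C# fragment calls a web service through a proxy class. The proxy name (StockQuoteProxy) is hypothetical; in practice it would be generated from the service's WSDL description by the wsdl.exe tool or by Visual Studio .NET's Add Web Reference feature, and no proxies, stubs, or registry entries need to be configured by hand.

    using System;

    class QuoteClient
    {
        static void Main()
        {
            // StockQuoteProxy is a hypothetical proxy class generated from the
            // service's WSDL; it handles the HTTP/SOAP plumbing internally.
            StockQuoteProxy service = new StockQuoteProxy();

            // From the application's point of view this is an ordinary method call.
            decimal price = service.GetQuote("MSFT");
            Console.WriteLine(price);
        }
    }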

There is also the problem of security and firewalls when trying to develop an application over a public network such as the Internet. Most systems administrators allow open access to only a few ports through the company firewall. All other ports are generally blocked for security purposes, which means applications that use these nonstandard ports will not operate from behind the firewall.

Employees of large corporations experience the effects of this when they attempt to use applications such as MSN Messenger and ICQ. These applications have trouble making it past secure firewalls. However, if those applications were instead designed to access the network using XML and SOAP over HTTP, they would generally have no trouble accessing the Internet.

COM components have some other shortcomings as well. For instance, it is very difficult to implement versioning in COM. Versioning allows more than one version of a particular software component to be installed on a computer at the same time.

In COM, trying to provide an upgrade to a widely used component is likely to break many applications that rely on that component. It is often better to create a completely new component, with a new name, than to attempt to upgrade an existing one. Even Microsoft was prone to this versioning headache, as evidenced by the many versions of ADO it has created over the years: 1.0, 1.5, 2.0, 2.1, and 2.5. The mixture of system files required to support that many versions on the same machine shows the scale of the problem that versioning created in the past.

.NET provides a much better way to implement versioning. The two key features that support this in .NET are application isolation and side-by-side execution. Application isolation is the idea that installing a new application on your machine should not affect an existing application. For example, with application isolation, if a new application installs version 2.0 of some important component, other applications that rely on version 1.0 of that component will not be affected.

Side-by-side execution is the concept that those two versions of the same component can be installed and functioning on the same machine. If you want to upgrade your existing applications to use a new version of a component, you must manually configure them to do so. Otherwise, the default behavior is to continue to use the old version.
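A minimal C# sketch of the underlying idea: every assembly's identity includes a four-part version number, and the runtime binds an application to the specific version it was built against unless a configuration file says otherwise. The example simply prints the name and version of the core library the program is bound to.

    using System;
    using System.Reflection;

    class SideBySideDemo
    {
        static void Main()
        {
            // An assembly's identity includes its version number, which is what
            // lets two versions of the same component exist side by side.
            AssemblyName name = typeof(string).Assembly.GetName();
            Console.WriteLine("{0}, Version={1}", name.Name, name.Version);
        }
    }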

What New Features and Benefits Does .NET Provide?

We have already seen how .NET compares with the old Windows DNA way of designing and developing applications. In addition to the improvements in that area, .NET also provides developers with several important new benefits and features:

  • A fully object-oriented programming environment, including the mandatory use of the .NET Framework

  • A Common Type System (CTS) instead of language-specific data types

  • An environment where memory and other system resources are managed by the system and not by the application

  • An environment where all the programming languages can interoperate seamlessly

  • An environment where all programming languages have similar runtime performance

  • With Windows Server 2003, the ability to turn existing COM components into web services without having to recompile them

  • The ability to add .NET Framework components to a COM+ application to take advantage of transactions, object pooling, queued components, and events

  • Application partitions, which allow two or more copies of the same application to run on the same computer with different configurations

  • An escape from “DLL hell,” allowing different versions of the same dynamic link library (DLL) to run on the same machine

Of course, no application environment can be all things to all people. There will always be developers who feel more comfortable working in a C++ environment, just as there are developers today who still write programs directly in machine code. Any programming language or environment that makes things easier for developers also tends to restrict them a bit. For example, a C++ application that runs directly on the operating system is generally responsible for allocating its own data storage areas in memory. When it has finished with that memory, the application is supposed to release it for other applications to use. Sometimes an application fails to release memory it no longer needs, a condition known as a memory leak.

When a .NET application attempts to use a component for the first time, the .NET Framework allocates and assigns memory for that component. When the application has exited or indicates that it no longer needs the component, the .NET Framework knows that it can release the memory used by that component. This process runs automatically in the background and is called garbage collection.

Garbage collection normally happens automatically behind the scenes, although a .NET application can request a collection explicitly through the System.GC class. If you are creating and destroying objects rapidly, thousands of times per second, your application could consume a large amount of memory before the garbage collector steps in, because managed code cannot deterministically free individual objects itself.
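The following is a small, self-contained C# sketch of the garbage collector at work. It allocates a batch of short-lived objects, then uses the System.GC class to request a collection and report how much memory the runtime believes is allocated; the allocation sizes and counts are arbitrary.

    using System;

    class GcDemo
    {
        static void Main()
        {
            // Allocate a batch of short-lived objects; the CLR tracks every one.
            for (int i = 0; i < 100000; i++)
            {
                byte[] buffer = new byte[1024];
            }

            Console.WriteLine("Before collection: {0} bytes", GC.GetTotalMemory(false));

            // An application can request a collection, although the garbage
            // collector normally decides on its own when to run.
            GC.Collect();
            GC.WaitForPendingFinalizers();

            Console.WriteLine("After collection: {0} bytes", GC.GetTotalMemory(true));
        }
    }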

There will be some developers who still prefer to allocate and deallocate their own memory, not wanting to rely on the system to handle those functions. Such developers can continue to create unmanaged code (such as traditional COM components) that can interoperate with .NET applications, marrying the old technology with the new.
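COM interoperability is covered in the next section; another common way for managed code to call into existing unmanaged code is Platform Invoke (P/Invoke). The sketch below, in C#, declares and calls the unmanaged Win32 MessageBox function from user32.dll.

    using System;
    using System.Runtime.InteropServices;

    class PInvokeDemo
    {
        // P/Invoke declaration for the unmanaged Win32 MessageBox function.
        [DllImport("user32.dll", CharSet = CharSet.Auto)]
        static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

        static void Main()
        {
            // Managed code calling straight into the unmanaged Win32 API.
            MessageBox(IntPtr.Zero, "Hello from managed code", ".NET interop", 0);
        }
    }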

COM and .NET Compatibility

Despite the obvious shift from creating applications in the old Windows DNA environment to creating applications in the new .NET environment, Microsoft realizes that many companies have large investments in COM components and applications. As we will see, COM components and .NET assemblies are fundamentally different technologies, but there is a way to make the old and new technologies work together.

.NET applications can call COM components as long as those components have been converted using a runtime callable wrapper (RCW). As its name implies, an RCW allows the .NET runtime—the Common Language Runtime (CLR)—to call the COM object inside the wrapper. The COM object will still run outside the .NET environment as unmanaged code, but the .NET application will treat it like any other .NET-compatible component.

In addition, 32-bit Windows applications can call .NET components, as long as these components have been converted into COM objects by using a COM callable wrapper (CCW). The CCW allows .NET components to be hosted inside a COM-compatible environment, such as Visual Basic 6.0 or MTS/COM+.
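As a rough illustration, the C# class below is marked so that it can be exposed to COM through a CCW; the class name and GUID are made up for the example. After the assembly is compiled, the regasm.exe tool registers it (and can export a type library) so that COM clients such as Visual Basic 6.0 can create the object. In the other direction, the tlbimp.exe tool generates an RCW assembly from an existing COM type library.

    using System;
    using System.Runtime.InteropServices;

    // ComVisible makes the class available to COM clients once the assembly
    // has been registered with regasm.exe; the GUID is a made-up example.
    [ComVisible(true)]
    [Guid("F2A3C9D1-1B2E-4E5A-9C1D-0A1B2C3D4E5F")]
    public class GreetingService
    {
        public string Greet(string name)
        {
            return "Hello, " + name;
        }
    }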

Things to Be Aware of when Choosing .NET

Choosing to convert an existing application to .NET is not a trivial task. First, you cannot expect the programming language you used for the old application to carry over unchanged; more than likely, its syntax has changed quite a bit to fit the new .NET way of doing things. For instance, it is often said that Visual Basic .NET (or just VB .NET, for short) is almost a totally different language from its Visual Basic 6.0 predecessor. Upgrading an existing application to .NET is not just a matter of recompiling; code changes will definitely be required.

Second, applications need to be modified to use .NET Framework classes to perform certain tasks. For instance, instead of using the Microsoft XML parser (MSXML) COM component, applications will use the classes provided by the System.Xml namespace. To use text boxes and push buttons, an application will need the Windows controls provided by the System.Windows.Forms namespace. In many cases, this means a lot of code needs to be rewritten.
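For example, here is a minimal C# sketch that uses the System.Xml classes in place of the MSXML COM component to load a small document and read a value from it; the XML content itself is made up.

    using System;
    using System.Xml;

    class XmlDemo
    {
        static void Main()
        {
            // XmlDocument is the .NET Framework's in-memory DOM parser.
            XmlDocument doc = new XmlDocument();
            doc.LoadXml("<order id=\"42\"><item>Widget</item></order>");

            // Select a node with an XPath expression and read its text.
            XmlNode item = doc.SelectSingleNode("/order/item");
            Console.WriteLine(item.InnerText);   // prints "Widget"
        }
    }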

Third, support for some of the older Windows development technologies has been dropped or drastically reduced in .NET. Dynamic Data Exchange (DDE), Data Access Objects (DAO) and Remote Data Objects (RDO) data binding to controls, Visual Basic 5.0 controls, ActiveX documents, user controls, and web classes are not supported in .NET. Applications that rely on add-ins or third-party components will need a significant amount of rework, as will applications that call the Win32 API functions directly.

Because applications are likely to require some rework under the new .NET architecture, it may be wise to examine the design of the application as a whole and see if changes need to be made at that level. For instance, you may find that an application that was originally created as one large executable might be significantly better if broken into two pieces—one that runs on a client PC and one that runs on a server.

On The Job

Microsoft provides a conversion tool to help Visual Basic developers move to .NET, but it only takes you part of the way. In many cases you are better off developing programs in .NET from scratch instead of trying to convert them.

Of course, this section is not meant to scare you away from converting to .NET. On the contrary, the platform simplifies application development and improves the performance, reliability, and security of applications that run inside its environment. But anyone embarking on a .NET conversion project should be aware that the gap between current programming methodologies and the new .NET methodology is wider than the gaps opened by previous technology transitions.

Introduction to Web Services

Companies have been developing applications for the Internet since the mid-1990s. In the early days, those applications were quite limited in what they could do. Back then, web applications used the Common Gateway Interface (CGI). These CGI applications were written in server-side languages, such as Perl or C, and ran directly on the operating system with no restrictions. They offered little built-in support: developers had to write their own code to communicate with the web browser, store user data, and handle user sessions.

A short time later, web application environments such as ASP were developed that provided developers some standard ways to communicate with the browser (for example, the Request and Response ASP objects) and handle user sessions and data (for example, the ASP Session and Application objects). When these ASP applications were integrated with other technologies such as ADO, MTS/COM+, and even custom-designed COM components, developers could create some powerful and dynamic web applications.

Figure 1-2 shows the way a typical ASP/COM web application is structured. The ASP application uses COM components that provide essential services, such as data access, to the application.


Figure 1-2: A typical three-tier web application without .NET

These types of applications worked well for users who accessed them with web browsers. For example, let’s assume a web-based e-mail client exists at http://www.example.com/mailreader. Users can use their browsers to log in to that application and read their e-mail. The problem is that you can’t easily use Microsoft Outlook, or any other 32-bit Windows application, to read e-mail from that location. Hypertext Markup Language (HTML) is a great way to deliver e-mail to browsers but a terrible way to deliver it to other pieces of software.

One of the solutions to this problem that has been developed in recent years is the concept of web services. The term XML web services describes an industry-standard way of integrating applications over the Internet using XML, SOAP, Web Services Description Language (WSDL), and Universal Description, Discovery, and Integration (UDDI). Practically speaking, a web service is an application that is accessible over the Internet using XML, which makes it easy for other applications to consume that service.
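As a hedged sketch, here is roughly what such a service might look like when written as an ASP.NET (.asmx) web service in C#. The class name, namespace URI, and hard-coded return value are illustrative only; ASP.NET exposes any method marked [WebMethod] over HTTP/SOAP and generates the WSDL description automatically.

    using System.Web.Services;

    [WebService(Namespace = "http://example.com/quotes/")]
    public class StockQuoteService : WebService
    {
        // Methods marked [WebMethod] become callable over the Internet.
        [WebMethod]
        public decimal GetQuote(string symbol)
        {
            // A real service would look the symbol up in a data source.
            return 27.45m;
        }
    }

A client such as the proxy sketch shown earlier would then call GetQuote as if it were an ordinary local method.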

On The Job

XML, SOAP, and WSDL are all standards created and maintained by the World Wide Web Consortium (W3C). You can read the official specifications on its web site at http://www.w3.org/. UDDI is a standard maintained by OASIS, at http://www.uddi.org/.

Figure 1-3 shows how the typical web service application is structured. The web service can interface with many different applications and systems.

Figure 1-3: Web services can accept SOAP messages from almost any platform.

There are many web services already available on the Internet today, such as the following:

  • Obtaining stock quotes

  • Searching the Amazon.com product database

  • Searching the Google web index database

  • Feeding topic-specific news headlines

Developers can integrate these web services into their own applications and web sites. And because XML web services are based on industry standards, it doesn’t matter what operating system or platform you use to develop or consume them.

On The Job

Applications that use web services are said to consume them.

The following industry standards are often associated with web services:

  • XML (Extensible Markup Language) A flexible and powerful new way of formatting data using markup tags

  • SOAP (Simple Object Access Protocol) An XML-based protocol for sending and receiving messages across the Internet

  • WSDL (Web Services Description Language) An XML-based language used to describe a web service

  • UDDI (Universal Description, Discovery, and Integration) A registry that lists businesses and web services on the Internet using WSDL

Web services hosted in an ASP.NET environment can easily be tested using a web browser. Figure 1-4 shows an example of a web service being tested manually. Of course, this is not how end users will consume the web service, but being able to test it quickly with a simple web browser is convenient for developers.

Figure 1-4: The ASP.NET testing interface for web services

start sidebar
From the Classroom—.NET Remoting

.NET remoting can be used to enable different applications to communicate with each other, regardless of whether they reside on the same machine or on different machines across a network. Remoting even allows applications running on different operating systems to communicate with each other, which is what gives .NET an advantage over Distributed COM (DCOM).

Remoting is a very flexible mechanism for application communication: it can be configured to use binary encoding when performance is critical, or XML encoding for interoperability among different systems. Remoting supports security features and can communicate over different channels. Objects that can be serialized can be accessed through .NET remoting, which overcomes many of the performance problems introduced by object marshalling. Marshalling is the process by which an object’s code and data are packed together, transmitted over a communication channel, and unpacked at the other end into a format the receiving system understands. Excessive marshalling of objects can be a serious bottleneck that slows down application performance.

The .NET Framework handles most of the work related to remoting by providing framework classes for channels and formatters. In order for an application on one computer to access a component stored on another computer, all it has to do is create a new instance of the remote component. .NET will create a proxy object on the client computer for your application to use. When you access a method of the local proxy object, .NET will send the call over the channel for the server to process. All of the communication aspects are hidden from the application, as if the object existed on the local machine all along.

—Scott Duffy, MCSD, MCP+SB, SCJP, IBMCD-XML

end sidebar
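As a rough sketch of the pattern the sidebar describes, the following C# program hosts a remotable object over a TCP channel and then calls it through a proxy. The Calculator class, port number, and URL are illustrative only; the code needs a reference to the System.Runtime.Remoting assembly, and in a real deployment the server and client portions would run in separate processes or on separate machines.

    using System;
    using System.Runtime.Remoting;
    using System.Runtime.Remoting.Channels;
    using System.Runtime.Remoting.Channels.Tcp;

    // A remotable type derives from MarshalByRefObject so that callers in
    // another application domain receive a proxy rather than a copy.
    public class Calculator : MarshalByRefObject
    {
        public int Add(int a, int b)
        {
            return a + b;
        }
    }

    public class RemotingDemo
    {
        public static void Main()
        {
            // "Server" side: expose Calculator as a well-known object over TCP.
            ChannelServices.RegisterChannel(new TcpChannel(8085));
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(Calculator), "Calculator", WellKnownObjectMode.Singleton);

            // "Client" side: ask the runtime for a proxy to the remote object.
            Calculator calc = (Calculator)Activator.GetObject(
                typeof(Calculator), "tcp://localhost:8085/Calculator");

            // The call looks local; .NET sends it over the channel for the
            // server to process.
            Console.WriteLine(calc.Add(2, 3));   // prints 5
        }
    }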



