It's Only Logical

Before I get into specific services for a robust solution, I'll talk in general terms about tiers and layers.

Choosing the Correct Architectural Model

There are always a few questions you should ask yourself, regardless of the requirements or the solution:

  • Should this be a Web application or a Windows application?

  • How many tiers should I use?

  • Do I need to interact with any legacy or third-party applications?

Sometimes the answers are obvious, sometimes not. The following sections delve into these questions in more detail.

Windows Application or Web Application?

The choice between a Web application and a Windows application often boils down to the requirements for the user interface. These requirements are discussed in more detail in "Choosing Between Web Forms and Windows Forms" later in the chapter, but for now, take a look at the other requirements that might affect this decision.

If you have a broadly dispersed client base, as in an e-commerce site such as Amazon.com, an online static content site such as Merriam-Webster.com, or an online interactive service such as Google.com, you are almost assuredly looking at a Web solution. Although some companies, such as AOL and CompuServe, have been able to distribute a client-side piece of software that works in concert with their online services, this setup is not typical (or simple to manage). Even in a corporate environment, a Web-based solution is often best, although it's running across a secure intranet instead of the wide-open Internet.

Apart from requirements for user interface features, the other main factor in choosing between Web or Windows is the expected deployment issues. This area is where things get interesting. In the Distributed interNet Applications (DNA) technology days of COM, deployment was such a risk factor that Web applications grew in popularity because they were, in effect, zero-administration deployments. You loaded the latest software on the server, and the new features or bug fixes were immediately in production. On the other hand, the COM world was rife with the "DLL Hell" problem, in which working software could be hammered at any time by a user loading a new update off the Web.

The .NET Framework solves many of these problems. First, it allows for "side by side" deployment of DLLs. Because disk space is not the issue it once was, there is less need to have only one DLL to service all applications on a machine. Now every application can have its own version of the same DLL, tuned and tested specifically for that application. Second, the .NET Framework supports "just in time" replacement of application components (sometimes referred to as "trickle" deployment), using versioning and a specified URL as the way to update components. This method enables you to deploy a new release to only one location and have the rest of the deployment details handled automatically by the .NET Framework. There has also been a change to the deployment model for .NET Windows applications, which is discussed more in Chapter 10, "Deploying and Maintaining the Application."

One other factor that shouldn't be overlooked is security. Properly securing a Web application from all types of attacks can be daunting. Having said that, the .NET Framework makes it much easier than it was under the "classic" ASP architecture. Windows applications, by their very nature, are easier to secure because they are usually not exposed to the outside world, beyond the corporate firewall.

Finally, don't forget to consider a hybrid solution, in which privileged users with full rights are running under Windows, and the read-only or limited-access users are running a less powerful version under the Web browser. These two User Services tiers could be connected at the Business Services Layer so that they share data and business rules. A good example might be setting up a Windows client for users who do heavy data entry to take advantage of the richer user interface, and creating a Web client for people to view the data or run reports.

XML Web Service?

Do you need a user interface at all, or is this solution two software programs talking to each other? Note that as in the previous section, a hybrid solution, with a Web application and a Web service both in scope, is often the proper solution in real life and on the exam. In ".NET Remoting Versus XML Web Services" later in this chapter, I compare using Web Services or .NET Remoting within a solution. This section covers two distributed, unrelated systems interacting.

How Many Tiers?

Even though n-tier architectures are by far the most common, in real life you should weigh your options before proceeding. Figure 7.1 shows what this architecture typically looks like.

Figure 7.1. Standard n-tier architecture at 50,000 feet.



In my experience, the Microsoft exams almost always focus on a logical n-tier solution, but take a brief look at your options. In his September 2001 article "Learn When to Use N-Tier Designs" in Visual Studio Magazine (http://www.visualstudiomagazine.com; p. 82), Rocky Lhotka discusses five aspects of a design that might cause you to choose a simpler architecture than n-tier. These are the dimensions he describes:

  • Application size Is the application large enough to require multiple logical tiers?

  • Concurrent users Are there enough concurrent users for scalability to be a concern?

  • Timeline pressure Do you have the time to do the more complex n-tier design?

  • Staff and skill set Do you have the expertise to do a quality job of designing a logical n-tier solution?

  • Future requirements Do you anticipate extending this application in the future so that it would be a worthwhile investment now to build it in layers?

The article cautions against assuming that n-tier is the correct architecture in all cases. Barry Bloom makes a similar argument in his March 2002 article "Adapt Your Web Architecture for .NET" in .net Magazine (http://www.thedotnetmag.com; p. 32).

Most of you have built single-tier or two-tier applications. Although I never found an occasion to use the Visual Basic bound controls, I did create several applications using Access Basic (in the early 1990s) that I would have to categorize as single-tier or two-tier.

There are a few more reasons for designing in tiers or layers. If you split your application up logically, there is more opportunity for reuse. This is proved by some of the reusable application blocks Microsoft offers for download (more on this topic in "Exception Handling," later in this chapter). Also, the use of layers enables you to physically separate your application, which makes scalability and performance-tuning through the use of hardware much easier. For more on designing in layers, see Chapter 9, "Creating the Physical Design."

[Exam Alert]

It seems unlikely that you would see a call for a single-tier or two-tier application on the exam, but for the sake of thoroughness, I've addressed these options very briefly in this section.


Are You Required to Interface to Other Software?

A common requirement, both in the real world and on the exam, is the need to interface with a legacy piece of code (often represented in a case study by the mention of a mainframe program) or a database. I'm going to use the liberal definition of legacy, which basically refers to any software that exists before your own solution is deployed.

At other times, the code is a black box piece of software, in which the interface and internal functionality are beyond your ability to control. This might represent a piece of vendor software that your new solution is required to interact with.

In almost all cases, planning for a logical module to serve as the interface between your own code and the foreign code is wise. This method not only better protects you from future changes (for example, retiring the legacy code), but also better compartmentalizes your work so that only a few developers need to understand how the other software works. When the communication between your own code and the legacy code is totally within your control, you might want to consider a message-based service interface. More on this later in the chapter in "Services and Components."

Although custom code is often the correct response, several Microsoft products can be appropriate in certain circumstances. For example, BizTalk Server provides the capability to seamlessly integrate disparate systems and platforms by allowing for the transfer (and transformation) of dissimilar data and/or processes. Direct from the marketing literature, its five main functions are document transport and routing, data transformation, application integration, process automation, and scalability and manageability services. BizTalk acts like a large adaptor, allowing businesses to share information through the use of transformation services. In fact, new BizTalk adaptors are being created all the time, making it easier for you to interface with software suites, such as SAP, PeopleSoft, or Siebel. BizTalk should be considered for a solution when the other system is outside the network boundaries of your enterprise.

Component Object Model Transaction Integrator (COMTI) is a product that simplifies the interaction between Microsoft technologies and the transaction-based Customer Information Control System (CICS) in the IBM mainframe world. It serves as a proxy layer, or abstraction wrapper, over SNA Server or Host Integration Server so that Microsoft programmers can avoid writing their own interfaces to the mainframe. For example, COMTI could interact directly with IBM CICS transactions, using the LU 6.2 protocol.

The other technology that enables you to "glue" two unrelated systems together is XML. More on that later in the chapter in "Neither Here nor There."

Before you move on, there is one other point to be made. Although you've likely seen diagrams like Figure 7.1 many times, it is becoming increasingly common to see standard services, or frameworks, represented by vertical boxes running through the entire solution. Items such as security, exception handling, and caching should also show up on a high-level architectural diagram. Going from 50,000 feet down to 25,000 feet (still pretty high) in Figure 7.2, you can begin to see more detail.

Figure 7.2. A look at n-tier architecture at 25,000 feet.



[Exam Alert]

I can't be sure Microsoft would be this tricky on the exam, but it's useful to note that when Microsoft uses the word tier, it is typically referring to a physical tier, and the word layer denotes a logical layer. I suppose the industry-wide term n-tier would be an exception to this rule. Keep that in mind during the exam. I have tried to use the terms correctly in this book, but at times it seemed more appropriate to use the phrase logical tier.


Auditing and Logging

The .NET Framework offers many features to help solution architects design auditing and logging into an application. Almost all these features are new or have been reworked so as to be much easier to use than in the past.

It has long been my philosophy that expecting to be able to re-create a user problem to troubleshoot the software is a weak strategy at best. Error logging, tracing, and auditing code need to be "baked in" to your application. Although practically unattainable in real life, my goal has always been to design the code in a way that one crash would provide enough information to isolate the defect to a specific trouble area, including environmental issues. This is done by a combination of monitoring code, logging code, trace code, and a good exception-handling strategy. Exceptions should leave a clear trail of bread crumbs for the developer to follow. Expecting to add this support at the end of the development cycle is impractical. It needs to be mapped out and built in right from the beginning. What does .NET offer to help you in this effort?

Tracing and Logging Classes

Tracing is the practice of inserting statements in your code that make normally hidden, abstracted details about the running software visible, usually in a log file or the system's event log. Tracing is done through the System.Diagnostics namespace.

You record breakpoint information so that you can accurately reconstruct what occurred at the time of the error. Pieces of state information, such as user ID, time of day, or the SQL statement or stored procedure that was being carried out, are all useful in your troubleshooting efforts.

In the past, at least for Visual Basic programmers, you had to manually construct, in code, a call stack listing each method that got you to the line of code where the error occurred. In .NET, this procedure is available free in the System.Exception object, so that's one less item you need to trace.

A useful feature of tracing is that it allows for levels. Although you might want a lot of detail in the early development stage, in later development or in testing, you might want less information. In production, you might want even less, or no information, in your trace stack so that the application runs as fast as possible. You can control trace levels (or even turn tracing off) in the application's config file or through a Registry setting. Either method enables you to change the level without having to recompile the application.

The Trace class and its cousin, the Debug class, are two more tools in the defensive programmer's toolkit. The main difference between these classes is that Debug calls take effect only in debug builds of your component, but Trace calls occur in the release build as well. Debug calls and trace calls are collected by listeners. This is the logging part of the equation. The .NET Framework has built-in listeners for writing to a text file, to an event log, or to the Debug window in the Integrated Development Environment (IDE). Of course, as you would expect, you can write your own listeners to output to other destinations, such as a database, a User Datagram Protocol (UDP) packet, and so on.
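
As a minimal sketch of how these pieces fit together, the following fragment registers a text-file listener and writes through a TraceSwitch whose level comes from the config file. The switch name "General" and the log file path are assumptions, not required names:

    using System.Diagnostics;

    public class TraceSetup
    {
        // The level is read from the <system.diagnostics><switches> section of
        // the config file, for example: <add name="General" value="3" /> (3 = Info).
        private static TraceSwitch generalSwitch =
            new TraceSwitch("General", "Application-wide trace level");

        public static void Init()
        {
            // Send all Trace output to a log file in addition to any attached debugger.
            Trace.Listeners.Add(new TextWriterTraceListener("MyApp.log"));
            Trace.AutoFlush = true;
        }

        public static void TraceInfo(string message)
        {
            if (generalSwitch.TraceInfo)
                Trace.WriteLine(message);
        }
    }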

[Note]

You don't need to explore all the details concerning tracing and logging; just be aware that any good application must include these features to assist in maintainability.


Because errors in your code can crop up in any layer of an application, it is important to have tracing and logging services in all your logical layers. The only exception might be browser-based errors on the client side of a Web application. In that case, you don't have the permissions or infrastructure in place to guarantee access to the event log or the file system.

You can store your information in the Event Viewer for retrieval later. Figure 7.3 shows an example of what several application errors might look like in the Event Viewer. Each error can be expanded to show more detail. With the proper permissions, you can even examine the event logs of other machines on the network.

Figure 7.3. The Application event log.

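Writing such entries takes only a few lines of code. A minimal sketch, where the event source name "MyApp" is an assumption:

    using System.Diagnostics;

    public class ErrorLogger
    {
        public static void LogError(string message)
        {
            // Register the event source once (requires administrative
            // rights the first time it runs).
            if (!EventLog.SourceExists("MyApp"))
                EventLog.CreateEventSource("MyApp", "Application");

            // The entry shows up in the Application log in the Event Viewer.
            EventLog.WriteEntry("MyApp", message, EventLogEntryType.Error);
        }
    }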


[Tip]

If you're interested in more information, Dan Appleman has released an eBook specifically about tracing and logging in .NET at his site: http://www.desaware.com.


Auditing

When I first see the word auditing, I think of database auditing, in which every update, insert, or delete is sent to a log file. However, I assume that Microsoft means much more than this when referring to "auditing." New in .NET is the capability to monitor, or watch, changes to any key files on the user's machine. This is done with the System.IO.FileSystemWatcher class. In this way, the developer has finer-grained control over the environment surrounding the application. With much more of the configuration and persisted metadata stored in XML, human-readable files, these could be important files to monitor.
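
A minimal sketch of watching those files follows; the folder path and event source name are assumptions:

    using System.Diagnostics;
    using System.IO;

    public class ConfigMonitor
    {
        // Keep a reference so the watcher is not garbage collected.
        private static FileSystemWatcher watcher;

        public static void Start()
        {
            watcher = new FileSystemWatcher(@"C:\MyApp", "*.config");
            watcher.NotifyFilter = NotifyFilters.LastWrite;
            watcher.Changed += new FileSystemEventHandler(OnChanged);
            watcher.EnableRaisingEvents = true;
        }

        private static void OnChanged(object sender, FileSystemEventArgs e)
        {
            EventLog.WriteEntry("MyApp", "Configuration file changed: " + e.FullPath);
        }
    }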

Auditing, creating a persistent record of noteworthy user actions, is an important aspect of any security strategy. As it relates to case studies and exams, any time you see an expressed need to track what a user does, consider an auditing module as part of the solution.

Windows Management Instrumentation (WMI)

It seems reasonable to expect a question somewhere on the exam that refers to Windows Management Instrumentation (WMI), so this section covers it briefly. WMI is the Microsoft implementation of the industry-standard Web-Based Enterprise Management (WBEM) and Common Information Model (CIM). In .NET, WMI is accessible through the System.Management namespace, and it uses a publish-subscribe pattern that is fairly well known to application architects. Its purpose is to allow the monitoring and control of distributed machines and distributed applications. This process is sometimes referred to as "health monitoring."

Basically, .NET can establish one or more managed events, called instrumenting an application, that trigger actions in the WMI consumer. The final destination could be just about anywhere: a database, the Registry, an event log, a packet to another machine, and so forth.

WMI could, for example, detect an attempt to access a system without knowing the correct password. Typically, this event is defined as the point when a password has been rejected three times. More common uses for WMI include performance metrics, health checks, significant events, or even severe error notifications (an application crash). Any event in your code can generate a call to WMI, however, if you set it up to do so. WMI events can be requested as exceptions occur or at a regular time interval. In the past, this was known as an "I am here" ping.
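
On the consumer side, querying WMI from .NET through System.Management is straightforward. A minimal health-check sketch; the choice of the Win32_OperatingSystem class is just an example (requires a reference to System.Management.dll):

    using System;
    using System.Management;

    public class HealthCheck
    {
        public static void Main()
        {
            // Ask WMI for the machine's free physical memory.
            ManagementObjectSearcher searcher = new ManagementObjectSearcher(
                "SELECT FreePhysicalMemory FROM Win32_OperatingSystem");

            foreach (ManagementObject os in searcher.Get())
                Console.WriteLine("Free physical memory (KB): " + os["FreePhysicalMemory"]);
        }
    }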

Although exception handling could easily have been included with the coverage of WMI events, it is such a large topic in its own right that it is discussed separately in the following section.

Exception Handling

Without getting into the specifics of how Try-Catch works (that topic won't be in this exam) or why it is better than the unstructured error-handling of Visual Basic 6, this section explains in general how you would develop an overall strategy for exception handling in your solution.

A good exception-handling strategy has three main goals:

  • Detect exceptions Don't allow exceptions to linger, causing the application to silently degrade or to crash unceremoniously without any warning or indication to the user of what just occurred.

  • Log exception details Record enough information so that a developer can retrieve it later and have enough data to reconstruct what happened.

  • Notify external sources about errors For severe or interesting errors, a fire alarm of sorts can be useful for monitoring remote users or unattended server applications. A server crash that goes unnoticed can be a disastrous event in many (most) solutions.

In all the applications I've helped design, I always made it a goal to include quality exception handling so that just a few error instances were enough to track the error to its source. I also included various monitoring capabilities in my exception-handling strategies so that error details were reported to the maintenance team assigned to troubleshoot a problem. Deadlines often pressure the team to take shortcuts in exception handling, but there are few places in the application where the payoff is larger. Good exception-handling logic (and tracing code, discussed earlier) can help solve a problem in hours instead of days (or even weeks).
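
A minimal sketch of a catch block at a layer boundary that meets all three goals; the order type, ProcessOrder, NotifyOperations, and userId are hypothetical names:

    using System;
    using System.Diagnostics;

    public class OrderService
    {
        private string userId;   // hypothetical state set at login

        public void SubmitOrder(object order)
        {
            try
            {
                ProcessOrder(order);   // hypothetical business call
            }
            catch (Exception ex)
            {
                // Detect: nothing escapes unnoticed at the layer boundary.
                // Log: record enough state to reconstruct what happened later.
                Trace.WriteLine("ProcessOrder failed for user " + userId + ": " + ex.ToString());

                // Notify: sound the fire alarm for severe errors (hypothetical helper).
                NotifyOperations(ex);

                throw;   // rethrow so callers still see the original exception and stack
            }
        }

        private void ProcessOrder(object order) { /* ... */ }
        private void NotifyOperations(Exception ex) { /* ... */ }
    }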

A final note on exception management: This is one of the two existing application blocks that Microsoft has made available to all developers, production-ready and royalty-free. "Application blocks" is Microsoft's term for actual code modules you can use "as is" or modify as you like. The source code comes with the download, in both C# and Visual Basic .NET versions. Others are planned for the near future. Watch the MSDN Library's "Patterns and Practices" area for further developments (http://msdn.microsoft.com/practices).

Internationalization, Globalization, and Localization

What nationalities and/or languages are you required to support with your solution? Chapter 3, "Gathering and Analyzing User Requirements," discussed how creating an international solution would affect the requirements. When determining the logical design of your solution, you need to figure out how to implement those requirements. For your purposes, globalization and internationalization can be used interchangeably. Basically, the terms refer to rendering your user interface in a way that is, to a greater or lesser degree, native to the user's country.

The .NET Framework has opened up the capability to support multiple languages in a way never seen before in Microsoft's previous software development platforms. The entire System.Globalization namespace is filled with goodies to make it easier to support a user interface (UI) that adapts to international users. In just a few lines of code, you can add a substantial amount of support for multinational users.

These are the items that usually need to be addressed:

  • Text The words themselves need to be translated for each language to be supported. .NET supports this task with resource files (*.resx), which are compiled into satellite assemblies that can be loaded dynamically.

  • String manipulation String comparisons and sorting code need to account for all languages supported.

  • Font Without a font change, text cannot be displayed properly in many languages. The correct font is crucial to making the text readable to your audience.

  • Currency Different currency formats and different types of currency (dollars, pesos, yen, euros) need to be displayed differently. Don't forget about currency exchange rates when dealing with e-commerce!

  • Numbers Different countries and languages display comma separators and decimal separators differently. Your application needs to account for these differences in the display and in the handling of any numbers.

  • Dates There are many different date formats throughout the world. In one country, 5/10/2002 is May 10; in another it is October 5, and in others, it is a meaningless string.

It might seem that internationalization and localization are confined to the user interface, but if you ever try to query a database or compare two dates and assume that everything is formatted in the English standard, you can easily create a mess. (Trust me on this; I've seen it, and it can get ugly.) Logical components for international support need to exist at all levels of the application.
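
To see how much formatting rides on the current culture, consider this minimal sketch using the System.Globalization namespace:

    using System;
    using System.Globalization;
    using System.Threading;

    public class CultureDemo
    {
        public static void Main()
        {
            decimal price = 1234.56m;
            DateTime date = new DateTime(2002, 5, 10);

            Thread.CurrentThread.CurrentCulture = new CultureInfo("en-US");
            Console.WriteLine(price.ToString("C"));   // $1,234.56
            Console.WriteLine(date.ToString("d"));    // 5/10/2002

            Thread.CurrentThread.CurrentCulture = new CultureInfo("de-DE");
            Console.WriteLine(price.ToString("C"));   // 1.234,56 with the euro symbol
            Console.WriteLine(date.ToString("d"));    // 10.05.2002
        }
    }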

Localization is more specifically defined as separating your code from your data in such a way that the same code base can support all languages. If your solution or case study has any expressed requirements to support an international user base, consider creating a localization module and having it serve all layers of your application. For more on this topic, search MSDN for any of the following keywords: "globalization," "internationalization," or "localization," or for the phrase "Planning World-Ready Applications." Nick Symmonds's book Internationalization and Localization Using Microsoft .NET (see the "Need to Know More?" section at the end of the chapter for details) is also a good resource.

Accessibility

The 70-300 exam will likely include one or two questions on accessibility, which refers to your solution's capability to be used by people with one or more disabilities. This topic was introduced in Chapter 4, "Gathering and Analyzing Operational and Infrastructure Requirements."

Within the .NET Framework, the accessibility classes in the System.Windows.Forms namespace (AccessibleObject and related types) provide embedded support for disabled users. The .NET Framework also provides support for the ShowSounds property. ShowSounds, intended as an aid for deaf or hearing-impaired users, is a system property that can be turned on or off from the Control Panel. If it's set to True, the application should display visual equivalents for any application sounds or speech. Additional information on ShowSounds is available at the MSDN Library: User Interface Design and Development > Accessibility.
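
A minimal sketch of honoring that setting inside a Windows Form; the control and helper names are hypothetical:

    using System.Windows.Forms;

    // Inside a Form: statusLabel and PlayNotificationSound() are hypothetical.
    private void AnnounceNewMessage()
    {
        // SystemInformation.ShowSounds reflects the Control Panel setting.
        if (SystemInformation.ShowSounds)
        {
            // Display a visual equivalent rather than relying on sound alone.
            statusLabel.Text = "New message received";
        }
        else
        {
            PlayNotificationSound();   // hypothetical helper that plays a .wav file
        }
    }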

To make your solution more accessible, there are documented guidelines you can follow, which are part of the Certified for Windows program. Five basic requirements should be considered during UI design:

  • Support system standards for font, size, color, and input.

  • Support the Windows High Contrast option (a Control Panel setting that establishes strong contrast colors between foreground and background visuals).

  • Support and document keyboard access to all application features.

  • Provide obvious notification of keyboard focus (which control currently has focus).

  • Do not convey information by sound alone.

For more information, search MSDN using the phrase "Microsoft Active Accessibility" or go to the MSDN Library: .NET Development > Visual Studio .NET > Product Documentation > Developing with Visual Studio .NET > Designing Distributed Applications > Planning Distributed Applications > Designing Accessible Applications.

Security

Discussions about security almost always fall into one of three areas (discussed in more depth in the following sections):

  • Authentication Who are you and can you prove it?

  • Authorization What rights do you have within the application space?

  • Encryption Guarding sensitive data when transmitting it from place to place or when storing it.

Authentication

Authentication is how the software verifies that the users are who they claim to be. The user need not be a human being. The user could be some other piece of software, even another class, or a different company accessing your solution through a Web Service.

Typically, three main authentication schemes appear on Microsoft exams:

  • Windows authentication

  • Forms authentication

  • Passport authentication

[Note]

The .NET Framework also supports other authentication schemes, such as Basic and Digest authentication, but they receive less mainstream coverage and are unlikely to be on the exam.


Windows authentication is applicable only within a corporate (internal) network or intranet. It relies on a trust relationship between the application and the Windows domain server. The premise is that if the NT domain says it has verified this user, and you trust NT security, you can assume the user is who he says he is.

You can specify within the config file (web.config or appname.config) whether you want the application to impersonate the logged-in user, basically running under whatever privileges that user has.
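
A minimal web.config sketch of Windows authentication with impersonation turned on:

    <configuration>
      <system.web>
        <authentication mode="Windows" />
        <identity impersonate="true" />
      </system.web>
    </configuration>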

Forms-based authentication is more appropriate for Web applications, where Windows domains are nonexistent or irrelevant. Although developers are left to implement forms authentication pretty much however they want, ASP.NET has built-in support through the FormsAuthenticationModule class (IHttpModule interface).

In the web.config file, you can specify that anonymous users (unauthenticated users) be redirected to a LoginUrl (for example, login.aspx). Then, after authentication is successful, they are returned to the page that was the original target URL. An important point here is that if you have no security issues (in other words, if anyone can access your site), you can specify that anonymous users are allowed access. This is done in the web.config file and by setting up IIS to allow anonymous users.
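
A minimal sketch of that configuration; the page name and timeout are assumptions:

    <configuration>
      <system.web>
        <authentication mode="Forms">
          <forms loginUrl="login.aspx" timeout="20" />
        </authentication>
        <authorization>
          <deny users="?" />  <!-- "?" means anonymous users -->
        </authorization>
      </system.web>
    </configuration>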

It is essential to preserve the user's authentication information so that this bit of "state" is retained for the duration of the user's time on your Web site.

Forms authentication in a Windows application architecture is typified by the standard Login pop-up form, where the user ID and password are entered and then compared to some data store. I think it's safe to say this model is well known to all.

Finally, Microsoft Passport authentication is based on your application interfacing with Microsoft's public Passport service. You redirect the user to the Passport login URL and are then notified whether the login was successful. There is, of course, an SDK you can download to assist in setting this up (http://www.passport.com).

Sometimes it's not a user that you want to authenticate, but a piece of code. Signing code with the Microsoft Authenticode technology is a way to verify that a piece of code you download comes from whom it claims to and has not been altered in transit. You can find more on this topic in the MSDN Library, under the "Security" hive.

Authorization

After you are sure who the user is, ask yourself another question: "What actions is the previously authenticated user allowed to perform?" Authorization is the answer to this internal question.

It is customary to store the fact that the user has been authenticated somewhere (in session state for Web applications and in the User Services Layer for Windows applications) so that you don't go through the process each time a different form is presented. This stored information makes authorization faster.

If you are in a Windows environment, you can use the Access Control List (ACL). This process is known as resource-based authorization. It can be based on a user's specific ID or on a domain user group to which he belongs. Access rights can control the user's ability to create, delete, read, and update objects, among other actions.

Rather than have each individual user authorized for each part of your application, it is more common, both in Web and Windows applications, to use role-based authorization. In role-based security, after you have identified the user, you determine his or her role: manager, administrator, anonymous user, and so forth. This role usually determines what the user can or cannot do. Role-based security is supported in .NET by Enterprise Services (see the "COM+" section later in this chapter). For more detail, search MSDN for "role-based" + "security" + ".NET."
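
A minimal sketch of a role check using the IPrincipal interface; the role name and business action are hypothetical:

    using System.Security.Principal;
    using System.Threading;

    public void ApproveIfAllowed(object claim)
    {
        // The current principal is established by the authentication step.
        IPrincipal user = Thread.CurrentPrincipal;

        if (user.Identity.IsAuthenticated && user.IsInRole("Manager"))
        {
            ApproveClaim(claim);   // hypothetical action restricted to managers
        }
    }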

When working with a Web application, you should set up your application so that the user is reauthorized on each Web page. This can be done with code or with settings in the web.config file. This setup keeps users from entering your application at midstream (by typing a URL directly into the address bar) and gaining access to a page they would normally be barred from.

In a Windows Forms application, authorization is much easier. You can create a role-based security system and pass the user's role code around (or store it as state information). Each transaction (create, read, update, delete) can verify that the user is allowed to perform the requested action before moving forward. Usually, the user is never allowed to get that far. Good programming of the User Services Layer requires you to disable any buttons or menu options that the user cannot use. In this way, you proactively avoid any chance of unauthorized actions.

[Exam Alert]

Microsoft, IBM, and others are jointly developing a specification called WS-Security, which is a collaborative initiative to make Web Services more secure. Microsoft has created an umbrella term for all related technology: Microsoft Global XML Architecture (GXA). Although this topic seems too new to appear on the exam, it might be worth a little snooping around inside MSDN. Searches for "GXA" or "WS-Security" will get you to more detailed information.


Encryption

Without going into details about the various encryption algorithms, solution architects should note any parts of the application where sensitive data is exposed to daylight. Any time you send data across the Internet or store it in a database, you need to consider encryption if there is any chance that others with malicious intent could read or alter it. Data privacy is a way to keep unauthorized people from reading your data or changing it without your knowledge.

[Note]

Both the Windows operating system (Windows 2000 and later) and SQL Server offer encryption capability.


The most common Internet technology for secure communication is Secure Sockets Layer (SSL). SSL has been around for some time and is the encryption strategy Microsoft recommends. One warning: SSL can have a negative impact on performance across the Internet, as the encryption and decryption effort takes time. It also has the side effect of reducing your configuration options. You can spot SSL usage by seeing https displayed in the address bar instead of just http.

As you are probably aware, in early 2002, Bill Gates issued the now-famous "Trustworthy Computing" memo. From that single memo came a significant shift toward securing all Microsoft products, including those developed with .NET. One book that was required reading within Microsoft at that time is Writing Secure Code, by Michael Howard and David LeBlanc (see the "Need to Know More?" section for more details). It's more than 700 pages, but if you are a serious solution architect, I recommend reading at least Part I, about 125 pages. A prominent concept throughout the book is threat modeling, in which you explicitly step through your solution, searching out weak points in the security model. The book recommends modeling places where sensitive data could be intercepted, where URLs can be spoofed or SQL statements injected with malevolent text, and where servers can be attacked, and devising solutions for these weak points.

On a related topic, although your web.config file is secure from prying eyes or from download, you can use built-in methods in the .NET Framework, such as FormsAuthentication.HashPasswordForStoringInConfigFile (in the System.Web.Security namespace), to encrypt strings that go into your config file. Database connection strings containing an ID and password, for example, would be good candidates for encryption.
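
A one-line sketch of hashing a password for storage in a config file:

    using System.Web.Security;

    // Produces a hex-encoded SHA1 hash suitable for pasting into a config file.
    string hash = FormsAuthentication.HashPasswordForStoringInConfigFile("s3cret", "SHA1");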

[Tip]

An entire section in the MSDN Library is devoted to secure development. It includes a downloadable PDF file, "Building Secure ASP.NET Applications" (MSDN Library: .NET Development > .NET Security). You might want to download it, but I would caution against printing it out; it's more than 600 pages long!


One final word of warning seems appropriate here. Security is an end-to-end proposition. It needs to be evaluated at all levels of the solution. Encrypting an ID or password in one part of the application and sending it in the clear in another damages the overall security of the application. Like a stereo system, the sound (here, security) is only as good as the weakest component. The Writing Secure Code book mentioned previously provides a process for ensuring that this doesn't become an issue.

Data Access Layer (DAL)

The Data Access Layer (DAL) is such a large topic, as it includes developing a logical data schema, that I have separated it out into its own chapter, Chapter 8, "Creating the Logical Data Model." This section is just an "honorable mention" so that you, the reader, know I didn't forget. Stay tuned.

A data access helper is the other application block that existed at the time of this writing. Although it most likely needs to be augmented with application-specific code to provide a complete Data Access Layer, it wraps the SQL and OLE DB calls to ADO.NET in such a way that one or two lines of code can often take the place of 10 or 20 lines of code.

Business Logic Layer (BLL)

The Business Logic Layer (BLL), or Business Services Layer, is where the majority of the "rules" governing how an application works are stored. A common strategy is dividing the business layer into two parts:

  • Data objects, which hold state, are limited to properties, and are serializable

  • Business rules classes, which are often stateless and contain mostly methods

The theory is that data objects are populated and passed around the application, carrying state with them. In this way, business rules can receive these stateful data objects as input parameters. This method provides maximum flexibility and scalability, but at the cost of some extra complexity. As you can see in Figure 7.4, the Data Access Layer and the Business Logic Layer avoid holding state; instead, they use serializable, passable business objects to store state information.

Figure 7.4. Populating and passing business objects around.

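A minimal sketch of such a data object; the class and property names are hypothetical:

    using System;

    // Marked Serializable so it can be passed across layer (and machine) boundaries.
    [Serializable]
    public class ClaimData
    {
        private int claimNumber;
        private string claimantName;

        public int ClaimNumber
        {
            get { return claimNumber; }
            set { claimNumber = value; }
        }

        public string ClaimantName
        {
            get { return claimantName; }
            set { claimantName = value; }
        }
    }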


If done properly, you can design the business classes and components to be reusable within the current organization and maybe even outside the organization. Many frameworks exist on the market because smart design allows for reuse outside the original intentions, just like an interchangeable part used in manufacturing. Reuse has never quite lived up to the expectations heaped on it, but I have seen it first-hand; when done properly, it can be a real financial boon.

The other aspect of the Business Logic Layer is caching static data appropriately. Through the intelligent use of caching, you can improve an application's overall performance manyfold. Unlike the business objects just discussed, caching does involve holding state, but it is generally global state that all users access, not state that's specific to one user or one session.

User Services Layer (USL)

The User Services Layer (USL), often known as the User Interface (UI), is where you need to make a major design decision. Typically, the User Services Layer is separated into two logical sublayers: the Graphical User Interface (GUI), which includes the forms and controls the user actually sees, and the user processes code, which orchestrates how the entire GUI interacts.

Now focus on the same questions posed earlier in the chapter, but with specific requirements for the user interface in mind:

  • Should this solution be a Web application, hosted in a browser?

  • Should this solution be a Windows application, using the available features of a rich, responsive local client?

  • Does this solution require a visible user interface at all (Windows services, Web Services, Remoting server)?

User Services for Web Applications

Instead of a single box representing the User Services Layer, there is usually one box for the ASPX page (the GUI), one box for the "code behind," and another box for the User Services Layer that isolates the ASPX code from the Business Layer (see Figure 7.5).

Figure 7.5. The User Services Layer for Web applications.



[Note]

The User Services wrapper layer is sometimes referred to as the emissary in Microsoft documentation, but that term seems unlikely to be on the exam.


User Services for Windows Applications

Using the .NET Windows Forms as the user interface for your application gives you an extremely rich and responsive environment. If your solution requires sophisticated, graphical user interaction, you will likely lean toward a Windows GUI. Even with all the improvements in Internet browsers and in the server-side capabilities of .NET, Windows client applications still offer the most options.

Windows applications typically have just two logical components in the User Services Layer: the form itself and the User Services Layer, which again sits between the graphical controls and the business component (see Figure 7.6).

Figure 7.6. The User Services Layer for Windows applications.



Other options, such as Windows Services or XML Web Services, do not have a GUI. Therefore, this layer could possibly be skipped because the main purpose of the User Services Layer is to interact with a live user.

Other services, such as exception handling or security, might reside in the User Services Layer, but are logically separate and discussed later in this chapter.

Before moving on, I should mention one other technology that falls into this area and might appear on the exam: DirectX. Microsoft DirectX enables you to develop extremely rich, multimedia-style graphics, video, 3-D animation, and surround sound. Some kiosk solutions require this level of sophistication for their interface. Any requirements for this level of UI sophistication indicate that a Windows Forms solution is the best choice.

Choosing Between Web Forms and Windows Forms

One of the most crucial decisions is whether to select a Web application or a Windows application. Sometimes, the decision is obvious. If you have an e-commerce site to be used by people all over the world, on their own PCs, you will probably want to create a UI that runs inside the browser. Conversely, if you have a small group of corporate users, all in a single location and using company machines, a Windows application provides the flexibility without the overhead of a server infrastructure.

Sometimes, you will see the need to create both. A hybrid Windows/Web application, my personal favorite, is joined at the Business Layer to provide a secure, rich-client environment for internal users and still offer a thin-client, zero-deployment option for remote users with different usage patterns. If you have two distinct types of user personas (discussed in Chapter 1, "Envisioning the Solution"), the hybrid solution can be an effective logical architecture. It is especially good at providing a secure Windows authentication model for local power users and scaled-back functionality for remote users, lessening your security risks outside the firewall. The trick is to do a good job isolating your User Services Layer from your Business Services Layer. The User Services Layer should be as thin as possible, to the point that it takes very little effort to create both a Web interface and a Windows interface in the application.

Although much of the emphasis in the .NET literature is on Web applications, Windows Forms applications should not be overlooked. In fact, several recent articles have predicted a return to the popularity of Windows client applications (see the articles by Rick Strahl and Jason Clark in "Need to Know More?"). The new deployment model of .NET makes this option less problem prone than in the past. A good illustration of one drawback to using a Web GUI was made clear to me recently. In one of the large bookstore chains over the holiday season, I attempted to use its in-store "title locator," a thinly veiled Web application. The response time reflected the heavy usage over the entire network. If the software had been a rich client interface talking to a Web Service (or a remote business tier), at least the UI would have been responsive, even if the query had taken the same amount of time.

Services and Components

Another architectural question you need to ask is one that didn't get much attention before .NET: Should I create a services-based architecture?

In the days of COM, most interfaces were built to be strongly typed. In the upfront design stages, a lot of time was spent hammering out interface specifications because changing them later was difficult. Now, with services-based architecture, you can create a small, narrow, message-based interface specification and then send whatever you want as message content. This method is analogous to using HTTP or Simple Object Access Protocol (SOAP), in which the protocol specifies the envelope's format, but not the message inside the envelope. This is the essence of a loosely coupled, messaging architecture.

Services are different from standard object-oriented design in that they typically do not hold state. Instead, they rely entirely on whatever data is passed into them as parameters to perform their tasks. The usage pattern is almost always a single call to the service that can run longer than a method call to an object. Because of this usage pattern, well-designed services are usually coarse-grained, another departure from standard object-oriented programming.

Services are also an alternative to callbacks. In a service-based architecture, the request message itself carries both the operation to invoke and the state it needs, and services are typically asynchronous in nature.

[Caution]

A word of caution: Although service architectures are easier to build because of the loose coupling, they are often harder to extend or refactor.


BizTalk Server is based on the concept of loosely coupled services. BizTalk passes messages between disconnected systems, serving as a sort of translator, much as an English-speaking person and a Russian-speaking person would use an intermediary who speaks both languages.

Another sort of services-based architecture is Windows services. Creating Windows services has been possible with Visual C++ for some time, but it is new to Visual Basic developers. In addition, it has never been easier to create, install, and debug a Windows service. The main purpose of using a Windows service is its unattended, nonuser-focused nature. Most Windows services are set to run automatically when the machine is booted and run outside the context of any user. User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) listeners, SQL Server, and IIS are some of the more common Windows services.

[Note]

Although Windows services did not appear to be part of the 70-300 exam, having a complete picture of all the options is important. In addition, Windows services are one of the four major topics covered in detail in the 70-310/320 exam.


For more information on this topic, check the following resources:

  • "Distributed LOB Application Design Using Application Blocks for .NET," an Architect Webcast that first aired on December 10, 2002 (http://www.microsoft.com/usa/webcasts/ondemand/default.asp).

  • "Application Architecture," an edition of "The .NET Show" that first aired on May 3, 2002, which discusses services architecture in even more detail (http://msdn.microsoft.com/theshow/Archive.asp).

  • A good white paper, "Application Architecture: Conceptual View," by Maarten Mullender in the MSDN Library (http://msdn.microsoft.com/library) at Enterprise Development > Architecture > Application Conceptual View > Services.

Managing State

Managing state is a crucial and controversial decision in the life of any solution. By definition, state is the persisting of values during the application's lifetime, as seen by the user. Technically, any time you store data in a database, you are managing state, but I'll limit this discussion to the dynamic state usually kept in memory or temporary storage.

Stateless components and objects are ideal because things run faster and scale better, and there is less chance for confusion or for state data to become out of date. However, even in the Internet world, where stateless behavior is the design model, tricks to hold state are often necessary.

Managing State in a Web Solution

In a Web application, you generally use one (or more) of four techniques for managing state:

  • Session object

  • Application object

  • Cache object

  • Page object (the .ViewState property)

The Session object is a way to hold values specific to a single user over a single session. Values such as a user ID or a shopping cart are typical of items that fall under session state. In a corporate intranet application, you might store Business Layer data objects in session state. For example, if you were writing an intranet application for processing insurance claims, you might hold both the current user's ID and information about the claim (claim number, claimant name, and so forth).

In ASP.NET, the Session object has the capability to store state in-process (in the same memory space), out of process, or in a SQL Server database. This last option allows state to be stored and shared across an entire Web farm, making it easier to dynamically load-balance without requiring the user to stay on the same machine. Although it still uses cookies by default, ASP.NET now supports cookieless mode, embedding the SessionID into the URL, and key/value pairs in hidden controls. By default, session state expires after 20 minutes of inactivity, but this value can be adjusted in the web.config file.
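
A minimal web.config sketch; the mode, connection string, and timeout shown are assumptions:

    <configuration>
      <system.web>
        <sessionState mode="SQLServer"
                      sqlConnectionString="data source=STATESERVER;user id=session;password=secret"
                      cookieless="false"
                      timeout="20" />
      </system.web>
    </configuration>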

The Application object is useful for storing data or objects that apply equally to all users, as in an e-commerce site's product catalog, a "quote of the day," or even a list of states and their abbreviations. The Application object holds on to its state until the Web application is stopped. It is best for holding static or global data that rarely changes. In a Web farm environment, each machine has its own, separate Application object. Unlike the Session object, it cannot be stored in SQL Server (automatically, at least). Therefore, changes to applicationwide state need to be communicated to all Web servers.

The Cache object, new to ASP.NET, is in some ways similar to the Application object, in that it stores globally accessible static data that is available to all users. However, the Cache object has some extra features that make it a better choice when the data being stored is subject to change more frequently.

You store key/value pairs in the Cache object for the same reason you would store them in the Application object: to make them quickly accessible to all users on the Web server. The difference is that you can set up the Cache object to refresh itself automatically, either when the underlying data changes (for example, the underlying XML file is updated) or at a specified time interval. Additionally, the Cache object manages the locking and unlocking required for thread synchronization for you, but the Application object requires explicit code calls to the Lock and Unlock methods.
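
A minimal sketch of inserting an item with both a file dependency and an absolute expiration, written inside a Web Form (where Cache and Server are built-in properties); the key, file, and object names are assumptions:

    using System;
    using System.Web.Caching;

    // Refresh the cached catalog when catalog.xml changes on disk
    // or after one hour, whichever comes first.
    Cache.Insert("ProductCatalog",
                 catalogData,
                 new CacheDependency(Server.MapPath("catalog.xml")),
                 DateTime.Now.AddHours(1),
                 Cache.NoSlidingExpiration);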

Also new to ASP.NET is the .ViewState property of the Page object and of each control on the ASPX page. The value for each control is stored in a hidden control. Whenever the page does a PostBack, data is sent to the server in the Request object and returned to the browser in the Response object. This is a simple way to hold the values for a page when no page navigation is occurring (the user stays on the same page).

The downside, as you probably suspected, is that extra data is being transmitted over the wire. This extra data might not be noticeable in an intranet application or on an ASPX page with just a few controls, but it should be a consideration when deciding whether to use this feature. You can control the .ViewState property at both the control level and the page level.

For all techniques, state in Web applications is usually in the format of key/value pairs, but the value could be a simple string or a complex custom object, composed of other, more finely grained objects.

Managing State in a Windows Solution

Managing state in a Windows application, although not always desirable from a performance and scalability standpoint, is much easier than in a Web-based application. In a client/server application, state management is common. Each user has his or her own copy of the business services components, and tracking individual users is easily accomplished through the use of unique IDs.

Keep in mind that it is almost always better to not hold state. However, if creating stateful objects and components improves the application's performance without significant adverse effects in other areas (such as scalability or maintainability), it should be considered. If you had to set up your desktop environment every time you logged in to your PC (including that fancy wallpaper with the James Bond BMW you have your eyes on), it would clearly hurt your productivity. The same applies to state in your solutions.

Synchronous Versus Asynchronous

Before you dig into this topic, make sure you understand the difference between running tasks synchronously and asynchronously. In a synchronous call, the most common type used in applications, you stop whatever you are doing (see Figure 7.7) and wait for the called code to finish running before you resume. In fact, many code structures rely on this pattern. Calls to methods that perform setup activities or check data validity would be useless if the calling code resumed as soon as the call was made. Most actions inside an application are meant to be synchronous, as they must occur in a predictable sequence. Otherwise, you would constantly be subjected to what is commonly termed race conditions, in which results are indeterminate.

Figure 7.7. Synchronous and asynchronous process calls.



However, when you can afford the ambiguity of an indeterminate completion, look to asynchronous calls (shown in Figure 7.7) to assist you. In this case, you ask another process to do something for you, but then you go right back to what you were doing, perhaps servicing user requests from the user interface. You then check occasionally to see whether the asynchronous task is completed, you wait for some sort of callback or reply upon completion, or, in some cases, you don't care when, or if, the asynchronous task is ever completed.

Asynchronous processing can also assist in load-balancing across physical machines and help ensure a more fault-tolerant architecture. The .NET Framework offers a variety of ways to do asynchronous processing. First, take a look at a couple of simple approaches to simulating asynchronous processing. These approaches look like asynchronous operation, but in reality still block other code from running:

  • As in the past, you can use the .Tick event of a Timer control placed on a form. It enables you to interrupt whatever you were doing and do something else, but while you do the "something else," your original code is still blocked.

  • You can use .NET's Application.Idle event to work on tasks that have no specific completion deadline. Whenever the application is idle (waiting for user input, for example), you can choose to start doing something else. The trick is to keep these "idle" tasks brief because the user could return at any moment.

If you want to interrupt the current process to do simple tasks of short duration, these two approaches work just fine. However, what if you want to run an intensive section of code in the background, while the user continues to use your application in the foreground, oblivious to what is going on behind the scenes? By prefetching data or running other operations in the background, you can often mask the long latency issues of a local area network (LAN) or wide area network (WAN), just as a good coat of paint can mask mistakes on a wall. If used properly, this method is a wonderful way to improve the perceived performance of your application.

A more legitimate approach to asynchronous programming is the use of .NET's BeginInvoke() and EndInvoke() methods. The main issue with a form control's BeginInvoke() is that the code still runs on the same thread. However, you can combine the Invoke feature with the use of delegates, which spawn their own thread automatically, to create an asynchronous process, running on a separate thread, without explicitly creating a new thread. By supplying the delegate with the address of the method to be run (using the AddressOf operator in Visual Basic .NET), you can "fire and forget," returning almost immediately to your original process. If you want to know more about this approach, read Robert Teixeira's well-written article, with sample code included (Visual Studio Magazine, May 2002, p. 18; http://www.visualstudiomagazine.com).
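
In C#, the same fire-and-forget pattern looks like this minimal sketch; the delegate, method, and parameter names are hypothetical:

    public delegate void PrefetchHandler(int customerId);

    public class OrderPrefetcher
    {
        public void Start()
        {
            // BeginInvoke queues PrefetchOrders onto a thread-pool thread
            // and returns immediately ("fire and forget").
            PrefetchHandler handler = new PrefetchHandler(PrefetchOrders);
            handler.BeginInvoke(42, null, null);
        }

        private void PrefetchOrders(int customerId)
        {
            // long-running background work here
        }
    }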

Finally, you have what you've been waiting for all along. The .NET Framework enables you to easily spawn a new thread and then run code on that thread while your original code continues to run. This capability has always been present in Visual C++, but it is much easier with the .NET Framework. The capability to spawn (create) a new thread is brand new to Visual Basic programmers and is a powerful feature. If it's not used with care, you can get into obscure and hard-to-find defects.

All threading support is located in the System.Threading namespace (the obvious name choice). The main challenge is to make your application "thread safe," meaning that if you reenter a method on a separate thread, you don't access data that is relevant only to the original thread. Ensuring that your application is thread safe is usually an issue of state. Creating tightly scoped variables and avoiding state as much as possible make your job easier.
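
A minimal sketch of spawning a worker thread and guarding shared state with a lock; all names are hypothetical:

    using System.Threading;

    public class WorkerLauncher
    {
        private static readonly object syncLock = new object();

        public static void Start()
        {
            Thread worker = new Thread(new ThreadStart(DoWork));
            worker.IsBackground = true;   // don't keep the process alive on exit
            worker.Start();
        }

        private static void DoWork()
        {
            lock (syncLock)   // serialize access to any shared state
            {
                // long-running work here
            }
        }
    }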

In addition to these custom-coded techniques, you can also use Microsoft Message Queue (MSMQ; part of COM+) and BizTalk for asynchronous programming, in which a central queue receives your request much as a post office does, and then passes it on to the indicated service (as with an address) for processing.

You could spend an entire chapter or more just on this topic, but it's time to push on. Now that you have all the boxes selected and arranged, how do you get them to interact with each other? Although not explicitly in the Microsoft exam guidelines, the following sections briefly cover protocols and options for the different modules of your solution to communicate.

Neither Here nor There

Between all the functional "boxes" discussed in the previous section, you need some interaction mechanism to share information. In the past, you had primarily COM and DCOM. Now you have SOAP, XML, Web Services, COM (still), and .NET Remoting. Dig a little deeper, and you have HTML, HTTP, UDP, OLE DB, and RPC. The following sections walk you through a few of the major interaction mechanisms.

COM

If you go with the maxim "age before beauty," you should consider COM first. Many of you cut your teeth on COM, and it is still supported in .NET. In fact, when there is a major investment in legacy components, it is wisest to leave parts of the solution in COM and use the .NET Framework's COM Interop feature to protect your investment.

Interaction with COM is accomplished with the help of .NET Interop services (System.Runtime.InteropServices namespace) and marshaling. Interop marshaling is best when limited to simple, native data types. The more complex the type, the worse performance will be. One data type to avoid as much as possible is the Variant data type, commonly used in COM, but unavailable in .NET.

The following sections explore how you can use COM components in your .NET applications and how you can use .NET objects in your COM applications.

Runtime Callable Wrapper

The Runtime Callable Wrapper (RCW) is how you talk to COM from .NET. The RCW is a local proxy wrapper that your code interacts with, allowing the code to think it is talking to just another .NET object. Visual Studio can create an RCW for you automatically if you use the Add Reference context menu (right-click on the References folder in the Solution Explorer window). If you are not using the Visual Studio IDE, you can use TlbImp.exe to accomplish the same end.

Figure 7.8 shows your code calling the RCW, which goes through the .NET Interop services, in turn calling a COM component through its IDispatch interface.

Figure 7.8. The Runtime Callable Wrapper (RCW).



When you are done with a COM object, you need to make sure to "release" it with the Marshal.ReleaseComObject() method (in the System.Runtime.InteropServices namespace), manually simulating the automatic reference counting you got for free in COM.
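As a minimal sketch, assuming you have added a COM reference to the Microsoft Scripting Runtime (so that the Scripting RCW namespace exists), explicit release looks like this:

' Releasing an RCW-wrapped COM object deterministically.
Imports System
Imports System.Runtime.InteropServices

Public Class ComCleanupDemo
    Public Sub UseComObject()
        Dim fso As New Scripting.FileSystemObject()   ' RCW around a COM class
        Try
            Console.WriteLine(fso.FolderExists("C:\Temp"))
        Finally
            ' Decrement the COM reference count now instead of waiting
            ' for the garbage collector to finalize the RCW.
            Marshal.ReleaseComObject(fso)
        End Try
    End Sub
End Class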

For the record, COM objects can be called "late bound" as well. In this process, the code takes on some of the burden normally handled by the IDE.

For the purposes of this book, just knowing it can be done is enough. If you want to know more, however, check out David Platt's article (MSDN Magazine, August 2001, p. 44; http://msdn.microsoft.com/msdnmag) or look into the Activator class's CreateInstance() method.
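For the curious, here is a minimal late-binding sketch; the ProgID MyCompany.OrderCalc and its Total method are hypothetical:

' Late-bound COM access: no RCW assembly is referenced, and the call
' is resolved at runtime through IDispatch, as in Visual Basic 6.0.
Imports System
Imports System.Reflection

Module LateBoundDemo
    Sub Main()
        Dim comType As Type = Type.GetTypeFromProgID("MyCompany.OrderCalc")
        Dim calc As Object = Activator.CreateInstance(comType)

        Dim result As Object = comType.InvokeMember("Total", _
            BindingFlags.InvokeMethod, Nothing, calc, _
            New Object() {100, 0.05})

        Console.WriteLine(result)
    End Sub
End Module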

Caution:

One word of warning: in Version 1.0 of the .NET Framework, there is a known issue with COM controls hosted on forms. Apparently, all errors inside the COM component are silent and go unreported to the host form. I'm not sure whether this will be changed in future releases of the .NET Framework, but be warned.


COM Callable Wrapper

The COM Callable Wrapper (CCW) is how you expose your .NET objects to the COM world. You do this mostly through the use of various attributes in the class. When exposing a .NET class to COM, you need to supply the three GUIDs that COM expects: a class ID (CLSID), an interface ID (IID), and an event interface ID. RegAsm.exe then uses these values to create the necessary Registry entries.

Figure 7.9 shows your COM-based application talking through the .NET Interop services, which have installed the proper Registry entries to fake a standard IDispatch interface that COM can interact with. Your .NET object, safely wrapped inside the managed .NET environment, has no idea it is talking to a COM client.

Figure 7.9. The COM Callable Wrapper (CCW).



If set up properly, including some settings in the Project Property Pages dialog box, .NET generates a COM-style TypeLib and the necessary Registry entries to allow your object to be called from Visual Basic 6.0 or Visual C++ 6.0.
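To make the attribute approach concrete, here is a minimal sketch of a COM-visible .NET class; the GUIDs are placeholders (generate your own), and the class and interface names are hypothetical:

' Exposing a .NET class to COM through Interop attributes.
Imports System.Runtime.InteropServices

<ComVisible(True), _
 Guid("22F5E08F-2E9B-4E97-B5B2-0A3F8C1D4E01"), _
 InterfaceType(ComInterfaceType.InterfaceIsDual)> _
Public Interface ITaxCalculator
    Function Compute(ByVal amount As Double) As Double
End Interface

<ComVisible(True), _
 Guid("5A1B3C77-9D4E-4F02-A6C8-1B2D3E4F5A60"), _
 ClassInterface(ClassInterfaceType.None)> _
Public Class TaxCalculator
    Implements ITaxCalculator

    Public Function Compute(ByVal amount As Double) As Double _
        Implements ITaxCalculator.Compute
        Return amount * 0.05
    End Function
End Class

After compiling, running RegAsm.exe against the assembly (with the /tlb switch to also export a type library) creates the Registry entries that make the class visible to COM clients.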

The main value of the RCW and CCW options is to protect your investment in legacy ("classic") code. Of course, there is a penalty in complexity and performance. Weighing options and making good decisions are part of a solution architect's responsibility. Obviously, using CCW costs you a degree of deployment flexibility because you must register classes, as in the days of COM.

Note:

Finally, you can hide certain methods from the COM client by setting the ComVisible() attribute to False. Although the Common Language Runtime defaults to True, the IDE overrides the default and sets everything to False. Your best bet is to follow the published Microsoft standards and to set this value explicitly.


COM+

COM+ has a special place in the .NET scheme: providing (as of .NET version 1.0) functionality not available in the .NET Framework. Object pooling, transaction management, synchronization, queued messages (MSMQ), and role-based security, for example, are still reasons to tie your .NET code in with COM+.

For reasons like those just mentioned, the capability to call COM+ features directly, without the Interop wrappers mentioned previously, is built into the .NET Framework. COM+ features are now referred to as Enterprise Services and are available in the System.EnterpriseServices namespace. .NET classes that use COM+ inherit from the ServicedComponent class in that namespace and are referred to as serviced components.
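Here is a minimal sketch of a serviced component, assuming a strong-named assembly and a reference to System.EnterpriseServices.dll; the class and method names are hypothetical:

' A .NET class that borrows COM+ transaction services.
Imports System.EnterpriseServices

<Transaction(TransactionOption.Required)> _
Public Class OrderProcessor
    Inherits ServicedComponent

    <AutoComplete()> _
    Public Sub PlaceOrder(ByVal orderId As Integer)
        ' Work done here is enlisted in a COM+ transaction; if the
        ' method throws an exception, AutoComplete votes to abort.
    End Sub
End Class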

Exam Alert:

Serviced components are one of the four main topic areas of the XML Web Services exam, 70-310/320.


XML

In the past, you had several binary standards for data transfer within the confines of an application's space. COM (and DCOM) was a widely used binary standard, even among competing products such as Smalltalk. Common Object Request Broker Architecture (CORBA) was another standard in the object-oriented world, more popular with the Java crowd. Interacting between components that didn't adhere to the same standard was difficult. Complex bridges were built, but no one could accuse them of being easy to use.

XML has the potential to sweep all those problems aside. Now, in a single stream of ASCII data (encrypted if you prefer), you can communicate both data and structure in a way that any component can consume. ASCII is such a prevalent standard that there is virtually no programming language that could not read the information.

But wait, there's more! XML can be transmitted through port 80 of a Web server by piggybacking on top of HTTP. This is done through SOAP, which is basically an envelope with an agreed-on format; the contents, or body, are free-form XML. The whole concept of XML Web Services is based on this beneficial side effect.
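To make the envelope idea concrete, here is a simplified SOAP request; the body elements and the namespace are hypothetical:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetUserName xmlns="http://example.com/users">
      <UserID>12</UserID>
    </GetUserName>
  </soap:Body>
</soap:Envelope>

The envelope and body follow the agreed-on SOAP format; everything inside the body is free-form XML that the two endpoints agree on between themselves.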

Now how much would you pay? There is a price for all this flexibility, and I'm sure you know what it is: performance. Serializing objects or data into ASCII text and then deserializing it (rehydrating it) back at the other end is expensive. Furthermore, the tags XML uses to demarcate the data are considered overhead. If you have a <UserID> tag and a </UserID> closing tag surrounding the number 12, you have 17 characters of markup around 2 characters of data: roughly 90% overhead to 10% usable data. Very expensive!

That being said, with faster WAN bandwidths, faster computers, and the intelligent use of caching, XML can be a great transfer medium between disparate applications and within a single application.

XML Web Services

XML Web Services are very much in vogue these days. Although some security issues still need to be worked out, they seem to be the front-runner for future Enterprise Application Integration (EAI) and Business-to-Business (B2B) communication, possibly even assuming the role Electronic Data Interchange/Electronic Funds Transfer (EDI/EFT) previously held.

XML Web Services are essentially an abstracted version of SOAP, which is in turn the marriage of two technologies: HTTP and XML. They provide a firewall-friendly transfer medium to allow systems to communicate over the Internet or an intranet and are another option in the solution architect's toolbelt to separate the Business Layer from the User Layer.

Although the other options discussed in this chapter center on an application talking to a live user, Web Services is about an application talking to another application. This might explain why so many of the Microsoft case studies seem to combine a Web application with Web Services: often both human interfacing and programmatic interfacing are required in a robust business solution. For example, a utility company allows automatic withdrawal of funds as payment. The customer needs to be able to check payments and amounts owed, but the actual debiting of the customer's bank account would be one system talking to another.

As I'm sure you are aware, entire books are devoted to designing and constructing XML Web Services. My purpose here, in this book, is mainly to list the technology options available in the .NET Framework, along with tips on when XML Web Services might be the correct choice. It is almost certain that you will not be asked to read or write Web Services code on the 70-300 exam. However, Microsoft does have a separate exam, 70-310/320, with a primary focus on the technical details surrounding XML Web Services.
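That said, a minimal sketch of an ASP.NET (.asmx) Web Service shows how little code is involved; the class, namespace URI, and method here are hypothetical:

' The code-behind for a hypothetical Billing.asmx Web Service.
Imports System.Web.Services

<WebService(Namespace:="http://example.com/billing")> _
Public Class BillingService
    Inherits WebService

    <WebMethod()> _
    Public Function GetAmountOwed(ByVal customerId As Integer) As Decimal
        ' In a real service, this would call down into the Data Layer.
        Return 42.5D
    End Function
End Class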

.NET Remoting

Remoting rarely seems to get the same amount of coverage as Web Services (even in the exam), but when you own both ends of a connection, remoting can increase performance dramatically. Remoting has the capability to use a binary transfer, in contrast to the ASCII format used in XML Web Services. It is built into the .NET Framework and lives in the System.Runtime.Remoting namespace.

Note:

In addition to the binary transfer format, .NET Remoting also has the capability to use the SOAP format or a custom format. The transport protocol can be either HTTP or TCP.


Even though remoting is brand-new, designed from scratch for .NET, it has the feel of DCOM. One large difference between remoting and DCOM is that remoting offers many extensibility points where developers can tap into the messages being sent and verify, manipulate, or customize them anywhere along the chain. In addition, remoting generally performs better than DCOM, is simpler to set up and use, and has less overhead. A good analogy might be that remoting is to DCOM what an electric pencil sharpener is to a sharp knife.

Remoting has the following three basic types of objects (a brief hosting sketch follows the list):

  • Single call: These objects are server-activated; they handle a single request from a single user and are then disposed of. They are, by definition, stateless. The transaction is basically "get in, do your thing, and get out."

  • Singleton: These objects are also server-activated, but as you might suspect from the name, they live indefinitely and handle multiple requests from all users. A singleton object is the sort of stateful object you might use to dispense application-wide static data.

  • Client-activated: These objects live on the server, but their lifetime is controlled by the client, a single user. These objects also have the capability to be stateful. Using client-activated objects is the mode that most closely resembles DCOM.
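To tie these together, here is a minimal sketch of hosting a server-activated object over TCP; the OrderService class, port number, and endpoint name are hypothetical, and the host assumes a reference to System.Runtime.Remoting.dll:

' Hosting a SingleCall remoting object on a TCP channel.
Imports System
Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp

Public Class OrderService
    Inherits MarshalByRefObject

    Public Function GetStatus(ByVal orderId As Integer) As String
        Return "Order " & orderId.ToString() & " shipped"
    End Function
End Class

Module RemotingHost
    Sub Main()
        ' Listen for binary-formatted calls on TCP port 8085.
        ChannelServices.RegisterChannel(New TcpChannel(8085))

        ' SingleCall = a fresh, stateless object per request; use
        ' WellKnownObjectMode.Singleton for one shared instance instead.
        RemotingConfiguration.RegisterWellKnownServiceType( _
            GetType(OrderService), "OrderService.rem", _
            WellKnownObjectMode.SingleCall)

        Console.WriteLine("Server running; press Enter to stop.")
        Console.ReadLine()
    End Sub
End Module

A client would then obtain a proxy with Activator.GetObject(GetType(OrderService), "tcp://server:8085/OrderService.rem") and call GetStatus as though the object were local.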

Unlike Web Services, which are limited to passing only serializable values (externally exposed, with read/write allowed), .NET Remoting can pass the entire object, including derived and hidden object values. This capability is possible because both ends of the transaction "know" about the object's type, so it can be passed by value, completely intact.

Therefore, if performance is crucial and if both ends of the transaction are .NET clients, .NET Remoting might be the best choice.

.NET Remoting Versus XML Web Services

Table 7.1 gives you a quick look at when each technology might be the best choice for your solution.

Table 7.1. Comparing Web Services and Remoting

Feature                                            Web Services  Remoting
--------------------------------------------------------------------------
Performance                                                      X
Easy to call from other platforms                  X
Type fidelity with .NET                                          X
Simpler to implement                               X
Can return entire object versus partial object                   X
Stateful objects                                                 X
Supports raw TCP socket transfers (or HTTP SOAP)                 X


"Type fidelity" in Table 7.1 refers to object state held in properties that are not part of the publicly exposed interface. These values are lost when sending objects by using Web Services. Also, an "entire object" (the fifth item in Table 7.1) means methods (implementation) being transferred, not just publicly exposed state values.

Even with most of the Xs on the remoting side, Microsoft recommends that the default choice be XML Web Services, primarily because they are simpler to implement and are easily extensible to other platforms. If performance is crucial and you can be assured that you will be communicating between two .NET components, remoting is usually a better choice.

Note:

Although remoting does perform better than Web Services, the difference is a lot closer than you might expect. In addition, at times the performance loss is overshadowed by other factors (such as WAN latency or mainframe access delays) so that the difference is statistically insignificant. Make sure to evaluate both options under real-life conditions in your own situation.


Ultimately, time allowing, you might want to use a performance test harness that makes repeated calls to the Web Service to determine whether it falls within your design specifications.
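Such a harness can be as simple as a timing loop. This sketch assumes the hypothetical BillingService proxy class generated (by wsdl.exe or Add Web Reference) from the earlier Web Service example:

' Averaging the cost of repeated Web Service calls.
Imports System

Module PerfHarness
    Sub Main()
        Dim proxy As New BillingService()
        Dim calls As Integer = 100

        Dim start As DateTime = DateTime.Now
        For i As Integer = 1 To calls
            proxy.GetAmountOwed(42)   ' Discard the result; we only want timing
        Next
        Dim elapsed As TimeSpan = DateTime.Now.Subtract(start)

        Console.WriteLine("Average call time: {0} ms", _
            elapsed.TotalMilliseconds / calls)
    End Sub
End Module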

DataSets and Other Miscellaneous Objects

An option that often gets overlooked is the use of native .NET structures and collections. If your components are accessible to each other through early binding (that is, they reside on the same machine), you can let .NET do all the heavy lifting and pass data around by using native .NET DataSets, arrays, hash tables, collections, or even custom objects. In the new disconnected world, it's easy to load up a DataSet object, from a database or from scratch, pass it around, and then discard it or use it to update data stores. Strongly typed DataSets even offer some assurance that the data will be in a predictable format and type.

An advantage of talking in DataSets is that you have the option to bind them (at runtime) to .NET GUI controls (both Web Forms and Windows Forms); allow the user to read, delete, update, and create; and then pass the updated DataSet back to your business layer and ultimately your data layer. With disconnected data, this option is not the "no-no" it was in the days of Data Access Objects (DAO) or ActiveX Data Objects (ADO). A short sketch follows.
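To show how little ceremony is involved, here is a minimal sketch that builds a disconnected DataSet from scratch; the table and column names are hypothetical:

' Building and reading a DataSet with no database connection at all.
Imports System
Imports System.Data

Module DataSetDemo
    Sub Main()
        Dim ds As New DataSet("Orders")
        Dim table As DataTable = ds.Tables.Add("Order")
        table.Columns.Add("OrderID", GetType(Integer))
        table.Columns.Add("Amount", GetType(Decimal))

        table.Rows.Add(New Object() {1, 99.95D})
        table.Rows.Add(New Object() {2, 12.5D})

        ' The DataSet can now be bound to a grid, or passed between
        ' layers and used later to update the data store.
        For Each row As DataRow In table.Rows
            Console.WriteLine("{0}: {1}", row("OrderID"), row("Amount"))
        Next
    End Sub
End Module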


