Smart Clients

At the start of the client/server evolution, there were mainframes and dumb terminals. The terminals were dumb in that all they did was enable the user to input data and submit that data to the mainframe for processing. For a while, that was enough.

Then came the advent of the PC (personal computer), and the model shifted a bit. The shift occurred because it became apparent that if a group of personal computers could be networked together and the software run on each client as a "fat" client, the processing load could be distributed, saving expensive time on the mainframe. (Processor cycles on the mainframe were often billable.) This also gave the user a much more pleasurable experience because the software had a rich GUI (graphical user interface).

After the PC and fat client came the browser-based model. Among other things, this model allowed for single-point distribution and automatic updates of the software. This model also took away the rich client and made the PCs into pseudo dumb terminals, much like the dumb terminals described at the start of this section. Yes, computing had come full circle in its evolution and was back to the mainframe and dumb terminal. The difference was that the extremely expensive piece of equipment from IBM was done away with and replaced with PC server and client machines.

Anytime you start with something, move to something else, and then return to the starting point, you wonder why you changed in the first place. Even more so, you wonder why you came back to the thing that you determined was inadequate enough to move from in the first place. In the following sections, you will learn why the next evolution takes the best parts of the rich client model and the browser-based model and combines them into one model: the smart client.

Understanding the Smart Client

To understand a smart client, as shown in Figure 19.1, it is helpful to define the term. A smart client is an application that leverages the best features of rich-client and web-client applications. As defined by Microsoft, a smart client is characterized by the following attributes:

  • High-fidelity user experience: The smart client leverages the latest graphics technology to bring a high-fidelity experience to the user. In addition, it is personalized for each user based upon the context of the presentation.

  • Intelligent connection: The smart client is capable of working in online or offline mode. It takes advantage of local data caching and processing. In addition, the smart client is a distributed application, taking advantage of web services.

  • Information-centric: In the smart client, data access is loosely coupled, and data is easy to retrieve, cache, and post.

  • Designed for operations: Smart clients make intelligent use of local CPU processing, and are secure, centrally deployable, and versionable.

Figure 19.1. The smart client.

By looking at the earlier definition, you might think that it's quite a list of things for an application to perform. However, the decision about what type of application to build (a rich client or a web application) is not an all-or-nothing decision. A smart client application makes intelligent use of the best features of both programming paradigms. Here is a list of the most commonly included features to aid you when deciding what features to leverage in a smart client:

  • Deploy the application and updates from a centralized server.

  • Use web services to provide richer functionality.

  • Use local processing power of the device when it is advantageous.

  • Make use of online and offline functionality. This ensures that the user is always capable of being productive even when a connection to the server is not possible.

In the following sections, you will learn how to create a smart client application using each of the four bullet points just described.

Deploying Smart Client Updates from a Centralized Server

One of the first things you ask a client who is having difficulty with an application (other than "Is the power on?") is "What version do you have?" This is a necessary question to determine whether the client has the most recent version of your software. If not, you are wasting your time because the client's problem might have already been fixed in the latest version of the application.

To avoid this problem, you can deploy your application to a centralized server and tell everyone to check for updates there. This enables you to sidestep the issues that arise when a co-worker passes out the three-year-old copy of your software that she saved on a CD in her filing cabinet. However, this solves only part of the trouble. Just because someone knows that there is an update to the software doesn't mean that she will actually install it. That kind of user understands only one thing: a forced update. But how do you know that she is running an older version of the software in the first place? And how do you force her to install the update? The answer is a self-updating application. There are many methods to achieve this goal, and the next few sections describe how to do so using .NET.

Web Server Deployment

Deploying your applications from a web server is easy and can offer quite a few advantages. Some of those advantages are as follows:

  • Ensures that the client is always using the latest version: This avoids any problems with employees who don't want to upgrade. They no longer have an option.

  • Assemblies are downloaded when needed or referenced: This means that an assembly will be downloaded only when it is referenced by the client, and then only when the local copy is out of date.

  • Local copies of the assemblies are used when they're up to date: Therefore, the network is not overburdened by unnecessary downloads of the same assembly.

There are three basic ways to deploy your application from a web server: using a URL-based executable, using the Assembly.LoadFrom method, and creating a custom self-updating application. These methods are discussed individually in the sections that follow this one.

Deploying a Smart Client with a URL-Based Executable

Using a URL-based executable is very easy. Essentially all you have to do is create a virtual directory on a web server and place all of your assemblies inside it. You can then deploy your application via a web link such as http://companysite/myapplication.exe. When a user launches your application, .NET automatically searches the specified location for every referenced assembly unless instructed not to do so.

Using the Assembly.LoadFrom Method

The Assembly.LoadFrom method, found in the System.Reflection namespace, is useful in two situations. The first is when you need to load an assembly from a location that is not in .NET's assembly resolution path. The second is when you create a stub application whose sole purpose is to load the real functionality of the application. Such a stub typically contains very little code; it simply loads the main assembly and kicks off its entry point. Listing 19.1 shows a typical stub application. These types of stubs are often referred to as loaders or front-end loaders.

Listing 19.1. Stub Application Using the Assembly.LoadFrom Method
    using System;
    using System.Reflection;

    namespace SAMS.VisualCSharpDotNetUnleashed.Chapter19
    {
        /// <summary>
        /// Stub application that loads and starts the real application.
        /// </summary>
        public class StubApplicationClass
        {
            public static void Main()
            {
                // Load the assembly that contains the real application.
                Assembly assemblyContent = Assembly.LoadFrom(
                    "http://localhost/SampleApplication/SampleApplicationLibrary.dll");

                // Name of the class that contains the entry point (illustrative).
                string sClassName =
                    "SAMS.VisualCSharpDotNetUnleashed.Chapter19.SampleApplicationClass";

                // Create a Type object for the class.
                Type typeContent = assemblyContent.GetType(sClassName);

                try
                {
                    // Find the Main method and start the execution.
                    typeContent.InvokeMember("Main",
                        BindingFlags.Public | BindingFlags.InvokeMethod | BindingFlags.Static,
                        null, null, null);
                }
                catch (Exception)
                {
                    // Code omitted for brevity
                }
            }
        }
    }

Creating a Custom Self-Updating Application

Creating a custom application for self-updating can be the best solution for your needs, but can also prove to be a challenge. The first thing you need to do when creating a custom solution is decide on three important things: Where do you check for updates, when do you check for updates, and how do you check for updates? The answers to these three questions will have a great influence on how you choose to implement the custom solution.

Determining the Location to Check for Updates

Deciding where to check for updates should be fairly simple. Your code needs to check either a location on the local network or a location on an intranet or the Internet. For a network location, you can use simple file operations to check for necessary updates. For an intranet or the Internet, you will have to use something such as the HTTP protocol or web services. For more information about file operations, see Chapter 7, "File and Stream I/O and Object Persistence." For more information about web services, see Chapter 32, "Introduction to Web Services," and Chapter 33, "Using WSE 2.0."
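For the network-share case, the check can be as simple as comparing version strings with ordinary file I/O. The following sketch makes that concrete; the share path and the version-file format (a single version string) are assumptions chosen for illustration, not part of the chapter's sample code.

```csharp
using System;
using System.IO;
using System.Reflection;

// Hypothetical update check against a version file on a network share.
// The share path and file format (e.g. "1.2.0.0") are illustrative.
class NetworkUpdateCheck
{
    const string VersionFilePath = @"\\appserver\updates\version.txt";

    // Pure comparison helper, so the policy is easy to test in isolation.
    public static bool IsNewer(string serverVersion, string localVersion)
    {
        return new Version(serverVersion) > new Version(localVersion);
    }

    public static bool UpdateAvailable()
    {
        // Version string published on the server
        string serverText = File.ReadAllText(VersionFilePath).Trim();

        // Version of the currently running application
        Version local = Assembly.GetExecutingAssembly().GetName().Version;

        return IsNewer(serverText, local.ToString());
    }
}
```

The same IsNewer comparison works unchanged if the version string is later fetched over HTTP or a web service instead of a file share.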

Determining When to Check for Updates

Although when to check for updates seems like a simple question, it is doubtful that one answer fits every client. In the case of virus-checking software, for example, you might want to check for updates every five minutes, every hour, or every day. For other applications, checking daily, weekly, or every time the application starts might be appropriate. And some people don't want an application to make that decision for them at all; for them, checking for updates is a manual process, and they want to tell the application when it is convenient to check.
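One simple way to support both policies (a fixed interval or manual checks) is to drive the check from a timer whose interval the user controls, with a zero interval meaning manual checks only. This is a sketch under those assumptions; the CheckForUpdates body is a placeholder for the comparison logic discussed in the next section.

```csharp
using System;
using System.Timers;

// Sketch: the user chooses how often to check for updates; an interval of
// zero (or less) means the user triggers the check manually, e.g. from a menu.
class UpdateScheduler
{
    private readonly Timer _timer = new Timer();

    // Converts the user's chosen interval to the milliseconds Timer expects.
    public static double MinutesToMilliseconds(double minutes)
    {
        return TimeSpan.FromMinutes(minutes).TotalMilliseconds;
    }

    public void Start(double intervalMinutes)
    {
        if (intervalMinutes <= 0)
        {
            return; // manual mode: CheckForUpdates is called from a menu item
        }
        _timer.Interval = MinutesToMilliseconds(intervalMinutes);
        _timer.Elapsed += delegate { CheckForUpdates(); };
        _timer.AutoReset = true;
        _timer.Start();
    }

    public void CheckForUpdates()
    {
        // Compare local and server versions here.
        Console.WriteLine("Checking for updates at {0}", DateTime.Now);
    }
}
```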

Determining How to Check for Updates

The task of actually checking for updates can be solved in a number of ways. In the first approach, you could check the server and directly compare the versions of assemblies with the ones that the application is currently using.

Alternatively, you could use a manifest approach. In this technique, you place a manifest of the assemblies, along with their current version numbers and download locations, on the server. The software can then download the manifest and check the assemblies that reside on the local machine to see whether they are up to date. This has an advantage over the direct method when an update requires an additional assembly. For example, suppose your application starts with two assemblies and the new version requires a third. With direct comparison, the application might find that one of its two assemblies is out of date and download the update, but because it doesn't know about the third assembly, it can't check that version and therefore doesn't download the newly required file. A manifest lists every required assembly, so the new file is picked up automatically.
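The comparison the manifest enables can be sketched as follows. The manifest layout (a list of `<assembly name="..." version="..."/>` elements) is a hypothetical format chosen for illustration; any schema carrying name, version, and download location would do.

```csharp
using System;
using System.Collections.Generic;
using System.Xml;

// Sketch of the manifest comparison described above: the client flags any
// assembly whose local copy is missing or older than the server's version.
class ManifestChecker
{
    // Returns the assemblies whose local copy is missing or out of date.
    public static List<string> FindStaleAssemblies(
        XmlDocument manifest, IDictionary<string, Version> localVersions)
    {
        List<string> stale = new List<string>();
        foreach (XmlNode node in manifest.SelectNodes("//assembly"))
        {
            string name = node.Attributes["name"].Value;
            Version serverVersion = new Version(node.Attributes["version"].Value);

            Version localVersion;
            bool present = localVersions.TryGetValue(name, out localVersion);

            // A newly required assembly (not present locally) also counts as
            // stale -- the key advantage over direct version comparison.
            if (!present || serverVersion > localVersion)
            {
                stale.Add(name);
            }
        }
        return stale;
    }
}
```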

As a third option, you could use a web service to deliver the manifest. From there, the same rules apply as for the manifest download. This gives you an advantage over both of the previously described methods in that you can tailor the manifest to each user. In other words, if you have two classes of users and an update is available for one class, you could send the new manifest to that class only. When users from the other class log in, they would not receive an updated manifest and would therefore not get the update. This can be useful in the case of a beta program. For example, if you want to revoke a user's ability to test the software, you could refuse to send an updated manifest and thereby end that user's participation in the program.

After you've figured out that files need to be updated, you need to decide how to best accomplish this. When you decide to update an application, two things must happen: The download of the updated files must commence and the application cannot be left in an unusable state if the download fails. Downloading the updated files presents a few challenges: The server from which you are downloading could go down; your Internet connectivity could be interrupted; the power could go out; or the user could terminate the process. If any of these things happen, it is considered unacceptable to leave the application in any state other than usable. For that reason, the download must take place in such a way that it does not interfere with the application until the update process has actually started.

For the purposes of this discussion, the update process is considered to start after all files have been downloaded and verified to be okay. To accomplish this, you could design your download component to run as a separate service, similar to the Background Intelligent Transfer Service (BITS) that Windows Update uses. In this paradigm, you would first download the manifest and then download each assembly one at a time. After downloading an assembly, you would check off that item as complete. If at any time a failure occurs, such as the server going down, a power outage, or the user terminating the service, you could resume at the point after the last successful download when the service is restarted.
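That check-off pattern can be sketched in a few lines. The progress-file format (one completed file name per line), staging folder, and base URL are illustrative assumptions; the pending-list logic is factored out so the resume behavior is visible.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;

// Sketch of a resumable download loop: completed file names are recorded in
// a progress file, so a restart skips them and resumes where it left off.
class ResumableDownloader
{
    // Which files still need to be downloaded, given what is already done.
    public static List<string> Pending(
        IEnumerable<string> allFiles, ICollection<string> completed)
    {
        List<string> pending = new List<string>();
        foreach (string name in allFiles)
        {
            if (!completed.Contains(name))
            {
                pending.Add(name);
            }
        }
        return pending;
    }

    public static void DownloadAll(
        IEnumerable<string> fileNames, string baseUrl,
        string stagingFolder, string progressFile)
    {
        string[] done = File.Exists(progressFile)
            ? File.ReadAllLines(progressFile)
            : new string[0];

        using (WebClient client = new WebClient())
        {
            foreach (string name in Pending(fileNames, done))
            {
                client.DownloadFile(baseUrl + name, Path.Combine(stagingFolder, name));

                // Check the item off only after the download succeeds, so an
                // interruption resumes after the last completed file.
                File.AppendAllText(progressFile, name + Environment.NewLine);
            }
        }
    }
}
```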

After the chore of downloading all the necessary files comes the task of actually updating the system to the new version, and with it another set of challenges. First, if the application is running (it is assumed to be, or it wouldn't have found the updated files), how do you update files that are in use? If you terminate the application so that its files are no longer locked, you also terminate the process that was performing the update; if you don't terminate it, the files remain locked and can't be replaced.

There are two possible solutions. In the first approach, you spawn a separate process that updates the application and then terminate the existing application. The drawback is that you now need a way to update the updater itself if it ever changes. The second approach is to create a stub application that does nothing more than launch the real application. With this approach, you can download the new files into an alternative location (such as a sub folder), building a complete new version of the application while still keeping the old one (just in case something breaks). Then all you have to do is update the stub application to point to the new executable.
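The stub approach can be sketched in a few lines. The pointer-file name (current.txt), the version sub folder, and the executable name are all hypothetical choices for illustration.

```csharp
using System;
using System.Diagnostics;
using System.IO;

// Sketch of a stub launcher: a pointer file names the active version's sub
// folder, and the stub starts the real executable from there. Because the
// stub itself rarely changes, its files are never locked during an update.
class StubLauncher
{
    // Builds the path to the real executable inside the active version folder.
    public static string BuildExePath(string installRoot, string versionFolder)
    {
        return Path.Combine(Path.Combine(installRoot, versionFolder),
                            "RealApplication.exe");
    }

    static void Main()
    {
        string installRoot = AppDomain.CurrentDomain.BaseDirectory;

        // current.txt contains the active version's sub folder, e.g. "v2"
        string versionFolder = File.ReadAllText(
            Path.Combine(installRoot, "current.txt")).Trim();

        Process.Start(BuildExePath(installRoot, versionFolder));
    }
}
```

Switching to a new version is then a one-line change to current.txt, and rolling back is just as easy because the old folder is still intact.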

Make Use of Web Services for Smart Client Back-End Support

Another key point of the smart client is its capability to work in a distributed environment, which allows it to offload some of its processing to a server. One way to accomplish this is through web services, which are a way of distributing an application using platform-neutral protocols. With Visual Studio .NET, you can write an application that consumes a web service with just a few lines of code. Listing 19.2 shows the source code for a console application that connects to Amazon through its published web service API. To create an application that consumes Amazon's Keyword Search method in the Amazon Web Services (AWS) Kit version 3.0, follow these steps:


1. Start Visual Studio .NET and create a console application in a new solution.


2. Right-click on the project and select the Add Web Reference menu item, as shown in Figure 19.2.

Figure 19.2. Select the Add Web Reference menu item.


3. In the Add Web Reference dialog, shown in Figure 19.3, enter the address of the web service in the URL text box and press the Enter key.

Figure 19.3. Connect to Amazon and add a web reference.


4. When the web service is located, you will see a list of all available methods in the web service, as you saw in Figure 19.3. Click the Add Reference button to add the web reference.

The web reference is now added to the solution, and you can access its functionality just as you would any other class, as shown in Figure 19.4.

Figure 19.4. The web reference to Amazon is complete.

As shown in Listing 19.2, you create an instance of the AmazonSearchService class in the same way you create an instance of any other class. After the instance has been created, you can call the KeywordSearchRequest method, which enables you to search for anything that matches the keywords the user enters.

Listing 19.2. Connecting to Amazon's Web Service
    using System;
    // Add a using directive for the namespace that Visual Studio generates
    // for the web reference.

    namespace SAMS.VisualCSharpDotNetUnleashed.Chapter19
    {
        /// <summary>
        /// Summary description for WebServiceConsumer.
        /// </summary>
        class WebServiceConsumer
        {
            /// <summary>
            /// The main entry point for the application.
            /// </summary>
            [STAThread]
            static void Main(string[] args)
            {
                AmazonSearchService amazonSearch = new AmazonSearchService();
                KeywordRequest keywordReq = new KeywordRequest();
                keywordReq.devtag = "DJYVGQ3AW0XX8";
                keywordReq.mode = "books";
                keywordReq.type = "heavy";
                keywordReq.sort = "+pmrank";
                ProductInfo productInfo = null;

                try
                {
                    System.Console.Write("Enter the keywords and press enter: ");
                    keywordReq.keyword = System.Console.ReadLine();
                    productInfo = amazonSearch.KeywordSearchRequest(keywordReq);
                }
                catch (Exception ex)
                {
                    System.Console.WriteLine(ex.Message);
                }

                if (productInfo != null)
                {
                    System.Console.WriteLine("{0} items were returned in your search.",
                        productInfo.Details.Length);
                    for (int i = 0; i < productInfo.Details.Length; i++)
                    {
                        System.Console.WriteLine("{0}", productInfo.Details[i].ProductName);
                    }
                }

                System.Console.WriteLine("** Press the Return Key to Continue. **");
                System.Console.ReadLine();
            }
        }
    }

Web services give you the power to distribute your application. They also give you the freedom to easily publish an API to your system for external users to use. This opens up your application to a whole new world of users and developers. For more information on web services, please see Chapters 32 and 33.

Deciding Whether to Process on the Server Side or Client Side for Efficiency

In a normal web application, all the work is performed on the server. The advantage is that you can create one monster server (or a farm of servers) with enough horsepower to accomplish all of your tasks. The disadvantage is the same fact: all the work is done on the server. Depending on the situation, doing all the processing on the server can be either an advantage or a disadvantage, and being able to choose when and where that work is performed gives your application quite a bit of flexibility.

For example, suppose that you had to do some special processing on a string. In a web application, you would have to get the text, send it to the server, and wait for the server to return the modified text. This is a simple task that could have been done instantly (at least from the user's perspective), and now the user has to wait for a response from the server. From the standpoint of responsiveness, performing simple tasks locally gives the user a more pleasurable experience. Performing tasks locally also enables you to take advantage of local APIs and applications. As an example, try to use a GDI+ library on the server and have it render locally. Sorry, it can't be done.


Although web-based applications can use client-side scripting and ActiveX controls to perform some tasks locally, these techniques come with limitations. For example, scripted applications cannot read from or write to a user's local disks. In addition, client scripts cannot interact with locally installed applications such as Microsoft Word or Excel. Finally, users can disable client-side scripting as a security precaution, effectively breaking your application.

Make Use of Online and Offline Functionality

Smart clients have another advantage: They have the capability to work offline. Have you ever tried to do that with a web application? Unless you install the web server and all of its functionality locally, it will be difficult to work offline.

For many professions, it is a requirement that the system work offline. For example, imagine that you are a nurse for a homecare agency, and your job is to visit patients in their homes. When you arrive at a patient's home, do you ask to borrow the telephone line before you access the patient's records? What if you plugged your cell phone into the computer and tried to connect to the Internet, only to find that there is no signal inside the home? The only way to reliably handle these situations is to have an application that can connect to the server, download the information, and store it locally. You can then start the application later and access the information without being connected to the server.
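One simple way to get that connect, download, and store-locally behavior in .NET is to cache a DataSet as XML on the local disk. This is a minimal sketch; the table name, column, and cache-file handling are illustrative assumptions, not the agency scenario's actual schema.

```csharp
using System;
using System.Data;
using System.IO;

// Sketch: persist server data locally so the application can work offline.
// Writing the schema along with the data makes the cache self-describing,
// so it can be reloaded without a connection to the server.
class OfflineCache
{
    public static void Save(DataSet data, string cacheFile)
    {
        data.WriteXml(cacheFile, XmlWriteMode.WriteSchema);
    }

    public static DataSet Load(string cacheFile)
    {
        DataSet data = new DataSet();
        data.ReadXml(cacheFile, XmlReadMode.ReadSchema);
        return data;
    }
}
```

When connectivity returns, the locally modified DataSet can be posted back to the server, which is exactly the loosely coupled, information-centric behavior described earlier in this chapter.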

    Visual C# .NET 2003 Unleashed
    ISBN: 0672326760
    Year: 2003
    Pages: 316