Defining the Problem

Trey Research is a graphic design company that does print media and video work. As part of a typical project, Trey Research generates dozens of samples from which the client can choose. This process involves rendering source files into photorealistic three-dimensional images and animated sequences. This rendering process is computationally expensive and can't be performed on the ordinary client workstations used to design the source files. Up to now, a single individual has managed this process: the graphic coordinator.

When a sample is completed, the source files are passed along to the graphic coordinator. Usually, the designer places the source files on the internal network and notifies the coordinator by e-mail. The graphic coordinator then runs a custom batch program on a special server computer. This server computer runs the Microsoft Windows operating system and has an impressive complement of hardware, including a RAID array of SCSI drives and eight high-performance CPUs.

The Trey Research solution suffers from several problems, however, including the following:

  • The graphic coordinator is often so busy managing the transfer of files for various projects that little time is left for other work.

  • The rendering process is long, and the projects are time-sensitive. Unfortunately, there is no way to start the rendering process unless the graphic coordinator is available, which leads to some last-minute, late-night work binges.

  • There's no automated way to track projects as they are submitted and completed. The graphic coordinator is completely responsible for fielding questions about in-progress rendering operations, completed tasks, and unrecoverable errors. Sometimes projects are submitted more than once, leading to an increased burden on the server.

  • There's no way to attribute projects to their owners. The graphic coordinator is forced to spend extra time tracking down the appropriate individual when a rendering operation is complete, and sometimes the results are delivered to the wrong people.

  • The graphic coordinator retains critical pieces of information about the system and current workflow. If the graphic coordinator is ill (or worse yet, leaves the company), work is interrupted.

Trey Research is planning to solve these problems on an internal level and is willing to distribute a client program internally. It would like the option, however, of adding remote use (over the Internet) in the future if it moves its Web site to an in-house server.

Key Analysis

Trey Research has the hardware it needs. What's lacking is the process. The company needs a convenient, efficient, automated workflow that can route tasks directly to the custom processing application, without requiring any human intervention. This system also needs to send a notification back to the original client when the task is completed.

This situation is quite a bit different from the Transact.NET case study in Chapter 17. First of all, the task typically takes much longer (several hours) to complete, and the client isn't necessarily available when it's finished. Furthermore, the client and server don't require instantaneous notification. They are best served by a straightforward, reliable infrastructure. For that reason, the ideal solution is unlikely to use events with .NET Remoting. A disconnected, message-based approach is more reliable and scalable. Microsoft Message Queuing is a possibility, but it might not be needed if we can develop a centralized interface that allows all clients to submit tasks and get information about outstanding work requests.
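The centralized interface described above can be pictured as a small task store: clients submit work and receive a ticket, then poll for status later, so neither side needs the other to be online. The following is a minimal sketch in Python (the book's actual solution is an XML Web service; all class and method names here are hypothetical):

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class RenderTask:
    """One submitted rendering job (illustrative fields only)."""
    owner: str
    source_file: str
    status: str = "Queued"   # Queued -> Rendering -> Complete / Failed
    task_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class TaskService:
    """Stands in for the centralized submission interface."""

    def __init__(self):
        self._tasks = {}

    def submit(self, owner, source_file):
        # Record the task and hand back a ticket the client keeps.
        task = RenderTask(owner, source_file)
        self._tasks[task.task_id] = task
        return task.task_id

    def get_status(self, task_id):
        # Clients poll at their convenience; no live connection needed.
        return self._tasks[task_id].status
```

Because the client only holds a ticket, it can disconnect for hours and check back later, which matches the long-running, non-instantaneous nature of the rendering work.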

As with the previous case studies, it is critically important to separate the various parts of the solution. For example, the code that triggers the batch-rendering process should not be directly embedded in the XML Web service, and all database access should be encapsulated by a dedicated component. In addition, it's important to realize that the rendering process isn't like the typical server-side functionality we've looked at in the two preceding case studies, which focus primarily on adding information to or retrieving it from a database. Unlike these tasks, the rendering process has several unique characteristics:

  • It takes a significant amount of time.

    There's no good reason to tie up an XML Web service or a remote component for this amount of time, because long-running calls can quickly exhaust the available thread pool, leaving the server unable to handle incoming requests.

  • It is computationally expensive.

    Because this process has specific needs and imposes an increased demand on the server, it should be isolated from other parts of the system. This prevents a software error in the rendering process from disrupting the rest of the system.

  • It is not always available.

    Trey Research can shut down the rendering program when it needs the server for another task. This shouldn't stop other users from submitting requests, checking the progress of existing requests, and downloading rendered files.
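These three characteristics all point to the same design: the rendering engine should run as a separate worker that drains a shared queue of submitted tasks. Stopping the worker then never blocks submissions; queued tasks simply wait until it restarts. A rough sketch of that decoupling, with hypothetical names and an in-memory queue standing in for the real persistent store:

```python
from collections import deque

class RenderQueue:
    """Decouples task submission from the rendering worker."""

    def __init__(self):
        self._pending = deque()
        self._completed = []

    def submit(self, task):
        # Always available to clients, even when the worker is stopped.
        self._pending.append(task)

    def work_one(self):
        # Called only while the rendering worker is running.
        if not self._pending:
            return None
        task = self._pending.popleft()
        # ... the long-running render itself would happen here ...
        self._completed.append(task)
        return task

queue = RenderQueue()
queue.submit("projectA.max")   # worker offline: request still accepted
queue.submit("projectB.max")
done = queue.work_one()        # worker restarts and drains the queue
```

Because the queue, not the Web service, holds the work, Trey Research can shut down the rendering program for other tasks without interrupting submissions or status queries.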

Finally, you must consider the security needs of the new system. Initially, it will run on an internal network, which means you can devote less energy to defending against spurious requests and denial-of-service attacks. A practical system is still needed to tie users to specific requests, however, both to prevent confusion and to protect any sensitive content.
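The "tie users to specific requests" requirement can be as simple as stamping each submission with its owner and refusing status queries from anyone else. The sketch below illustrates the idea; the names and the in-memory store are illustrative only, and a production system would authenticate the caller rather than trust a passed-in user name:

```python
import uuid

class SecureTaskStore:
    """Associates each submitted request with its owner."""

    def __init__(self):
        self._owners = {}

    def submit(self, user):
        # Stamp the new request with the submitting user.
        ticket = str(uuid.uuid4())
        self._owners[ticket] = user
        return ticket

    def check(self, ticket, user):
        # Refuse lookups by anyone other than the owner.
        if self._owners.get(ticket) != user:
            raise PermissionError("ticket does not belong to this user")
        return "Queued"
```

Even on a trusted internal network, this owner check prevents results from being delivered to the wrong people and keeps duplicate or misattributed submissions traceable.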



Microsoft® .NET Distributed Applications: Integrating XML Web Services and .NET Remoting (Pro-Developer)
ISBN: 0735619336
Year: 2005
Pages: 174
