Trey Research is a graphic design company that does print media and video work. As part of a typical project, Trey Research generates dozens of samples from which the client can choose. This process involves rendering source files into photorealistic three-dimensional images and animated sequences. This rendering process is computationally expensive and can't be performed on the ordinary client workstations used to design the source files.

Up to now, a single individual has managed this process: the graphic coordinator. When a sample is completed, the source files are passed along to the graphic coordinator. Usually, an individual places the source files on the internal network and notifies the coordinator by e-mail. The graphic coordinator then runs a custom batch program on a special server computer. This server computer runs the Microsoft Windows operating system and has an impressive complement of hardware, including a RAID array of SCSI drives and eight high-performance CPUs.

The Trey Research solution suffers from several problems, however, including the following:
Trey Research is planning to solve these problems on an internal level and is willing to distribute a client program internally. It would like the option, however, of adding remote use (over the Internet) in the future if it moves its Web site to an in-house server.

Key Analysis

Trey Research has the hardware it needs. What's lacking is the process. The company needs a convenient, efficient, automated workflow that routes tasks directly to the custom processing application, without requiring any human intervention. This system also needs to send a notification back to the original client when the task is completed.

This situation is quite different from the Transact.NET case study in Chapter 17. First of all, the task typically takes much longer (several hours) to complete, and the client isn't necessarily available when it's finished. Similarly, the client and server don't require instantaneous notification. They are best served by a straightforward, reliable infrastructure. For these reasons, the ideal solution is unlikely to use events with .NET Remoting. A disconnected, message-based approach is more reliable and scalable. Microsoft Message Queuing is a possibility, but it might not be needed if we can develop a centralized interface that allows all clients to submit tasks and get information about outstanding work requests.

As with the previous case studies, it is critically important to separate the various parts of the solution. For example, the code that triggers the batch-rendering process should not be embedded directly in the XML Web service, and all database access should be encapsulated by a dedicated component. In addition, it's important to realize that the rendering process isn't like the typical server-side functionality we've looked at in the two preceding case studies, which focus primarily on adding or retrieving information from a database. Unlike those tasks, the rendering process has several unique characteristics:
Finally, you must consider the security needs of the new system. Initially, it will run on an internal network, which means you can devote less energy to defending against spurious requests and denial-of-service attacks. A practical system is still needed to tie users to specific requests, however, both to prevent confusion and to protect any sensitive content.
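To make the disconnected, ticket-based workflow concrete, here is a minimal, language-agnostic sketch (written in Python purely for illustration) of the kind of centralized broker described above: clients submit rendering tasks and receive a ticket, the render server claims queued work whenever it is free, and status queries are tied to the submitting user. All of the names here (TaskBroker, RenderTask, and so on) are hypothetical; in the actual solution this interface would be exposed as an XML Web service backed by a database rather than an in-memory dictionary.

```python
import uuid
from dataclasses import dataclass, field
from enum import Enum

class TaskStatus(Enum):
    QUEUED = "queued"
    RENDERING = "rendering"
    COMPLETE = "complete"

@dataclass
class RenderTask:
    source_path: str       # location of the source files on the internal network
    submitted_by: str      # user who owns this request
    ticket: str = field(default_factory=lambda: str(uuid.uuid4()))
    status: TaskStatus = TaskStatus.QUEUED

class TaskBroker:
    """Centralized interface: clients submit tasks and later poll for status;
    the rendering server claims queued work whenever it is free. Because the
    client and server never interact directly, neither needs to be online at
    the same time as the other."""

    def __init__(self):
        self._tasks = {}   # ticket -> RenderTask (a database table in a real system)

    def submit(self, source_path, user):
        task = RenderTask(source_path, user)
        self._tasks[task.ticket] = task
        return task.ticket              # the client keeps the ticket for later polling

    def claim_next(self):
        # Called by the render server; returns the oldest queued task, if any.
        for task in self._tasks.values():
            if task.status is TaskStatus.QUEUED:
                task.status = TaskStatus.RENDERING
                return task
        return None

    def complete(self, ticket):
        self._tasks[ticket].status = TaskStatus.COMPLETE

    def status(self, ticket, user):
        # Tie each request to its submitter, preventing confusion and
        # protecting sensitive content from other users.
        task = self._tasks[ticket]
        if task.submitted_by != user:
            raise PermissionError("ticket belongs to another user")
        return task.status
```

A client would call submit, disconnect, and check the ticket's status hours later; the long-running rendering itself happens outside the broker, which only records state transitions.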