Chapter 11: Using ATL Server Components in Stand-Alone Applications


MANY OF THE CHALLENGES you'll face while developing a Web application are challenges you'll encounter in other applications as well. This chapter deals with reusing ATL Server components in the context of non-Web applications.

The ATL Server framework was designed for high-performance, highly scalable Web applications. However, flexibility and modularity were always priorities, so you can use several powerful ATL Server components outside the Web context. In this chapter we demonstrate this with two of the most useful components: the ATL Server thread pool and the stencil processor.

Reusing the ATL Server Thread Pool

Many of today's applications require multithreading. The most common scenario is a server application that processes multiple requests from multiple clients at the same time. This approach is particularly useful when processing a request involves idle time, that is, time when the processor isn't being used, such as while waiting for a database server to return the response to a query or for a timeout to occur. Using multithreading, the total waiting time (the sum of the wait times for all the clients) is dramatically reduced.

Multithreading can be very useful in client applications as well as in server applications. Consider the following scenario, for example: A desktop application is used to query information about customers. The user could send a query and then wait for the response for each customer, or the user could use the wait time to send new requests. Of course, the second approach is more productive. This scenario is implemented in this section's sample (the StandaloneThreadpool sample).

General Considerations

A simple approach to the multithreading problem is to spawn a thread for each job that starts. This might work very well for a desktop application, assuming that the user won't enter the information required to start a new job very fast and that the average waiting time for completing the jobs isn't too long. However, if the application is on a server, if the user is an exceptionally fast typist (or can batch commands), or if completing a job takes a very long time, this approach won't work very well. The reason is that, at the operating system level, many threads mean many context switches (plus the overhead of creating and destroying all those threads), and context switching is a rather time-consuming operation. So, instead of increasing, the overall performance might very well stay the same or even drop.

This is where a thread pool proves useful. A thread pool is a mechanism that's able to queue a large number of jobs for execution on a limited number of threads, and then process the jobs on a first-come, first-served basis as the worker threads become available. A thread pool usually provides an interface for the following operations:

  • Initialization (usually by specifying the number of threads to be created in the thread pool)

  • Posting a job

  • Resizing the thread pool (possibly while running)

  • Shutting down the thread pool

By using a thread pool, the system won't get overwhelmed by the number of spawned threads, and the jobs are still executed in an asynchronous manner. The user doesn't even need to know that some of his or her requests are queued and processed later.
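The ATL Server CThreadPool class, which we discuss in the next section, exposes each of these operations directly. Here is a minimal preview (CMyWorker and request are placeholders, not part of the sample):

    CThreadPool<CMyWorker> pool;    // CMyWorker implements the worker archetype described later

    pool.Initialize(NULL, 4);       // initialization: no worker parameter, 4 threads
    pool.QueueRequest(request);     // posting a job (request is a CMyWorker::RequestType value)
    pool.SetSize(8);                // resizing the pool while it is running
    pool.Shutdown();                // shutting down the thread pool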

The ATL Server Thread Pool Class

In this section we discuss how the ATL Server thread pool works. This thread pool is based on I/O completion ports, and it's defined in the file atlutil.h as follows:

    template <class Worker, class ThreadTraits=DefaultThreadTraits>
    class CThreadPool : public IThreadPoolConfig
Note  

For details on the I/O completion ports mechanism, or for the ThreadTraits template parameter, please see the MSDN documentation (ms-help://MS.VSCC/MS.MSDNVS/fileio/filesio_4z1v.htm and ms-help://MS.VSCC/MS.MSDNVS/vclib/html/vclrfCThreadPool.htm).

Here you'll focus on the worker template parameter (aka the archetype). The thread pool class creates an object of this type for each thread in the thread pool. This object knows how to deal with the specific needs of the application, so the CThreadPool class itself doesn't need to care about the application's specifics.

First of all, the implementation of the worker archetype has to define a type named RequestType. RequestType is a token of information that can be posted to the thread pool and that's enough to identify a job once the thread pool has found an available thread for it. Due to the I/O completion ports-based implementation of the CThreadPool class, this token has to be of a type that can be safely cast to (and from) ULONG_PTR.

Happily, ULONG_PTR is a type perfectly compatible with a pointer type (it's intended for casting a pointer to an unsigned long for pointer arithmetic) and, even more happily, all pointers are the same size. Therefore, the worker implementation can easily define RequestType to be any pointer, even a pointer to whatever struct or class you use to contain a job description. This is what the default ATL Server usage of CThreadPool does in the CIsapiWorker class (RequestType is an AtlServerRequest*) and also what the StandaloneThreadpool sample does (which we discuss later in this chapter). Just one note before going further: Although a pointer is a very convenient underlying type for RequestType, the type doesn't have to be a pointer. For example, it could very well be the ordinal index of the job in an array of jobs.
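For instance, a worker that keeps its job descriptions in an application-owned array could define the token as an index instead of a pointer (purely illustrative; the sample itself uses a pointer):

    // Inside the worker implementation:
    typedef ULONG_PTR RequestType;   // e.g., the ordinal index of the job in a job array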

Once the RequestType is defined, the worker class should implement the following methods:

    BOOL Initialize(void* pvWorkerParam);
    void Execute(RequestType request, void* pvWorkerParam, OVERLAPPED* pOverlapped);
    void Terminate(void* pvWorkerParam);

Whenever a thread is idle, the thread pool class fetches a job description token (a RequestType) from the queue and passes it to the idle thread's worker object by invoking Execute with the token as a parameter.

Note  

pvWorkerParam is a global parameter that can be passed in when the CThreadPool class is initialized and that is forwarded to all the calls made to the worker objects of all the threads in the thread pool. This parameter could be a pointer to some global configuration object or some global service provider object needed during job processing, but it can be NULL if it isn't needed.
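For example, a sketch of how such a parameter might flow (CAppConfig and CMyWorker are illustrative placeholders, not part of the sample):

    struct CAppConfig { DWORD dwQueryTimeoutSec; };   // hypothetical global settings

    CAppConfig config = { 30 };
    CThreadPool<CMyWorker> pool;                      // CMyWorker implements the worker archetype
    pool.Initialize(&config);                         // pvWorkerParam = &config

    // Every subsequent call into the worker objects receives the same pointer back:
    //     BOOL CMyWorker::Initialize(void* pvWorkerParam)
    //     {
    //         CAppConfig* pConfig = static_cast<CAppConfig*>(pvWorkerParam);
    //         ...
    //     }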

Now, to use the CThreadPool class in any application, you have to perform the following steps:

  1. Define the data structure that identifies a job to be processed asynchronously.

  2. Implement the worker archetype and perform the real job execution in the Execute method of the implementation.

  3. Instantiate a CThreadPool specialization based on the worker class just defined.

  4. Start using this specialized instance.

This is exactly what the sample code does. In the next section you'll take a step-by-step look at how these steps are performed.

ATL Server Thread Pool Class Sample

The scenario is the following: In order to better help its customers, a customer support center needs an application that allows retrieval of customer-related information from a database (once the customer has identified him- or herself). Assuming that the database retrieval takes a few seconds, maybe minutes, the operator could help one customer while looking up the information for another.

Overall, the application will take the name of the customer as input (say, Suzy Q) and return information about Suzy, as follows:

    Lookup for Suzy Q SUCCEEDED:
            Age: 56
            Location: Redmond, WA
            Product: Sport shoes
    ------------------

A lookup job can be described by the name of the customer to look for. Once the lookup is completed, you need a way to notify the main application thread of the completion so that the results can be displayed to the operator. Therefore, the job can be described by a structure defined as follows:

    struct stJobDescriptor
    {
        CStringA                 strUserName;
        JOB_COMPLETION_CALLBACK  pfnCompletionCallback;
    };

Also, a way of storing the lookup results would help:

    struct stResultDescriptor
    {
        CStringA    strUserName;
        int         nAge;
        CStringA    strLocation;
        CStringA    strProduct;
    };

pfnCompletionCallback is the completion notification mechanism. It's a pointer to a routine defined as follows:

    typedef void (*JOB_COMPLETION_CALLBACK)(HRESULT hRet, stResultDescriptor& result);

Whenever a job is complete, the completion routine associated with the job will be invoked with the return code and the results of the lookup (hRet and result) as parameters.
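A sketch of what such a completion routine might look like, assuming it simply prints the results in the format shown earlier (the routine name is hypothetical):

    void DisplayLookupResult(HRESULT hRet, stResultDescriptor& result)
    {
        if (SUCCEEDED(hRet))
        {
            printf("Lookup for %s SUCCEEDED:\n", result.strUserName.GetString());
            printf("\tAge: %d\n", result.nAge);
            printf("\tLocation: %s\n", result.strLocation.GetString());
            printf("\tProduct: %s\n", result.strProduct.GetString());
        }
        else
        {
            printf("Lookup for %s FAILED:\n", result.strUserName.GetString());
        }
        printf("------------------\n");
    }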

Once the job description is complete, you move to step 2, which is implementing the worker archetype. (Please see the WorkingThreadPool.h file in the sample's directory for the code discussed here.)

The worker implementation (CSampleDatabaseWorker) will start by defining RequestType:

    public:
        typedef stJobDescriptor* RequestType;

Because the CThreadPool class uses Worker::RequestType in its implementation, it's mandatory that this definition appear at public scope.

For this simple case, there's no need for initialization or termination code, so the Initialize and Terminate implementations don't do anything. However, these functions are called as a thread in the thread pool is created or terminated, respectively; they're not invoked for each job. Therefore, they are the best place to instantiate/release per-thread cached objects (such as an ISAXXMLReader pointer, as in the CIsapiWorker class in atlisapi.h, or a database connection object) that will be used during job processing.
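For example, a worker that needs a per-thread database connection might cache it like this (a sketch only; CMyDatabaseConnection is an illustrative stand-in, not part of the sample):

    struct CMyDatabaseConnection        // illustrative stand-in for a real connection type
    {
        bool Open()  { /* connect... */ return true; }
        void Close() { /* disconnect... */ }
    };

    class CMyDatabaseWorker
    {
    public:
        typedef stJobDescriptor* RequestType;

        BOOL Initialize(void* /*pvWorkerParam*/)
        {
            // Called once per pool thread: open the connection this thread will reuse.
            m_pConnection = new CMyDatabaseConnection;
            return m_pConnection->Open() ? TRUE : FALSE;
        }

        void Execute(RequestType pRequest, void* pvWorkerParam, OVERLAPPED* pOverlapped)
        {
            // Every job handled by this thread reuses m_pConnection instead of
            // paying the connection setup cost once per request.
        }

        void Terminate(void* /*pvWorkerParam*/)
        {
            // Called once as the pool thread shuts down: release the cached connection.
            m_pConnection->Close();
            delete m_pConnection;
            m_pConnection = NULL;
        }

    private:
        CMyDatabaseConnection* m_pConnection;
    };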

Now for the main job processing routine: Execute. It takes a Worker::RequestType object as a parameter, and therefore it can be safely implemented as follows:

    void Execute(stJobDescriptor* pRequestInfo, void* pvParam, OVERLAPPED* pOverlapped)

because in this implementation RequestType is a pointer to a stJobDescriptor structure.

This sample code simulates a time-consuming database call by sleeping for 10 seconds, then picking up some random results from a predefined set. To increase the realism of the simulation, some calls will fail.
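The simulation part of Execute might look roughly like the following sketch (not the sample's exact code; the predefined data set and the failure rate are assumptions):

    HRESULT hrPseudoDBResult = S_OK;
    stResultDescriptor result;
    result.strUserName = pRequestInfo->strUserName;

    Sleep(10000);                                // simulate a slow database round-trip

    if ((rand() % 4) == 0)                       // make some calls fail, for realism
    {
        hrPseudoDBResult = E_FAIL;
    }
    else
    {
        result.nAge = 20 + rand() % 50;          // pick pseudo-random results
        result.strLocation = "Redmond, WA";      // from a predefined set
        result.strProduct = "Sport shoes";
    }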

The important part comes after the database call simulation, at the end of the Execute implementation:

    if (pRequestInfo->pfnCompletionCallback)
    {
        pRequestInfo->pfnCompletionCallback(hrPseudoDBResult, result);
    }

The callback routine for displaying the search results is invoked, taking as parameters the lookup return code and the results of the search, if any.

Now that the implementation of the worker archetype is complete, you can move to steps 3 and 4, instantiating the thread pool and using it. To clarify the code, use the following definition:

    typedef CThreadPool<CSampleDatabaseWorker> CSampleProcessingPool;

You then instantiate and use the thread pool in the main function of the StandaloneThreadpool.cpp file, as shown here:

    CSampleProcessingPool threadPool;

    // Initialize the thread pool to 4 threads per CPU
    threadPool.Initialize(0, -4);

    // Queuing a new request
    if (!threadPool.QueueRequest(pJobDesc))

    // Shutting down the thread pool
    threadPool.Shutdown();
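Filling in the gap around QueueRequest, queuing a single lookup might look like the following sketch (the callback name and the cleanup convention are assumptions for illustration, not necessarily the sample's exact choices):

    stJobDescriptor* pJobDesc = new stJobDescriptor;
    pJobDesc->strUserName = "Suzy Q";                       // in the sample, this comes from console input
    pJobDesc->pfnCompletionCallback = DisplayLookupResult;  // hypothetical callback, as sketched earlier

    if (!threadPool.QueueRequest(pJobDesc))
    {
        // The job never reached the queue, so no worker thread will ever see it;
        // whoever allocated the descriptor has to reclaim it.
        delete pJobDesc;
    }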

Results and Conclusions

After running the sample application with a few customer names, the screen looks like this:

    Enter customer name, then <ENTER> to lookup, or 'Q' to quit:John Doe
    Lookup job started...
    Enter customer name, then <ENTER> to lookup, or 'Q' to quit:Suzy Q
    Lookup job started...
    Enter customer name, then <ENTER> to lookup, or 'Q' to quit:PRANISH KUMAR
    Lookup job started...
    Enter customer name, then <ENTER> to lookup, or 'Q' to quit:
    Lookup for John Doe SUCCEEDED:
            Age: 11
            Location: Boise, ID
            Product: Sport shoes
    ------------------
    Lookup for Suzy Q SUCCEEDED:
            Age: 56
            Location: Redmond, CA
            Product: Sport shoes
    ------------------
    Lookup for PRANISH KUMAR FAILED:
    ------------------

This should show pretty clearly that the jobs were executed simultaneously.

What isn't shown on the output screen, but can easily be verified, is the time difference between this solution and the serialized one (the one-by-one execution of the jobs). Had these calls been serialized, the total processing time would have been ~30 seconds (because you know each call takes ~10 seconds to process). Using the thread pool, the total processing time is around 12 seconds.

Of course, this result isn't typical for a database client; the real result depends a lot on the application specifics. If job processing involves long idle times (e.g., querying a database server), then more threads in the thread pool might be helpful. On the other hand, if job processing is CPU-intensive and involves many computations, then a large number of threads in the pool might actually hurt performance.

Here's a summary of what you've learned in this section:

  • ATL Server provides a very flexible thread pool class that you can easily use in generic applications.

  • The model for using this thread pool requires an implementation of the worker archetype, an instance of which will live in each of the threads in the pool.

  • This per-thread instance will perform the actual job processing.

  • The worker implementation is also a good place to store per-thread processing helpers, such as an ISAXXMLReader pointer or a database connection object.

  • The developer is responsible for collecting the job processing results (either by using them during processing or by saving them at the end of the Execute method).



