8.1 Overview


Chapter 3 described the ACE Reactor framework, which is most often used with a reactive I/O model. An application based on this model registers event handler objects that are notified by a reactor when it's possible to perform one or more desired I/O operations, such as receiving data on a socket, with a high likelihood of immediate completion. I/O operations are often performed in a single thread, driven by the reactor's event dispatching loop. Although reactive I/O is a common programming model, each thread can execute only one I/O operation at a time. The sequential nature of the I/O operations can be a bottleneck since applications that transfer large amounts of data on multiple endpoints can't use the parallelism available from the OS and/or multiple CPUs or network interfaces.

One way to alleviate the bottlenecks of reactive I/O is to use synchronous I/O in conjunction with a multithreading model, such as the thread pool model in Chapter 6 or the thread-per-connection model in Chapter 7. Multithreading can help parallelize an application's I/O operations and may improve performance. However, adding multiple threads to a design requires appropriate synchronization mechanisms to avoid concurrency hazards, such as race conditions and deadlocks [Tan92]. These additional considerations require expertise in concurrency and synchronization techniques. They also add complexity to both design and code, increasing the risk of subtle defects. Moreover, multithreading can incur non-trivial time/space overhead due to the resources needed to allocate run-time stacks, perform context switches [SS95b], and move data between CPU caches [SKT96].

A proactive I/O model is often a more scalable way to alleviate reactive I/O bottlenecks without introducing the complexity and overhead of synchronous I/O and multithreading. This model allows an application to execute I/O operations via the following two phases:

  1. The application can initiate one or more asynchronous I/O operations on multiple I/O handles in parallel without having to wait until they complete.

  2. As each operation completes, the OS notifies an application-defined completion handler that then processes the results from the completed I/O operation.

The two phases of the proactive I/O model are essentially the inverse of those in the reactive I/O model, in which an application

  1. Uses an event demultiplexer to determine when an I/O operation is possible and likely to complete immediately, and then

  2. Performs the operation synchronously

In addition to improving application scalability via asynchrony, the proactive I/O model can offer other benefits, depending on the platform's implementation of asynchronous I/O. For example, if multiple asynchronous I/O operations can be initiated simultaneously and each operation carries extended information, such as file positions for file I/O, the OS can optimize its internal buffering strategy to avoid unnecessary data copies. It can also optimize file I/O performance by reordering operations to minimize disk head movement and/or increase cache hit rates.

The ACE Proactor framework simplifies the development of programs that use the proactive I/O model. In this context, the ACE Proactor framework is responsible for

  • Initiating asynchronous I/O operations

  • Saving each operation's arguments and relaying them to the completion handler

  • Waiting for completion events that indicate these operations have finished

  • Demultiplexing the completion events to their associated completion handlers and

  • Dispatching to hook methods on the handlers to process the events in an application-defined manner

In addition to its I/O-related capabilities, the ACE Proactor framework offers the same timer queue mechanisms as the ACE Reactor framework, described in Section 3.4.

This chapter describes the following ACE Proactor framework classes:

  • ACE_Handler: Defines the interface for receiving the results of asynchronous I/O operations and handling timer expirations.

  • ACE_Asynch_Read_Stream, ACE_Asynch_Write_Stream, ACE_Asynch_Result: Initiate asynchronous read and write operations on an I/O stream and associate each operation with an ACE_Handler object that will receive its results.

  • ACE_Asynch_Acceptor, ACE_Asynch_Connector: An implementation of the Acceptor-Connector pattern that establishes new TCP/IP connections asynchronously.

  • ACE_Service_Handler: Defines the target of the ACE_Asynch_Acceptor and ACE_Asynch_Connector connection factories and provides the hook methods to initialize a TCP/IP-connected service.

  • ACE_Proactor: Manages timers and asynchronous I/O completion event demultiplexing. This class is analogous to the ACE_Reactor class in the ACE Reactor framework.

The most important relationships between the classes in the ACE Proactor framework are shown in Figure 8.1 (page 260). These classes play the following roles in accordance with the Proactor pattern [POSA2]:

Figure 8.1. The ACE Proactor Framework Classes

  • Asynchronous I/O infrastructure layer classes perform application-independent strategies that initiate asynchronous I/O operations, demultiplex completion events to their completion handlers, and then dispatch the associated completion handler hook methods. The infrastructure layer classes in the ACE Proactor framework include ACE_Asynch_Acceptor, ACE_Asynch_Connector, ACE_Asynch_Result, ACE_Asynch_Read_Stream, ACE_Asynch_Write_Stream, and various implementations of ACE_Proactor. The infrastructure layer also uses the ACE_Time_Value and ACE timer queue classes from Sections 3.2 and 3.4.

  • Application layer classes include completion handlers that perform application-defined processing in their hook methods. In the ACE Proactor framework, these classes are descendants of ACE_Handler and/or ACE_Service_Handler.

The power of the ACE Proactor framework comes from the separation of concerns between its infrastructure classes and application classes. By decoupling completion demultiplexing and dispatching mechanisms from application-defined event processing policies, the ACE Proactor framework provides the following benefits:

  • Improved portability. Applications can take advantage of the proactive I/O model on many platforms that have diverse asynchronous I/O mechanisms. The framework uses overlapped I/O on Windows (which requires Windows NT 4.0 or higher) and the Asynchronous I/O (AIO) option of the POSIX.4 Realtime Extension standard [POS95] on platforms that implement it, including HP-UX, IRIX, Linux, LynxOS, and Solaris.

  • Automated completion detection, demultiplexing, and dispatching. The ACE Proactor framework isolates the native OS I/O initiation and completion demultiplexing APIs, as well as timer support, in infrastructure layer framework classes. Applications can use these object-oriented mechanisms to initiate asynchronous operations and need only implement application-defined completion handlers.

  • Transparent extensibility. As shown in Section 8.5, the ACE Proactor framework uses the Bridge pattern [GoF] to export an interface with uniform, well-defined behavior. This design allows the framework internals to change and adapt to varying OS-provided asynchronous I/O implementations and shortcomings without requiring any application-layer changes.

  • Increased reuse and fewer error-prone programming details. By separating asynchronous I/O mechanisms from application-defined policies and behavior, the ACE Proactor framework can be reused across many diverse application domains.

  • Thread safety. Applications can use the I/O parallelism offered by the OS platform without complicated application-level synchronization strategies. When operation initiation and completion handling are processed in a single thread, only simple data access rules apply, such as "don't manipulate a buffer that's been given to the OS for I/O until that I/O completes."

Like the ACE Reactor framework, the ACE Proactor framework is a whitebox framework, since networked application event handlers must descend from ACE_Handler.

The following sections motivate and describe the capabilities of each class in the ACE Proactor framework. They also illustrate how this framework can be used to apply asynchronous I/O to our client logging daemon. If you aren't familiar with the Proactor pattern from POSA2, we recommend reading about it before delving into the detailed examples in this chapter.
