Motivation

Asynchronous I/O operations are handled in two steps: initiation and completion. Since multiple steps and classes are involved, there must be a way to demultiplex the completion events and efficiently associate each completion event with the operation that completed and the completion handler that will process the result. The diversity of OS asynchronous I/O facilities plays a deeper role here than in the reactive I/O model.
Thus, the chain of knowledge concerning platform-specific mechanisms and data structures runs from initiation operations through dispatching and into completion handling. In addition to being complicated and hard to reimplement continually, proactive I/O designs can easily become tightly coupled to a particular platform's mechanisms. To resolve these issues and provide a portable and flexible completion event demultiplexing and dispatching facility, the ACE Proactor framework defines the ACE_Proactor class.

Class Capabilities

ACE_Proactor implements the Facade pattern [GoF] to define an interface that applications can use to access the various ACE Proactor framework features portably and flexibly. This class provides the following capabilities:
The interface for ACE_Proactor is shown in Figure 8.8 (page 288). This class has a rich interface that exports all the features in the ACE Proactor framework. We therefore group its method descriptions into the four categories described below.

Figure 8.8. The ACE_Proactor Class
1. Life cycle management methods. The following methods initialize, destroy, and access an ACE_Proactor:
The ACE_Proactor can be used in two ways:
2. Event loop management methods. The ACE Proactor framework supports inversion of control. Similar to the ACE Reactor framework, ACE_Proactor implements the following event loop methods that control application completion handler dispatching:
The ACE Proactor event loop is separate from that provided by the ACE Reactor framework. To use both ACE Reactor and ACE Proactor event loops in the same application and remain portable across all asynchronous I/O platforms, the two event loops must be executed in separate threads. However, the Windows implementation of ACE_Proactor can register its I/O completion port handle with an ACE_WFMO_Reactor instance to tie the two event loop mechanisms together, allowing them both to be used in one thread. Sidebar 58 (page 290) describes how to do this.

3. Timer management methods. By default, ACE_Proactor uses the ACE_Timer_Heap timer queue mechanism described in Section 3.4 to schedule and dispatch event handlers in accordance with their timeout deadlines. The timer management methods exposed by ACE_Proactor include:
When a timer expires, ACE_Handler::handle_time_out() (page 271) is dispatched on the registered handler.

4. I/O operation facilitator methods. The ACE_Proactor class has visibility into the platform's asynchronous I/O implementation details that are useful for operation initiation and completion event processing. Making ACE_Proactor the central mediator of this platform-specific knowledge keeps the other ACE Proactor framework classes decoupled from platform details. In particular, the ACE Proactor framework uses the Bridge pattern [GoF]. ACE_Asynch_Read_Stream and ACE_Asynch_Write_Stream use the Bridge pattern to access flexible, platform-specific implementations of their I/O operation factories. Since ACE_Proactor is the mediator of this platform-specific knowledge, it defines the following methods used by the ACE_Asynch_Read_Stream and ACE_Asynch_Write_Stream classes:
As seen in Figure 8.8, the ACE_Proactor class refers to an object of type ACE_Proactor_Impl, similar to the design of the ACE Reactor framework shown in Figure 4.1 (page 89). All work dependent on platform-specific mechanisms is forwarded to the Proactor implementation class for handling. We briefly describe the platform-specific ACE Proactor framework implementations below.

The ACE_WIN32_Proactor Class

ACE_WIN32_Proactor is the ACE_Proactor implementation on Windows. This class works on Windows NT 4.0 and newer Windows platforms, such as Windows 2000 and Windows XP. It doesn't work on Windows 95, 98, Me, or CE, however, since these platforms don't support asynchronous I/O.

Implementation overview. ACE_WIN32_Proactor uses an I/O completion port for completion event detection. When initializing an asynchronous operation factory, such as ACE_Asynch_Read_Stream or ACE_Asynch_Write_Stream, the I/O handle is associated with the Proactor's I/O completion port. In this implementation, the Windows GetQueuedCompletionStatus() function paces the event loop.
All the Result classes defined for use with the ACE_WIN32_Proactor are derived from the Windows OVERLAPPED structure. Additional information is added to each, depending on the operation being performed. When GetQueuedCompletionStatus() returns a pointer to the completed operation's OVERLAPPED structure, the ACE_WIN32_Proactor converts it to a pointer to a Result object. This design allows completions to be dispatched efficiently to the correct ACE_Handler-derived completion handler when I/O operations complete. The Implementation section of the Proactor pattern in POSA2 illustrates how to implement a proactor using the Windows asynchrony mechanisms and the Asynchronous Completion Token pattern.

Concurrency considerations. Multiple threads can execute the event loop of an ACE_WIN32_Proactor simultaneously. Since all event registration and dispatching is handled by the I/O completion port mechanism, and not by the ACE_WIN32_Proactor itself, there's no need to synchronize access to registration-related data structures, as in the ACE_WFMO_Reactor implementation (page 106).

Timer queue expiry management is handled in a separate thread that's managed by the ACE_WIN32_Proactor. When a timer expires, the timeout mechanism uses the PostQueuedCompletionStatus() function to post a completion to the proactor's I/O completion port. This design cleanly integrates the timer mechanism with the normal completion dispatching mechanism. It also ensures that only one thread is awakened to dispatch the timer, since all completion-detection threads wait only for completion events and needn't worry about waking up for a scheduled timer expiration.

The ACE_POSIX_Proactor Class

The ACE Proactor implementations on POSIX systems present multiple mechanisms for initiating I/O operations and detecting their completions. Moreover, Sun's Solaris Operating Environment offers its own proprietary version of asynchronous I/O.
On Solaris 2.6 and above, the performance of the Sun-specific asynchronous I/O functions is significantly higher than that of Solaris's POSIX.4 AIO implementation. To take advantage of this performance improvement, ACE also encapsulates this mechanism in a separate set of classes.

Implementation overview. The POSIX implementations of asynchronous I/O use a control block (struct aiocb) to identify each asynchronous I/O request and its controlling information. Each aiocb can be associated with only one I/O request at a time. The Sun-specific asynchronous I/O facility uses an additional structure named aio_result_t. Although the encapsulated POSIX asynchronous I/O mechanisms support read() and write() operations, they don't support any TCP/IP connection-related operations. To support the functions of ACE_Asynch_Acceptor and ACE_Asynch_Connector, a separate thread is used to perform connection-related operations. This asynchronous connection emulation is described in Sidebar 57 (page 283). The three variants of the ACE_POSIX_Proactor implementation are described in the following table:
Concurrency considerations. The ACE_POSIX_SIG_Proactor is the default proactor implementation on POSIX.4 AIO-enabled platforms. Its completion event demultiplexing mechanism uses the sigtimedwait() function. Each ACE_POSIX_SIG_Proactor instance can specify the set of signals to use with sigtimedwait(). To use multiple threads at different priorities with different ACE_POSIX_SIG_Proactor instances, therefore, each instance should use a different signal, or set of signals.

Limitations and characteristics of some platforms directly affect which ACE_POSIX_Proactor implementation can be used. On Linux, for instance, threads are actually cloned processes. Since signals can't be sent across processes, and asynchronous I/O operations and Proactor timer expirations are both implemented using threads, the ACE_POSIX_SIG_Proactor doesn't work well on Linux. Thus, ACE_POSIX_AIOCB_Proactor is the default proactor implementation on Linux. The aio_suspend() demultiplexing mechanism used in ACE_POSIX_AIOCB_Proactor is thread safe, so multiple threads can run its event loop simultaneously.

Example

The AIO_CLD_Connector reconnection mechanism was mentioned in the discussion of the AIO_Output_Handler::handle_read_stream() method (page 276). The reconnection mechanism is initiated using the AIO_CLD_Connector::reconnect() method below:

int reconnect (void) { return connect (remote_addr_); }

This method simply initiates a new asynchronous connection request to the server logging daemon. An exponential backoff strategy is used to avoid continually initiating connection attempts when, for example, there's no server logging daemon listening. We use the following validate_connection() hook method to insert application-defined behavior into ACE_Asynch_Connector's connection completion handling. This method learns the disposition of each asynchronous connection request and schedules a timer to retry the connect if it failed.
If the connect succeeded, this method executes the SSL authentication with the server logging daemon.

 1 int AIO_CLD_Connector::validate_connection
 2   (const ACE_Asynch_Connect::Result &result,
 3    const ACE_INET_Addr &remote, const ACE_INET_Addr &) {
 4   remote_addr_ = remote;
 5   if (!result.success ()) {
 6     ACE_Time_Value delay (retry_delay_);
 7     retry_delay_ *= 2;
 8     if (retry_delay_ > MAX_RETRY_DELAY)
 9       retry_delay_ = MAX_RETRY_DELAY;
10     proactor ()->schedule_timer (*this, 0, delay);
11     return -1;
12   }
13   retry_delay_ = INITIAL_RETRY_DELAY;
14
15   if (ssl_ctx_ == 0) {
16     OpenSSL_add_ssl_algorithms ();
17     ssl_ctx_ = SSL_CTX_new (SSLv3_client_method ());
18     if (ssl_ctx_ == 0) return -1;
19
20     if (SSL_CTX_use_certificate_file (ssl_ctx_,
21                                       CLD_CERTIFICATE_FILENAME,
22                                       SSL_FILETYPE_PEM) <= 0 ||
23         SSL_CTX_use_PrivateKey_file (ssl_ctx_,
24                                      CLD_KEY_FILENAME,
25                                      SSL_FILETYPE_PEM) <= 0 ||
26         !SSL_CTX_check_private_key (ssl_ctx_)) {
27       SSL_CTX_free (ssl_ctx_);
28       ssl_ctx_ = 0;
29       return -1;
30     }
31     ssl_ = SSL_new (ssl_ctx_);
32     if (ssl_ == 0) {
33       SSL_CTX_free (ssl_ctx_); ssl_ctx_ = 0;
34       return -1;
35     }
36   }
37
38   SSL_clear (ssl_);
39   SSL_set_fd
40     (ssl_, ACE_reinterpret_cast (int, result.connect_handle ()));
41
42   SSL_set_verify (ssl_, SSL_VERIFY_PEER, 0);
43
44   if (SSL_connect (ssl_) == -1 ||
45       SSL_shutdown (ssl_) == -1) return -1;
46   return 0;
47 }

Line 4 Save the peer address we tried to connect to so it can be reused for future connection attempts, if needed.

Lines 5-12 If the connect operation failed, set a timer to retry the connection later, then double the retry delay, up to the number of seconds specified by MAX_RETRY_DELAY.

Line 13 If the connection succeeded, we reset retry_delay_ back to its initial value for the next reconnection sequence.

Lines 15-46 The rest of validate_connection() is similar to TPC_Logging_Acceptor::open() (page 224), so we don't explain it here.
If the SSL authentication fails, validate_connection() returns -1, causing ACE_Asynch_Connector to close the new connection before opening a service handler for it. Note that the SSL function calls on lines 44 and 45 are synchronous, so the proactor event loop is not processing completion events while these calls are being made. As with the Reactor framework, developers must be aware of this type of delay in a callback method.

When a timer set by the validate_connection() method expires, the following handle_time_out() hook method is called by the ACE Proactor framework:

void AIO_CLD_Connector::handle_time_out
    (const ACE_Time_Value &, const void *) {
  connect (remote_addr_);
}

This method simply initiates another asynchronous connect() attempt, which will trigger another call to validate_connection(), regardless of whether the connect succeeds or fails. The AIO client logging daemon service is represented by the following AIO_Client_Logging_Daemon class:

class AIO_Client_Logging_Daemon
  : public ACE_Task<ACE_NULL_SYNCH> {
protected:
  ACE_INET_Addr cld_addr_; // Our listener address.
  ACE_INET_Addr sld_addr_; // The logging server's address.

  // Factory that passively connects the <AIO_Input_Handler>.
  AIO_CLD_Acceptor acceptor_;

public:
  // Service Configurator hook methods.
  virtual int init (int argc, ACE_TCHAR *argv[]);
  virtual int fini ();

  virtual int svc (void);
};

This class is similar to the AC_Client_Logging_Daemon class (page 251). The primary difference is that AIO_Client_Logging_Daemon spawns a new thread to run the proactor event loop that the service depends on, whereas AC_Client_Logging_Daemon relies on the main program thread to run the singleton reactor's event loop. To activate this thread easily, AIO_Client_Logging_Daemon derives from ACE_Task rather than ACE_Service_Object. We start a new thread for the proactor event loop because we still rely on the reactor event loop for service reconfiguration activity, as described in Chapter 5.
If our service was designed solely for Windows, we could integrate the proactor and reactor event loops, as described in Sidebar 58 (page 290). However, this client logging daemon implementation is portable to all AIO-enabled ACE platforms. After our AIO_Client_Logging_Daemon::init() method processes the argument list and forms addresses, it calls ACE_Task::activate() to spawn a thread running the following svc() method:

1 int AIO_Client_Logging_Daemon::svc (void) {
2   if (acceptor_.open (cld_addr_) == -1) return -1;
3   if (CLD_CONNECTOR::instance ()->connect (sld_addr_) == 0)
4     ACE_Proactor::instance ()->proactor_run_event_loop ();
5   acceptor_.close ();
6   CLD_CONNECTOR::close ();
7   OUTPUT_HANDLER::close ();
8   return 0;
9 }

Lines 2-3 Initialize the acceptor_ object to begin listening for logging client connections and initiate the first connection attempt to the server logging daemon.

Line 4 Call ACE_Proactor::proactor_run_event_loop() to handle asynchronous operation completions. The proactor event loop is terminated when the service is shut down via the following fini() method:

int AIO_Client_Logging_Daemon::fini () {
  ACE_Proactor::instance ()->proactor_end_event_loop ();
  wait ();
  return 0;
}

This method calls ACE_Proactor::proactor_end_event_loop(), which ends the event loop in the svc() method. It then calls ACE_Task::wait() to wait for the svc() method to complete and exit its thread.

Lines 5-7 Close all open connections and singleton objects and exit the svc() thread.
Lastly, we add the necessary ACE_FACTORY_DEFINE macro to generate the service's factory function:

ACE_FACTORY_DEFINE (AIO_CLD, AIO_Client_Logging_Daemon)

Our new proactive client logging daemon service uses the ACE Service Configurator framework to configure itself into any main program, such as the Configurable_Logging_Server (page 147), by including the following entry in a svc.conf file:

dynamic AIO_Client_Logging_Daemon Service_Object *
  AIO_CLD:_make_AIO_Client_Logging_Daemon()
  "-p $CLIENT_LOGGING_DAEMON_PORT"