Motivation

TCP/IP connection establishment is a two-step process: a passive-mode endpoint is first initialized to listen for connections at a designated address, after which an active endpoint can initiate a connection to it.
This two-step process is often performed using either a reactive or synchronous I/O model, as shown in Chapter 3 of C++NPv1 and in Chapter 7 of this book. However, the initiate/complete protocol of TCP connection establishment lends itself well to the proactive model. Networked applications that benefit from asynchronous I/O can therefore also benefit from asynchronous connection establishment capabilities.

OS support for asynchronous connection establishment varies. For example, Windows supports asynchronous connection establishment, whereas POSIX.4 AIO does not. It's possible, however, to emulate asynchronous connection establishment where it doesn't exist by using other OS mechanisms, such as multithreading (Sidebar 57 on page 283 discusses the ACE Proactor framework's emulation for POSIX). Since redesigning and rewriting code to encapsulate or emulate asynchronous connection establishment for each project or platform is tedious and error prone, the ACE Proactor framework provides the ACE_Asynch_Acceptor, ACE_Asynch_Connector, and ACE_Service_Handler classes.

Class Capabilities

ACE_Asynch_Acceptor is another implementation of the acceptor role in the Acceptor-Connector pattern [POSA2]. This class provides the following capabilities:
ACE_Asynch_Connector plays the connector role in the ACE Proactor framework's implementation of the Acceptor-Connector pattern. This class provides the following capabilities:
Unlike the ACE Acceptor-Connector framework described in Chapter 7, these two classes only establish TCP/IP connections. As discussed in Section 8.2, the ACE Proactor framework focuses on encapsulating operations, not I/O handles, and these classes encapsulate operations to establish TCP/IP connections. Connectionless IPC mechanisms (for example, UDP and file I/O) don't require a connection setup, so they can be used directly with the ACE Proactor framework's I/O factory classes. Similar to ACE_Acceptor and ACE_Connector, ACE_Asynch_Acceptor and ACE_Asynch_Connector are template factory classes that can create a service handler to execute a service on the new connection. The template parameter for both ACE_Asynch_Acceptor and ACE_Asynch_Connector is the type of service handler the factory creates, which must derive from ACE_Service_Handler. This class acts as the target of connection completions from ACE_Asynch_Acceptor and ACE_Asynch_Connector. ACE_Service_Handler provides the following capabilities:
Sidebar 56 (page 280) discusses the rationale behind the decision not to reuse ACE_Svc_Handler in the ACE Proactor framework. The interfaces of all three classes in the Proactive Acceptor-Connector mechanism are shown in Figure 8.7 (page 281).

Figure 8.7. The Proactive Acceptor, Connector, and Service Handler Classes

The following table outlines the key methods in the ACE_Asynch_Acceptor class:
The open() method initializes the listening TCP socket and initiates one or more asynchronous accept() operations. If the reissue_accept argument is 1 (the default), a new accept() operation is started automatically as needed. ACE_Asynch_Acceptor implements the ACE_Handler::handle_accept() method (Figure 8.5 on page 272) to process each accept() completion as follows:
The ACE_Asynch_Connector class provides methods that are similar to those in ACE_Asynch_Acceptor and are outlined in the following table:
The open() method accepts fewer arguments than ACE_Asynch_Acceptor::open(). In particular, since addressing information can differ on each connect() operation, it's specified in parameters to the connect() method instead. ACE_Asynch_Connector implements ACE_Handler::handle_connect() (Figure 8.5 on page 272) to process each connection completion. The processing steps are the same as those for ACE_Asynch_Acceptor, described above. Each networked application service class in the ACE Proactor framework derives from ACE_Service_Handler. Its key methods are shown in the following table:
As mentioned above, ACE_Asynch_Acceptor and ACE_Asynch_Connector both call the ACE_Service_Handler::open() hook method for each new connection established. The handle argument is the handle for the connected socket. The ACE_Message_Block argument may contain data from the peer if the bytes_to_read parameter to ACE_Asynch_Acceptor::open() was greater than 0. Since this Windows-specific facility is often used with non-IP protocols (e.g., X.25), we don't discuss its use here. The ACE Proactor framework manages the ACE_Message_Block , so the service need not be concerned with it. If the service handler requires either the local or peer addresses on the new connection, it must implement the addresses() hook method to capture them when the connection is established. The ACE Proactor framework calls this method if the pass_address argument to the asynchronous connection factory was 1. This method is more significant on Windows because the connection addresses cannot be obtained any other way when asynchronous connection establishment is used.
Example

As with the client logging daemons in Chapters 6 and 7, the classes in the proactive implementation are separated into input and output roles, which are explained below.

Input role. The input role of the proactive client logging daemon is performed by the AIO_CLD_Acceptor and AIO_Input_Handler classes. AIO_Input_Handler was described on page 273, so here we focus on AIO_CLD_Acceptor, which derives from ACE_Asynch_Acceptor as shown in Figure 8.3 (page 267). The class definition for AIO_CLD_Acceptor is shown below:

```cpp
class AIO_CLD_Acceptor
  : public ACE_Asynch_Acceptor<AIO_Input_Handler> {
public:
  // Cancel accept and close all clients.
  void close (void);

  // Remove handler from client set.
  void remove (AIO_Input_Handler *ih) { clients_.remove (ih); }

protected:
  // Service handler factory method.
  virtual AIO_Input_Handler *make_handler (void);

  // Set of all connected clients.
  ACE_Unbounded_Set<AIO_Input_Handler *> clients_;
};
```

Since the ACE Proactor framework only keeps track of active I/O operations, it doesn't maintain a set of registered handlers like the ACE Reactor framework does. Applications must therefore locate and clean up handlers when necessary. In this chapter's example, the AIO_Input_Handler objects are allocated dynamically, and they must be readily accessible when the service shuts down. To satisfy this requirement, the AIO_CLD_Acceptor::clients_ member is an ACE_Unbounded_Set that holds pointers to all active AIO_Input_Handler objects.

When a logging client connects to this server, ACE_Asynch_Acceptor::handle_accept() calls the following factory method:

```cpp
AIO_Input_Handler *
AIO_CLD_Acceptor::make_handler (void) {
  AIO_Input_Handler *ih;
  ACE_NEW_RETURN (ih, AIO_Input_Handler (this), 0);
  if (clients_.insert (ih) == -1) {
    delete ih;
    return 0;
  }
  return ih;
}
```

AIO_CLD_Acceptor reimplements the make_handler() factory method to keep track of each allocated service handler's pointer in clients_.
If the new handler's pointer can't be inserted for some reason, it's deleted; returning 0 forces the ACE Proactor framework to close the newly accepted connection. The make_handler() hook method passes its object pointer to each dynamically allocated AIO_Input_Handler (page 273). When AIO_Input_Handler detects a failed read() (most likely because the logging client closed the connection), its handle_read_stream() method (page 274) simply deletes itself. The AIO_Input_Handler destructor cleans up all held resources and calls the AIO_CLD_Acceptor::remove() method (page 283) to remove itself from the clients_ set, as shown below:

```cpp
AIO_Input_Handler::~AIO_Input_Handler () {
  reader_.cancel ();
  ACE_OS::closesocket (handle ());
  if (mblk_ != 0) mblk_->release ();
  mblk_ = 0;
  acceptor_->remove (this);
}
```

When this service shuts down in the AIO_Client_Logging_Daemon::svc() method (page 295), all the remaining AIO_Input_Handler connections and objects are cleaned up by calling the close() method below:

```cpp
void
AIO_CLD_Acceptor::close (void) {
  ACE_Unbounded_Set_Iterator<AIO_Input_Handler *>
    iter (clients_.begin ());
  AIO_Input_Handler **ih;
  while (iter.next (ih))
    delete *ih;
}
```

This method simply iterates through all of the active AIO_Input_Handler objects, deleting each one.

Output role. The output role of the proactive client logging daemon is performed by the AIO_CLD_Connector and AIO_Output_Handler classes. The client logging daemon uses the AIO_CLD_Connector to connect (and, when necessary, reconnect) to the server logging daemon.
It then uses the AIO_Output_Handler to asynchronously forward log records from connected logging clients to the server logging daemon. Part of the AIO_CLD_Connector class definition is shown below:

```cpp
class AIO_CLD_Connector
  : public ACE_Asynch_Connector<AIO_Output_Handler> {
public:
  enum { INITIAL_RETRY_DELAY = 3, MAX_RETRY_DELAY = 60 };

  // Constructor.
  AIO_CLD_Connector ()
    : retry_delay_ (INITIAL_RETRY_DELAY), ssl_ctx_ (0), ssl_ (0)
  { open (); }

  // Hook method to detect failure and validate peer before
  // opening handler.
  virtual int validate_connection
    (const ACE_Asynch_Connect::Result &result,
     const ACE_INET_Addr &remote,
     const ACE_INET_Addr &local);

protected:
  // Template method to create a new handler.
  virtual AIO_Output_Handler *make_handler (void)
  { return OUTPUT_HANDLER::instance (); }

  // Address at which logging server listens for connections.
  ACE_INET_Addr remote_addr_;

  // Seconds to wait before trying the next connect.
  int retry_delay_;

  // The SSL "context" data structure.
  SSL_CTX *ssl_ctx_;

  // The SSL data structure corresponding to authenticated
  // SSL connections.
  SSL *ssl_;
};

typedef ACE_Unmanaged_Singleton<AIO_CLD_Connector, ACE_Null_Mutex>
        CLD_CONNECTOR;
```

The AIO_CLD_Connector class is accessed as an unmanaged singleton (see Sidebar 45 on page 194) via the CLD_CONNECTOR typedef. When AIO_CLD_Connector is instantiated, its constructor calls the ACE_Asynch_Connector::open() method. By default, the validate_connection() method (page 293) will be called on completion of each connect() attempt.