3.5 The ACE_Reactor Class


Motivation

Event-driven networked applications have historically been programmed using native OS mechanisms, such as the Socket API and the select() synchronous event demultiplexer. Applications developed this way, however, are not only nonportable but also inflexible because they tightly couple low-level event detection, demultiplexing, and dispatching code with application event processing code. Developers must therefore rewrite all this code for each new networked application, which is tedious, expensive, and error prone. It's also unnecessary, because much of the event detection, demultiplexing, and dispatching code can be generalized and reused across many networked applications.

One way to address these problems is to combine skilled object-oriented design with networked application domain experience to produce a set of framework classes that separates application event handling code from the reusable event detection, demultiplexing, and dispatching code in the framework. Sections 3.2 through 3.4 laid the groundwork for this framework by describing reusable time value and timer queue classes and by defining the interface between framework and application event processing code via the ACE_Event_Handler class. This section describes the ACE_Reactor class, which lies at the heart of the ACE Reactor framework and defines how applications can register for, and be notified about, events from multiple sources.

Class Capabilities

ACE_Reactor implements the Facade pattern [GoF] to define an interface that applications can use to access the various ACE Reactor framework features. This class provides the following capabilities:

  • It centralizes event loop processing in a reactive application.

  • It detects events via an event demultiplexer, such as select() or WaitForMultipleObjects(), that is provided by the OS and used by the reactor implementation.

  • It demultiplexes events to event handlers when the event demultiplexer indicates the occurrence of the designated events.

  • It dispatches the appropriate hook methods on registered event handlers to perform application-defined processing in response to the events.

  • It ensures that any thread can change a Reactor's event set or queue a callback to an event handler and expect the Reactor to act on the request promptly.

The interface for ACE_Reactor is shown in Figure 3.6 (page 72). This class has a rich interface that exports all the features in the ACE Reactor framework. We therefore group its method descriptions into the six categories described below.

1. Reactor initialization and destruction methods. The following methods initialize and destroy an ACE_Reactor :

Method                     Description
ACE_Reactor(), open()      These methods create and initialize instances of a reactor.
~ACE_Reactor(), close()    These methods clean up the resources allocated when a reactor was initialized.

The ACE_Reactor class isolates a variety of demultiplexing mechanisms behind the stable interface discussed in this chapter. To partition the different mechanisms in an easy-to-use and easy-to-maintain way, the ACE_Reactor class uses the Bridge pattern [GoF] to separate its implementations from its class interface. This design allows users to substitute a specific reactor implementation when the default isn't appropriate.

The ACE_Reactor constructor can optionally be passed a pointer to the implementation used to detect and demultiplex events and to dispatch the methods on the appropriate event handlers. The ACE_Select_Reactor described in Section 4.2 is the default implementation of the ACE_Reactor on most platforms. The exception is Windows, which defaults to ACE_WFMO_Reactor for the reasons described in Sidebar 25 (page 105).

The ACE_Reactor::open() method can be passed:

  • The number of I/O handles and event handlers managed by the reactor. The default varies according to the reactor implementation, as described in Sidebar 20 (page 92).

  • The type of timer queue implementation the reactor will use. The default is the ACE_Timer_Heap described in Section 3.4 (page 64).

Although ACE offers "full-featured" class constructors, they're best used in error-free or prototype situations where error checking is not important (see Item 10 in [Mey96]). The preferred usage in ACE is a separate call to the open() method (and to close() in the case of object cleanup). This preference stems from the ability of open() and close()

Figure 3.6 The ACE_Reactor Class
 ACE_Reactor
   # reactor_ : ACE_Reactor *
   # implementation_ : ACE_Reactor_Impl *
   + ACE_Reactor (implementation : ACE_Reactor_Impl * = 0, delete_implementation : int = 0)
   + open (max_handles : int, restart : int = 0, sig_handler : ACE_Sig_Handler * = 0, timer_queue : ACE_Timer_Queue * = 0) : int
   + close () : int
   + register_handler (handler : ACE_Event_Handler *, mask : ACE_Reactor_Mask) : int
   + register_handler (io : ACE_HANDLE, handler : ACE_Event_Handler *, mask : ACE_Reactor_Mask) : int
   + remove_handler (handler : ACE_Event_Handler *, mask : ACE_Reactor_Mask) : int
   + remove_handler (io : ACE_HANDLE, mask : ACE_Reactor_Mask) : int
   + remove_handler (hs : const ACE_Handle_Set &, m : ACE_Reactor_Mask) : int
   + suspend_handler (handler : ACE_Event_Handler *) : int
   + resume_handler (handler : ACE_Event_Handler *) : int
   + mask_ops (handler : ACE_Event_Handler *, mask : ACE_Reactor_Mask, ops : int) : int
   + schedule_wakeup (handler : ACE_Event_Handler *, masks_to_be_added : ACE_Reactor_Mask) : int
   + cancel_wakeup (handler : ACE_Event_Handler *, masks_to_be_cleared : ACE_Reactor_Mask) : int
   + handle_events (max_wait_time : ACE_Time_Value * = 0) : int
   + run_reactor_event_loop (event_hook : int (*)(void *) = 0) : int
   + end_reactor_event_loop () : int
   + reactor_event_loop_done () : int
   + schedule_timer (handler : ACE_Event_Handler *, arg : void *, delay : ACE_Time_Value &, repeat : ACE_Time_Value & = ACE_Time_Value::zero) : int
   + cancel_timer (handler : ACE_Event_Handler *, dont_call_handle_close : int = 1) : int
   + cancel_timer (timer_id : long, arg : void ** = 0, dont_call_handle_close : int = 1) : int
   + notify (handler : ACE_Event_Handler * = 0, mask : ACE_Reactor_Mask = ACE_Event_Handler::EXCEPT_MASK, timeout : ACE_Time_Value * = 0) : int
   + max_notify_iterations (iterations : int) : int
   + purge_pending_notifications (handler : ACE_Event_Handler *, mask : ACE_Reactor_Mask = ALL_EVENTS_MASK) : int
   + instance () : ACE_Reactor *
   + owner (new_owner : ACE_thread_t, old_owner : ACE_thread_t * = 0) : int
to return error indications, whereas constructors and destructors cannot return values and ACE's design avoids native C++ exceptions. The motivation for avoiding native C++ exceptions in ACE's design is discussed in Section A.6 of C++NPv1. The ACE_Svc_Handler class discussed in Section 7.2 of this book closes the underlying socket handle automatically.

The ACE_Reactor destructor and close() methods release all the resources used by a reactor. This shutdown process involves calling the handle_close() hook method on all event handlers associated with handles that remain registered with a reactor. Any scheduled timers are deleted without notice and any notifications that are buffered in the reactor's notification mechanism (page 77) are lost when a reactor is closed.
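The following sketch (not taken from the book's examples) illustrates these initialization and shutdown options using the signatures shown in Figure 3.6. The function name, the choice of ACE_Select_Reactor, and the header name are illustrative assumptions; the explicit constructor shown here already initializes the implementation, so no separate open() call is made.

 #include "ace/Reactor.h"
 #include "ace/Select_Reactor.h"   // assumed header name for ACE_Select_Reactor

 // Create an ACE_Reactor that delegates to an explicitly chosen
 // ACE_Select_Reactor implementation, then shut it down explicitly so the
 // error indication from close() can be checked.
 int run_with_select_reactor ()
 {
   // The second constructor argument (delete_implementation = 1) asks the
   // ACE_Reactor to delete the implementation object when it's destroyed.
   ACE_Reactor reactor (new ACE_Select_Reactor, 1);

   // ... register event handlers and run the event loop here ...

   // close() releases the reactor's resources; handlers that remain
   // registered receive handle_close() callbacks, as described above.
   // The destructor would otherwise perform the same cleanup.
   return reactor.close ();
 }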

2. Event handler management methods. The following methods register and remove event handlers from an ACE_Reactor :

Method               Description
register_handler()   Register an event handler for I/O- and signal-based events.
remove_handler()     Remove an event handler from I/O-based and signal-based event dispatching.
suspend_handler()    Temporarily prevent dispatching events to an event handler.
resume_handler()     Resume event dispatching for a previously suspended handler.
mask_ops()           Get, set, add, or clear the event type(s) associated with an event handler and its dispatch mask.
schedule_wakeup()    Add the designated masks to an event handler's entry, which must have been registered previously via register_handler().
cancel_wakeup()      Clear the designated masks from an event handler's entry, but don't remove the handler from the reactor.

The ACE_Reactor 's registration and removal methods offer multiple overloaded signatures to facilitate their use in many different situations. For example, the register_handler() methods can be used with any of the following signatures:

  • (ACE_Event_Handler *, ACE_Reactor_Mask): In this version, the first parameter identifies the application's event handler and the second indicates the type of event(s) the handler is prepared to process. The method's implementation uses double-dispatching [GoF] to obtain a handle via the handler's get_handle() method. The advantage of this design is that application code need not obtain or expose an I/O handle explicitly, which prevents accidental association of the wrong handle with an event handler. Most examples in this book therefore use this variant of register_handler(), as does the sketch following this list.

  • (ACE_HANDLE, ACE_Event_Handler *, ACE_Reactor_Mask): In this version, a new first parameter explicitly specifies the I/O handle associated with the application's event handler. This design is potentially more error prone than the two-parameter version above, since callers can accidentally pass an I/O handle that doesn't match the event handler. However, it allows an application to register multiple I/O handles for the same event handler, which is necessary for handlers that must be associated with multiple IPC objects. This method can also be used to conserve memory if a single event handler can process events from many unrelated I/O streams that don't require per-handle state. The client logging daemon example in the Example portion of Section 6.2 illustrates the three-parameter variant of register_handler().

  • (const ACE_Sig_Set &sigset, ACE_Event_Handler *new_sh, ACE_Sig_Action *new_disp): In this version, a new event handler (new_sh) is specified to handle a set of POSIX signals. When any signal in sigset is raised, the reactor will call the handle_signal() hook method on the associated event handler. Unlike other callbacks from the reactor, handle_signal() is called in signal context, so its actions are restricted to a subset of the available system calls. Developers are advised to check their OS platform documentation for details.
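The sketch below (not one of the book's classes; Echo_Handler and its open() method are hypothetical) illustrates the two-parameter variant: the handler registers itself for READ events and the reactor retrieves its I/O handle by double-dispatching to get_handle().

 #include "ace/Event_Handler.h"
 #include "ace/Reactor.h"
 #include "ace/SOCK_Stream.h"

 class Echo_Handler : public ACE_Event_Handler
 {
 public:
   // Register this handler for READ events; no handle is passed since the
   // reactor obtains it via get_handle().
   int open (ACE_Reactor *reactor)
   {
     return reactor->register_handler (this,
                                       ACE_Event_Handler::READ_MASK);
   }

   // Hook method dispatched by the reactor when input arrives.
   virtual int handle_input (ACE_HANDLE)
   {
     char buf[4096];
     ssize_t n = peer_.recv (buf, sizeof buf);
     return n <= 0 ? -1 : 0;   // returning -1 triggers handle_close()
   }

   // The reactor calls this to obtain the underlying I/O handle.
   virtual ACE_HANDLE get_handle () const
   { return peer_.get_handle (); }

 private:
   ACE_SOCK_Stream peer_;   // assumed to be connected to a client elsewhere
 };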

The ACE_Reactor::remove_handler() methods can be used to remove event handlers from a reactor so that they are no longer registered for one or more types of I/O events or signals. There are variants with and without an explicit handle specification (just like the first two register_handler() method variants described above). One variant accepts an ACE_Handle_Set to remove a number of handles at once; the other accepts an ACE_Sig_Set to remove signals from reactor handling. The ACE_Reactor::cancel_timer() method (page 76) must be used to remove event handlers that are scheduled for timer events.

When an application calls one of the ACE_Reactor::remove_handler() methods for I/O event removal, it can pass a bit mask consisting of the enumeration literals defined in the table on page 50. This bit mask indicates which I/O event types are no longer of interest. The event handler's handle_close() method is subsequently called to notify it of the removal. After handle_close() returns and the event handler is no longer registered to handle any I/O events, the ACE_Reactor removes the event handler from its internal I/O event demultiplexing data structures.

An application can prevent handle_close() from being called by adding the ACE_Event_Handler::DONT_CALL flag to remove_handler()'s mask parameter. This flag instructs a reactor not to dispatch the handle_close() method when removing an event handler, as shown in the Service_Reporter::fini() method (page 135). To ensure a reactor won't invoke handle_close() in an infinite recursion, the DONT_CALL flag should always be passed to remove_handler() when it's called from within the handle_close() hook method itself.
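A hedged sketch of that idiom follows; Closing_Handler is a hypothetical class, and the use of ALL_EVENTS_MASK to drop any remaining registrations is an illustrative choice.

 #include "ace/Event_Handler.h"
 #include "ace/Reactor.h"

 class Closing_Handler : public ACE_Event_Handler
 {
 public:
   virtual int handle_close (ACE_HANDLE, ACE_Reactor_Mask)
   {
     // Remove any remaining registrations for this handler, OR-ing in
     // DONT_CALL so the reactor won't dispatch handle_close() again and
     // recurse back into this method.
     reactor ()->remove_handler
       (this,
        ACE_Event_Handler::ALL_EVENTS_MASK | ACE_Event_Handler::DONT_CALL);
     return 0;
   }
 };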

By default, the handle_close() hook method is not called when canceling timers via the cancel_timer() method. However, an optional final argument can be supplied to request that handle_close() be called. The handle_close() method is not called when removing an event handler from signal handling.

The suspend_handler() method can be used to remove a handler or set of handlers temporarily from the reactor's handle-based event demultiplexing activity. The resume_handler() method reverts the actions of suspend_handler() so that the handle(s) are included in the set of handles waited upon by the reactor's event demultiplexer. Since suspend_handler() and resume_handler() affect only I/O handle-based dispatching, they have no effect on timers, signal handling, or notifications.
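The following sketch (the function name and the "backlog" scenario are illustrative assumptions) shows the typical pairing of these two methods.

 #include "ace/Reactor.h"
 #include "ace/Event_Handler.h"

 // Temporarily exclude <handler>'s I/O handle from the reactor's wait set
 // while the application drains a backlog, then put it back so I/O events
 // are dispatched to the handler again.
 void pause_io_dispatching (ACE_Reactor *reactor, ACE_Event_Handler *handler)
 {
   reactor->suspend_handler (handler);

   // ... process backlogged work; timers, signals, and notifications for
   // this handler are unaffected, as noted above ...

   reactor->resume_handler (handler);
 }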

The mask_ops() method performs operations that get, set, add, or clear the event type(s) associated with an event handler's dispatch mask. The mask_ops() method assumes that an event handler is already present and doesn't try to register or remove it. It's therefore more efficient than using register_handler() and remove_handler() . The schedule_wakeup() and cancel_wakeup() methods are simply "syntactic sugar" for common operations involving mask_ops() . They help prevent subtle errors, however, such as replacing a mask when adding bits was intended. For example, the following mask_ops() calls enable and disable the ACE_Event_Handler::WRITE_MASK :

 ACE_Reactor::instance ()->mask_ops
   (handler, ACE_Event_Handler::WRITE_MASK, ACE_Reactor::ADD_MASK);
 // ...
 ACE_Reactor::instance ()->mask_ops
   (handler, ACE_Event_Handler::WRITE_MASK, ACE_Reactor::CLR_MASK);

These calls can be replaced by the following more concise and informative method calls:

 ACE_Reactor::instance ()->schedule_wakeup
   (handler, ACE_Event_Handler::WRITE_MASK);
 // ...
 ACE_Reactor::instance ()->cancel_wakeup
   (handler, ACE_Event_Handler::WRITE_MASK);

3. Event-loop management methods. Inversion of control is a key capability offered by the ACE Reactor framework. Similar to other frameworks, such as the X Windows Toolkit or Microsoft Foundation Classes (MFC), ACE_Reactor implements the event loop that controls when application event handlers are dispatched. After registering its initial event handlers, an application can manage its event loop via methods in the following table:

Method                       Description
handle_events()              Waits for an event to occur and then dispatches the associated event handler(s). A timeout parameter can limit the time spent waiting for an event.
run_reactor_event_loop()     Calls the handle_events() method repeatedly until it fails, reactor_event_loop_done() returns 1, or an optional timeout occurs.
end_reactor_event_loop()     Instructs a reactor to shut down its event loop.
reactor_event_loop_done()    Returns 1 when the reactor's event loop has been ended via a call to end_reactor_event_loop().

The handle_events() method gathers the handles of all registered event handlers, passes them to the reactor's event demultiplexer, and blocks for up to an application-specified time interval awaiting the occurrence of events, such as I/O activity or timer expiration. When an event occurs, this method dispatches the appropriate preregistered event handlers by invoking the handle_*() hook method(s) defined by the application to process the event(s). If more than one event occurs, they are all dispatched before the method returns. The return value indicates the number of events processed, 0 if no events occurred before the caller-specified timeout, or -1 if an error occurred.
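For applications that manage the loop themselves, a minimal hand-rolled loop around handle_events() might look like the following sketch; the one-second timeout, the function name, and the <done> flag are illustrative assumptions.

 #include "ace/Reactor.h"
 #include "ace/Time_Value.h"

 // Assumes <reactor> already has handlers registered and that <done> is
 // set elsewhere (e.g., by a handler) when the application should stop.
 void run_event_loop (ACE_Reactor *reactor, const bool &done)
 {
   while (!done)
     {
       ACE_Time_Value max_wait (1);               // wait at most 1 second
       int result = reactor->handle_events (&max_wait);
       if (result == -1)
         break;            // error in the demultiplexer or a callback
       else if (result == 0)
         continue;         // timeout elapsed with no events; recheck <done>
       // result > 0: that many events were dispatched before returning.
     }
 }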

The run_reactor_event_loop() method is a simple wrapper around handle_events() . It runs the event loop continually, calling handle_events() until either

  • An error occurs

  • The time designated in the optional ACE_Time_Value elapses

  • The end_reactor_event_loop() method is called, possibly from within one of the event handling callbacks, to end the event loop

Applications without specialized event handling needs often use run_reactor_event_loop() and end_reactor_event_loop() to handle their event loops because these methods detect and handle errors automatically.

Many networked applications run a reactor's event loop in a single thread of control. Sections 4.3 and 4.4 describe how the ACE_TP_Reactor and ACE_WFMO_Reactor classes allow multiple threads to call their event loop methods concurrently.

4. Timer management methods. By default, ACE_Reactor uses the ACE_Timer_Heap timer queue mechanism described in Section 3.4 to schedule and dispatch event handlers in accordance with their timeout deadlines. The timer management methods exposed by the ACE_Reactor include:

Method             Description
schedule_timer()   Registers an event handler that will be executed after a user-specified amount of time.
cancel_timer()     Cancels one or more timers that were previously registered.

The ACE_Reactor codifies the proper use of the ACE timer queue functionality in the context of handling a range of event types, including I/O and timer events. In fact, most users interact with the ACE timer queues only via the ACE_Reactor, which integrates the timer queue functionality into the Reactor framework as follows:

  • The schedule_timer() method allows users to specify timers using relative time, which is generally easier to work with than the absolute times the ACE timer queues use. This method uses the timer queue's gettimeofday() mechanism to adjust user-specified times automatically to the time method used by the timer queue.

  • The handle_events() method queries the timer queue to find the expiration time of the earliest timer. It then uses this value to limit the amount of time the event demultiplexer waits for I/O events.

  • The handle_events() method calls the timer queue methods to expire timers when the event demultiplexer times out, that is, once the expiration time of the earliest timer arrives.

Together, these actions effectively integrate timers into the ACE Reactor framework in an easy-to-use way that allows applications to reuse the ACE timer queue capabilities without interacting with timer queue methods directly. Sidebar 16 describes how to minimize dynamic memory allocations in ACE timer queues.

Sidebar 16: Minimizing Memory Allocations in ACE Timer Queues

The ACE_Timer_Queue base class depicted in Figure 3.5 (page 64) offers no method to set the size of a timer queue. This omission is deliberate because there's no uniform meaning for "size" at that level of the class hierarchy. Each timer queue subclass has a different meaning for "size" that's related to its underlying data structures. The timer queue subclasses therefore offer size-related parameters in their constructors. These parameters are hints that instruct the timer queue implementation how large to make its initial internal data structures. Although the timer queues resize automatically to accommodate arbitrarily large numbers of timers, resizing involves dynamic memory allocation, which can introduce overhead that's prohibitive for some applications.

In addition to sizing the queue, the ACE_Timer_Heap and ACE_Timer_Wheel classes offer the ability to preallocate timer queue entries so the queue can avoid any subsequent dynamic memory allocation. To make the ACE_Reactor use a custom-tuned queue for its timer operations, you simply need to do the following (a sketch appears after these steps):

  1. Instantiate the desired ACE timer queue class, specifying the desired size and preallocation argument, if applicable.

  2. Instantiate an ACE reactor implementation object, specifying the timer queue from step 1.

  3. Instantiate a new ACE_Reactor object, supplying the implementation object from step 2.
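A hedged sketch of these three steps follows. The ACE_Timer_Heap and ACE_Select_Reactor constructor arguments, the header names, and the ownership note are assumptions based on common ACE usage; consult the headers of your ACE version before relying on them.

 #include "ace/Reactor.h"
 #include "ace/Select_Reactor.h"
 #include "ace/Timer_Heap.h"

 ACE_Reactor *make_tuned_reactor ()
 {
   // 1. Timer queue sized for ~1,000 timers, with entries preallocated so
   //    scheduling timers causes no further dynamic allocation.
   ACE_Timer_Heap *timer_queue = new ACE_Timer_Heap (1000, 1);

   // 2. Reactor implementation that uses the custom-tuned timer queue.
   //    (First argument is the optional ACE_Sig_Handler, left at 0 here.)
   ACE_Select_Reactor *impl = new ACE_Select_Reactor (0, timer_queue);

   // 3. ACE_Reactor facade that delegates to <impl>; the second argument
   //    asks it to delete <impl> when the reactor is destroyed. The
   //    application is assumed to remain responsible for deleting
   //    <timer_queue> afterwards.
   return new ACE_Reactor (impl, 1);
 }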

5. Notification methods. A reactor has a notification mechanism that applications can use to insert events and event handlers into a reactor's dispatching engine. The following methods manage various aspects of a reactor's notification mechanism:

Method                          Description
notify()                        Inserts an event (and an optional event handler) into the reactor's event detector, which causes it to be processed when the reactor next waits for events.
max_notify_iterations()         Sets the maximum number of handlers a reactor will dispatch from its notification mechanism.
purge_pending_notifications()   Purges a specified event handler, or all event handlers, from the reactor's notification mechanism.

The ACE_Reactor::notify() method can be used for several purposes:

  • The reactor notification mechanism enables other threads to wake up a reactor's owner thread whose event demultiplexer function is blocked waiting for I/O events to occur. For example, since mask_ops(), schedule_wakeup(), and cancel_wakeup() don't cause the reactor to reexamine its set of handles and handlers, any new masks will only be noticed the next time a reactor's handle_events() method is called. If no other activity is expected shortly, or if the wait masks should be reexamined immediately, ACE_Reactor::notify() can be called to force a reactor to reexamine its set of handles and handlers.

  • The notify() method can be passed an event handler pointer and one of the ACE_Reactor_Mask values, such as READ_MASK , WRITE_MASK , or EXCEPT_MASK . These parameters trigger the reactor to dispatch the corresponding event handler hook method (outlined in the table on page 50) without needing to associate the handler with I/O handles or timer events. This feature enables the reactor to scale to an open-ended number of event handlers since there's no requirement that a handler whose pointer is passed to ACE_Reactor::notify() has ever been, or ever will be, registered with that reactor.

By default, a reactor dispatches all event handlers in its notification mechanism after detecting a notification event. The max_notify_iterations() method can change the number of event handlers dispatched. Setting a low value improves fairness and prevents starvation, though it increases dispatching overhead somewhat.
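The following sketch (not from the book) illustrates both notify() uses described in the bullets above; the function names are illustrative, and the zero timeout in the second function anticipates the deadlock issue covered in Sidebar 17.

 #include "ace/Reactor.h"
 #include "ace/Event_Handler.h"
 #include "ace/Time_Value.h"

 // Wake the reactor's owner thread so it reexamines its handles and
 // handlers, e.g., after mask_ops() or schedule_wakeup() changed a mask.
 void wake_up_reactor (ACE_Reactor *reactor)
 {
   reactor->notify ();
 }

 // Ask the reactor to dispatch handler->handle_output() the next time it
 // processes notifications; <handler> need not be registered for any I/O
 // handle or timer.
 int queue_output_callback (ACE_Reactor *reactor, ACE_Event_Handler *handler)
 {
   ACE_Time_Value no_wait;   // zero timeout: don't block if the
                             // notification buffer is full (Sidebar 17)
   return reactor->notify (handler, ACE_Event_Handler::WRITE_MASK, &no_wait);
 }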

Sidebar 17: Avoiding Reactor Notification Mechanism Deadlock

By default, the reactor notification mechanism is implemented with a bounded buffer and notify() uses a blocking send call to insert notifications into the queue. A deadlock can therefore occur if the buffer is full and notify() is called by a handle_*() method of an event handler. There are several ways to avoid such deadlocks:

  • Pass a timeout to the notify() method. This solution pushes the responsibility for handling buffer overflow to the thread that calls notify() .

  • Design the application so that it doesn't generate calls to notify() faster than a reactor can process them. This is ultimately the best solution, though it requires careful analysis of program behavior.

Sidebar 22 (page 94) describes a way to avoid ACE_Select_Reactor deadlocks.

Notifications to the reactor are queued internally while waiting for the reactor to dispatch them (Sidebar 17 discusses how to avoid deadlock on a queue). If an event handler associated with a notification is invalidated before the notification is dispatched, a catastrophic failure can occur when the reactor tries to dispatch an invalid event handler pointer. The purge_pending_notifications() method can therefore be used to remove any notifications associated with an event handler from the queue. The ACE Reactor framework assists users by calling purge_pending_notifications() from the ACE_Event_Handler destructor. This behavior is inherited by all application event handlers because the destructor is declared virtual .

Notifications remain in a queue until they are dispatched or purged by an event handler, which ensures that a notification will be processed even if the reactor is busy processing other events at the time that notify() is called. However, if the reactor ceases to detect and dispatch events (e.g., after run_reactor_event_loop() returns), any queued notifications remain and will not be dispatched unless and until the reactor is directed to detect and dispatch events again. Notifications will therefore be lost if the reactor is closed or deleted before dispatching the queued notifications. Applications are responsible for deciding when to terminate event processing, and no events from any source will be detected, demultiplexed, or dispatched after that time.

6. Utility methods. The ACE_Reactor class also defines the following utility methods:

Method       Description
instance()   A static method that returns a pointer to a singleton ACE_Reactor, which is created and managed by the Singleton pattern [GoF] combined with the Double-Checked Locking Optimization pattern [POSA2].
owner()      Assigns a thread to "own" a reactor's event loop.

The ACE_Reactor can be used in two ways:

  • As a singleton [GoF] via the instance() method shown in the table above.

  • By instantiating one or more instances. This capability can be used to support multiple reactors within a process. Each reactor is often associated with a thread running at a particular priority [Sch98].

Some reactor implementations, such as the ACE_Select_Reactor described in Section 4.2, only allow one thread to run their handle_events() method. The owner() method changes the identity of the thread that owns the reactor to allow this thread to run the reactor's event loop. Sidebar 18 (page 80) describes how to avoid deadlock when using a reactor in multithreaded applications.
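A minimal sketch of the owner() idiom for such single-owner implementations follows; the function name is illustrative, and ACE_Thread::self() is used here on the assumption that it returns the calling thread's ACE_thread_t.

 #include "ace/Reactor.h"
 #include "ace/Thread.h"   // for ACE_Thread::self()

 // Claim ownership of <reactor>'s event loop for the calling thread
 // before running the loop.
 int run_loop_in_this_thread (ACE_Reactor *reactor)
 {
   if (reactor->owner (ACE_Thread::self ()) == -1)
     return -1;
   return reactor->run_reactor_event_loop ();
 }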

Figure 3.8 (page 85) presents a sequence diagram of the interactions among classes in the ACE Reactor framework. Additional coverage of the ACE Reactor framework's design appears in the Reactor pattern's Implementation section in Chapter 3 of POSA2.

Figure 3.8. UML Sequence Diagram for the Reactive Logging Server

Example

Before we show the rest of the reactive networked logging server, let's quickly review the external behavior of the logging server and client developed in C++NPv1. The logging server listens on a TCP port number specified on the command line, defaulting to the port number specified as ace_logger in the OS network services file. For example, the following line might appear in the UNIX /etc/services file:

Sidebar 18: Avoiding Reactor Deadlock in Multithreaded Applications

Although reactors are often used in single-threaded applications, they can also be used in multithreaded applications. In this context, it's important to avoid deadlock between multiple threads that are sharing an ACE_Reactor . For example, an ACE_Reactor holds a recursive mutex when it dispatches a callback to an event handler. If the dispatched callback method directly or indirectly calls back into the reactor within the same thread of control, the recursive mutex's acquire() method detects this automatically and simply increases its count of the lock recursion nesting depth, rather than deadlocking the thread.

Even with recursive mutexes, however, it's still possible to incur deadlock under the following circumstances:

  • The original callback method calls a second method that blocks trying to acquire a mutex that's held by a second thread executing the same method.

  • The second thread directly or indirectly calls into the same reactor.

In this case, deadlock can occur since the reactor's recursive mutex doesn't realize that the second thread is calling on behalf of the first thread where the callback method was dispatched originally.

One way to avoid ACE_Reactor deadlock in a multithreaded application is to not make blocking calls to other methods from callbacks if those methods are executed concurrently by competing threads that directly or indirectly call back into the same reactor. It may be necessary to use an ACE_Message_Queue described in Section 6.2 to exchange information asynchronously if a handle_*() callback method must communicate with another thread that accesses the same reactor.

 ace_logger     9700/tcp     # Connection-oriented Logging Service 

Client applications can optionally specify the TCP port and the IP host name or address where the client application and logging server should rendezvous to exchange log records. If this information isn't specified, however, the port number is located in the services database, and the hostname is assumed to be ACE_DEFAULT_SERVER_HOST, which is defined as "localhost" on most OS platforms.

The version of the logging server shown below offers the same capabilities as the Reactive_Logging_Server_Ex version in Chapter 7 of C++NPv1. Both servers run in a single thread of control in a single process, handling log records from multiple clients reactively. The main difference is that the version described here reuses the event detection, demultiplexing, and dispatching capabilities from the ACE Reactor framework. This refactoring removes the following application-independent code from the original Reactive_Logging_Server_Ex implementation:

Handle-to-object mapping. Two data structures in Reactive_Logging_Server_Ex performed the following mappings:

  1. An ACE_Handle_Set contained all the socket handles for connected clients and the ACE_SOCK_Acceptor handle for accepting new client connections.

  2. An ACE_Hash_Map_Manager mapped socket handles to loosely associated ACE_FILE_IO objects, which write log records to the appropriate output file.

Since the ACE Reactor framework now provides and maintains the code that manages handle-to-object mappings, the resulting application is smaller, faster, and makes much better use of the reusable software artifacts available in ACE.

Event detection, demultiplexing, and dispatching. To detect both connection and data events, the Reactive_Logging_Server_Ex server used the ACE::select() synchronous event demultiplexer method. This design had the following drawbacks, however:

  1. It worked only as long as the OS provided select() .

  2. It worked well only as long as the OS implemented select() efficiently.

  3. The code that called ACE::select() and processed the resulting handle sets was hard to reuse for other applications.

The new logging server reuses the ACE Reactor framework's ability to portably and efficiently detect, demultiplex, and dispatch I/O- and time-based events. This framework also allows the application to integrate signal handling if the need arises.

With the application-independent code described above removed, an important maintenance problem with the original code is revealed. Although the code worked correctly, the Reactive_Logging_Server_Ex, Logging_Handler, handle-to-ACE_FILE_IO map, and ACE_FILE_IO objects were loosely cohesive and tightly coupled. Changing the event handling mechanism therefore also required changes to all of the application-specific event handling code, which illustrates the negative effects of tangling application-specific code with (what should be) application-independent code. The result was a design that was hard to extend and maintain, which would have added considerable cost to the logging server as it evolved over time.

In contrast, the Logging_Event_Handler class (page 56) shows how the new reactive logging server separates concerns more effectively by combining an ACE_FILE_IO object with a Logging_Handler and registering the socket's handle with the reactor. This example shows the following steps that developers can use to integrate applications with the ACE Reactor framework:

  1. Create event handlers by inheriting from the ACE_Event_Handler base class and overriding its virtual methods to handle various types of events.

  2. Register event handlers with an instance of ACE_Reactor .

  3. Run an event loop that demultiplexes and dispatches events to the event handlers.

Figure 3.7 illustrates the reactive logging server architecture that builds on our earlier implementations from C++NPv1. This architecture enhances reuse and extensibility by decoupling the following aspects of the logging server:

Figure 3.7. Architecture of the ACE_Reactor Logging Server

  • ACE Reactor framework classes. These classes encapsulate the lower-level OS mechanisms that perform event detection and the demultiplexing and dispatching of events to event handler hook methods.

  • ACE Socket wrapper facade classes. The ACE_SOCK_Acceptor and ACE_SOCK_Stream classes presented in Chapter 3 of C++NPv1 are used in this version of the logging server. As in previous versions, the ACE_SOCK_Acceptor accepts network connections from remote clients and initializes ACE_SOCK_Stream objects. An initialized ACE_SOCK_Stream object then processes data exchanged with its connected client.

  • Logging event handler classes. These classes implement the capabilities specific to the networked logging service. As shown in the Example portion of Section 3.4, the Logging_Acceptor_Ex factory uses an ACE_SOCK_Acceptor to accept client connections. Likewise, the Logging_Event_Handler_Ex uses an ACE_SOCK_Stream to receive log records from connected clients. Both Logging_* classes are descendants of ACE_Event_Handler , so their handle_input() methods can receive callbacks from an ACE_Reactor .

Our implementation begins in a header file called Reactor_Logging_Server.h , which includes several header files that provide the various capabilities we'll use in our ACE_Reactor -based logging server.

 #include "ace/ACE.h"  #include "ace/Reactor.h" 

We next define the Reactor_Logging_Server class, which forms the basis for many subsequent logging server examples in this book:

 template <class ACCEPTOR>  class Reactor_Logging_Server : public ACCEPTOR {  public:    Reactor_Logging_Server (int argc, char *argv[], ACE_Reactor *);  }; 

This class inherits from its ACCEPTOR template parameter. To vary certain aspects of Reactor_Logging_Server 's connection establishment and logging behavior, subsequent examples will instantiate it with various types of acceptors, such as the Logging_Acceptor_Ex (pages 67, 96, and 101), the Logging_Acceptor_WFMO (page 113), the TP_Logging_Acceptor (page 193), and the TPC_Logging_Acceptor (page 227). Reactor_Logging_Server also contains a pointer to the ACE_Reactor that it uses to detect, demultiplex, and dispatch I/O- and time-based events to their event handlers.

Reactor_Logging_Server differs from the Logging_Server class defined in Chapter 4 of C++NPv1 since Reactor_Logging_Server uses the ACE_Reactor::handle_events() method to process events via callbacks to instances of Logging_Acceptor and Logging_Event_Handler. Thus, the handle_connections(), handle_data(), and wait_for_multiple_events() hook methods used in the reactive logging servers from C++NPv1 are no longer needed.

The Reactor_Logging_Server template implementation resides in Reactor_Logging_Server_T.cpp . Its constructor performs the steps necessary to initialize the reactive logging server:

 1 template <class ACCEPTOR>   2 Reactor_Logging_Server<ACCEPTOR>::Reactor_Logging_Server   3   (int argc, char *argv[], ACE_Reactor *reactor)   4   : ACCEPTOR (reactor) {   5   u_short logger_port = argc > 1 ? atoi (argv[1]) : 0;   6   ACE_TYPENAME ACCEPTOR::PEER_ADDR server_addr;   7   int result;   8   9   if (logger_port != 0)  10     result = server_addr.set (logger_port, INADDR_ANY);  11   else  12     result = server_addr.set ("ace_logger", INADDR_ANY);  13   if (result != -1)  14     result = ACCEPTOR::open (server_addr);  15   if (result == -1) reactor->end_reactor_event_loop ();  16 } 

Line 5 Set the port number that we'll use to listen for client connections.

Line 6 Use the PEER_ADDR trait class that's part of the ACCEPTOR template parameter to define the type of server_addr. The use of traits simplifies the wholesale replacement of IPC classes and their associated addressing classes. Sidebar 19 explains the meaning of the ACE_TYPENAME macro.

Lines 9-12 Set the local server address, server_addr.

Line 14 Pass server_addr to ACCEPTOR::open() to initialize the passive-mode endpoint and register this object with the reactor for ACCEPT events.

Line 15 If an error occurred, instruct the reactor to shut down its event loop so the main() function doesn't hang.

Sidebar 19: The C++ typename Keyword and ACE_TYPENAME Macro

The C++ typename keyword tells the compiler that a symbol (such as PEER_ADDR) is a type. This keyword is necessary when the qualifier is a template type argument (such as ACCEPTOR) because the compiler won't have a concrete class to examine until templates are instantiated, which could be much later in the build process. Since typename is a relatively recent addition to C++, ACE provides a portable way to specify it. The ACE_TYPENAME macro expands to the typename keyword on C++ compilers that support it and to nothing on compilers that don't.

We conclude with the logging server's main() function, which resides in Reactor_Logging_Server.cpp :

  1 typedef Reactor_Logging_Server<Logging_Acceptor_Ex>
  2         Server_Logging_Daemon;
  3
  4 int main (int argc, char *argv[]) {
  5   ACE_Reactor reactor;
  6   Server_Logging_Daemon *server = 0;
  7   ACE_NEW_RETURN (server,
  8                   Server_Logging_Daemon (argc, argv, &reactor),
  9                   1);
 10
 11   if (reactor.run_reactor_event_loop () == -1)
 12     ACE_ERROR_RETURN ((LM_ERROR, "%p\n",
 13                        "run_reactor_event_loop()"), 1);
 14   return 0;
 15 }

Lines 1-2 Instantiate the Reactor_Logging_Server template with the Logging_Acceptor_Ex class (page 67) to create the Server_Logging_Daemon typedef.

Lines 6-9 Dynamically allocate a Server_Logging_Daemon object.

Lines 11-13 Use the local instance of ACE_Reactor to drive all subsequent connection and data event processing until an error occurs. The ACE_ERROR_RETURN macro and other ACE debugging macros are described in Sidebar 10 on page 93 of C++NPv1.

Line 15 When the local reactor's destructor runs at the end of main(), it calls the Logging_Acceptor::handle_close() method (page 58) to delete the dynamically allocated Server_Logging_Daemon object. The destructor also calls Logging_Event_Handler_Ex::handle_close() (page 70) for each registered event handler to clean up the handler and shut the server down gracefully.

Figure 3.8 illustrates the interactions in the example above. Since all event detection, demultiplexing, and dispatching is handled by the ACE Reactor framework, the reactive logging server implementation is much shorter than the equivalent ones in C++NPv1. In fact, the work involved in moving from the C++NPv1 reactive servers to the current server largely involves deleting code that's no longer needed, such as handle set and handle set management, handle-to- ACE_FILE_IO mapping, synchronous event demultiplexing, and event dispatching. The remaining application-defined functionality is isolated in classes inherited from the ACE Reactor framework.
