Motivation

Although C++ global variables are sometimes useful, their potential for harmful side-effects and undefined initialization semantics [LGS00] can cause subtle problems. These problems are exacerbated in multithreaded applications. In particular, when multiple threads access unsynchronized global variables simultaneously, information can be lost or the wrong information can be used. Consider the C errno variable, for example. Serializing access to a single, per-process errno is pointless, since it can be changed concurrently, for example, between the time it's set in a system function and the time it's tested in application code. The synchronization mechanisms outlined in Section 6.4 therefore aren't relevant. Each thread needs its own copy of errno, which is what OS thread-specific storage (TSS) mechanisms provide. By using the TSS mechanism, no thread can see another thread's instance of errno. Unfortunately, the native TSS C APIs provided by operating systems have the following problems:
Addressing these problems in each application is tedious and error prone, which is why ACE provides the ACE_TSS class.

Class Capabilities

The ACE_TSS class implements the Thread-Specific Storage pattern, which encapsulates and enhances the native OS TSS APIs [SSRB00]. This class provides the following capabilities:
The interface for the ACE_TSS class is shown in Figure 9.4, and its key methods are shown below:
Figure 9.4. The ACE_TSS Class Diagram
The ACE_TSS template is a proxy that transforms ordinary C++ classes into type-safe classes whose instances reside in thread-specific storage. It combines the operator->() delegation method with other C++ features, such as templates, inlining, and overloading. In addition, it uses common synchronization patterns and idioms [SSRB00], such as Double-Checked Locking Optimization and Scoped Locking, as shown in the implementation of operator->() below:

```cpp
template <class TYPE> TYPE *
ACE_TSS<TYPE>::operator-> ()
{
  if (once_ == 0) {
    // Ensure that we're serialized.
    ACE_GUARD_RETURN (ACE_Thread_Mutex, guard, keylock_, 0);
    if (once_ == 0) {
      ACE_OS::thr_keycreate (&key_, &ACE_TSS<TYPE>::cleanup);
      once_ = 1;
    }
  }

  TYPE *ts_obj = 0;

  // Initialize <ts_obj> from thread-specific storage.
  ACE_OS::thr_getspecific (key_, (void **) &ts_obj);

  // Check if this method's been called in this thread.
  if (ts_obj == 0) {
    // Allocate memory off the heap and store it in a pointer.
    ts_obj = new TYPE;

    // Store the dynamically allocated pointer in TSS.
    ACE_OS::thr_setspecific (key_, ts_obj);
  }
  return ts_obj;
}
```

More issues to consider when implementing a C++ thread-specific storage proxy are shown in the Thread-Specific Storage pattern in [SSRB00].

Example

This example illustrates how ACE_TSS can be applied to our thread-per-connection logging server example from Section 9.2. In this implementation, each thread gets its own request count that resides in thread-specific storage. This design allows us to alleviate race conditions on the request count without requiring a mutex. It also allows us to use thread-specific storage without incurring all the accidental complexity associated with the error-prone and nonportable native C APIs.
We start by defining a simple class that keeps track of request counts:

```cpp
class Request_Count {
public:
  Request_Count (): count_ (0) {}
  void increment () { ++count_; }
  int value () const { return count_; }
private:
  int count_;
};
```

Note how Request_Count knows nothing about thread-specific storage. We then use this class as the type parameter to the ACE_TSS template, as follows:

```cpp
static ACE_TSS<Request_Count> request_count;
```

Each thread now has a separate copy of Request_Count, which is accessed via request_count. Although this object appears to be "logically global," that is, it's accessed just like any other object at file scope, its state is "physically local" to the thread in which it's used. We show how simple it is to integrate request_count into the handle_data() method:

```cpp
virtual int handle_data (ACE_SOCK_Stream *) {
  while (logging_handler_.log_record () != -1)
    // Keep track of number of requests.
    request_count->increment ();

  ACE_DEBUG ((LM_DEBUG, "request_count = %d\n",
              request_count->value ()));
  return 0;
}
```

By using the ACE_TSS wrapper facade, we needn't lock the increment() and value() calls, which eliminates contention and race conditions.