8.4 Concurrency and Resource Design


Real-time systems typically have multiple threads of control executing simultaneously. A thread can be defined as a set of actions that execute sequentially, independently of the execution of actions in other threads. Actions are statements that execute at the same priority, in a particular sequence, or that perform some cohesive function. These statements can belong to many different objects. The entirety of a thread is also known as a task, and multiple objects typically participate within a single task. Commonly, a distinction is made between heavyweight and lightweight threads. Heavyweight threads use different data address spaces and must resort to expensive messaging to communicate data among themselves. Such threads have relatively strong encapsulation and protection from other threads. Lightweight threads coexist within an enclosing data address space; they provide faster inter-task communication via this shared global space, but offer weaker encapsulation. Some authors use the terms thread or task to refer to lightweight threads and process to refer to heavyweight threads. We use thread, task, and process as synonyms in this book, with the understanding that if these distinctions are important, the «active» object would be more specifically stereotyped as «process», «task», or «thread».
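The distinction can be made concrete with a short sketch (Python here purely for illustration; the book's models are language-neutral). Lightweight threads share one address space, so they can exchange data through an ordinary variable guarded by a lock, rather than resorting to messaging:

```python
import threading

counter = 0                    # lives in the shared data address space
lock = threading.Lock()        # protects the shared data

def worker():
    global counter
    for _ in range(1000):
        with lock:             # lightweight synchronization, not messaging
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: all four threads updated the same variable directly
```

A heavyweight process could not touch `counter` directly; it would have to marshal the value into a message and send it across address spaces, which is exactly the overhead the passage describes.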

8.4.1 Representing Threads

The UML can show concurrency models in several ways. The primary way is to stereotype classes as «active»; other ways include orthogonal regions (and-states) in statecharts, forks and joins in activity diagrams, and the par operator in UML 2.0 sequence diagrams.

Class and object diagrams can use the stereotype «active» or the active object stereotype icon to represent threads. By including only classes and objects with this stereotype, we can clearly show the task structure. A task diagram is nothing more than a class diagram showing only active objects, the classes and objects associated with concurrency management such as semaphores and queues, and the relations among these classes and objects.

8.4.2 System Task Diagram

Class and object models are fundamentally concurrent. Objects are themselves inherently concurrent and it is conceivable that each object could execute in its own thread.[13] During the course of architectural design, the objects are aligned into a smaller set of concurrent threads solely for efficiency reasons. Thus the partitioning of a system into threads is always a design decision.

[13] This is, after all, how biological neural systems work. Neural structures are massively parallel systems that operate independently but collaborate by sending molecular messages (in the form of neurotransmitters) across synapses (the neural analog of interfaces). See [6].

In UML, each thread is rooted in a single active object. The active object is a structured class that aggregates the objects participating within the thread. It has the general responsibility to coordinate internal execution by the dispatching of messages to its constituent parts and providing information to the underlying operating system so that the latter can schedule the thread. By only showing the classes with the «active» stereotype on a single diagram, you can create a system task diagram.
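As a rough sketch of this responsibility (all class and method names below are illustrative, not from the book), an active object can be modeled as owning an event queue and a thread that dispatches each incoming message to the appropriate constituent part:

```python
import queue
import threading

# Illustrative active object: it owns a thread and an event queue, and
# dispatches incoming messages to its constituent (passive) part objects.
class ActiveObject:
    def __init__(self, parts):
        self.parts = parts                 # constituent part objects
        self.events = queue.Queue()        # mailbox for this thread
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def post(self, target, message):
        self.events.put((target, message))

    def _run(self):
        while True:
            target, message = self.events.get()
            if target is None:             # shutdown sentinel
                break
            self.parts[target].handle(message)   # dispatch to the part

class Part:
    def __init__(self):
        self.received = []
    def handle(self, message):
        self.received.append(message)

parts = {"door": Part(), "motor": Part()}
ao = ActiveObject(parts)
ao.post("door", "open")
ao.post("motor", "start")
ao.post(None, None)                        # tell the thread to stop
ao.thread.join()
print(parts["door"].received, parts["motor"].received)  # ['open'] ['start']
```

The passive parts never see the operating system; the active object is the single point that the scheduler knows about, which is exactly the coordination role the passage ascribes to it.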

The appropriate packaging of objects into nodes and threads is vital for system performance. The relationships among the threads are fundamental architectural decisions that have great impact on the performance and hardware requirements of the system. Besides just identifying the threads and their relationships to other threads, the characteristics of the messages must themselves be defined. These characteristics include

  • Message arrival patterns and frequencies

  • Event response deadlines

  • Synchronization protocols for inter-task communication

  • "Hardness" of deadlines

Answering these questions is at the very heart of multithreaded systems design.
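These characteristics can be captured per event as a simple record for later schedulability analysis; a minimal sketch, with field names and values that are assumptions rather than the book's notation:

```python
from dataclasses import dataclass

# Illustrative record of the message characteristics listed above;
# field names and example values are invented for this sketch.
@dataclass
class EventCharacteristics:
    name: str
    arrival_pattern: str    # "periodic" or "episodic"
    period_ms: float        # period, or minimum interarrival time
    deadline_ms: float      # event response deadline
    hardness: str           # "hard", "firm", or "soft"
    sync_protocol: str      # e.g., "queued", "rendezvous", "shared memory"

ecg_wave = EventCharacteristics("ECG waveform sample", "periodic",
                                4.0, 4.0, "hard", "queued")
print(ecg_wave.deadline_ms)   # 4.0
```

Collecting one such record per event group gives the raw inputs needed for the priority-assignment and schedulability work discussed later in the chapter.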

The greatest advantage of a task diagram is that the entire set of threads for the system can be shown on a single diagram, albeit at a high conceptual level. It is easy to trace back from the diagram into the requirements specification and vice versa. Elaborating each thread symbol on the task diagram into either a lightweight task diagram or an object diagram means that the threads can be efficiently decomposed and related to the class, object, and behavioral models.

Figure 8-19 shows a task diagram for an elevator model; the primitive objects, the ones that do the actual management of the elevator system, are subsumed within the «active» classes shown. The diagram shows a number of useful things. First, notice that the structured classes for the various subsystems (Floor, Elevator, Shaft, Central Station, and Gnome) contain the task threads, and the task threads internally contain the primitive objects. The tasks are shown with a heavy border. Some of the tasks associate with semaphores and queues, which can be identified by their icons (or could be identified with textual stereotypes).

Figure 8-19. Elevator Task Diagram

graphics/08fig19.gif

Within each processor, objects are busy collaborating to achieve the goals of that subsystem. However, on the system task diagram, only the threads and concurrency management classes are shown. Remember that each thread is rooted in a single «active» composite object that receives the events for that thread and dispatches them to the appropriate object within the thread.

The associations among the threads are shown using conventional association notation. These associations indicate that the threads must communicate in some fashion to pass messages.

8.4.3 Concurrent State Diagrams

Rumbaugh [2] has suggested a means by which concurrent threads can be diagrammed using statecharts. He notes that concurrency within objects generally arises through aggregation; that is, a composite object is composed of component objects, some of which may execute in separate threads. In this case, a single state of the composite object may be composed of multiple states of these components.

«Active» objects respond to events and dispatch them to their aggregate parts; this process can be modeled as a finite state machine and forms one orthogonal component of the active object's behavior. The other orthogonal component is due to the thread itself having a number of states. Since the active object represents the thread characteristics to the system, it is very natural to make this an orthogonal component of the active object.

Figure 8-20 shows the two orthogonal components of a typical «active» object class. The dashed line separates the orthogonal components of the running superstate. Each transition in the event processing component can only take place while the «active» object is in one of the substates of the running superstate of the thread component. After all, that is the only time it actually consumes CPU cycles. On the other hand, if the running thread becomes preempted or suspended, the event processing component will resume where it left off, as indicated by the history connector.

Figure 8-20. Concurrency in Active Objects

graphics/08fig20.gif

Table 8-4 provides a brief description of the states.

Table 8-4. States of the Active Object Thread Component

  Inactive: Thread is not yet created.

  Waiting: Thread is not ready to run, but is waiting for some event to put it in the ready state.

  Ready: Thread is ready to run and is waiting to execute. It is normally stored in a priority FIFO queue.

  Running: Thread is running and chewing up CPU cycles. This superstate contains two orthogonal, concurrent components.

  Interruptible: The thread is running and may be preempted. This is a substate of the interruptibility component of the Running state.

  Atomic: The thread is running but may not be preempted; specifically, task switching has been disabled. This is a substate of the interruptibility component of the Running state.

  Blocked: Thread is waiting for a required resource to become available so that it may continue its processing.

  Waiting for Event: The thread is waiting for an event to handle. This is a substate of the event handling component of the Running state.

  Dispatching Event: The object is handling an incoming event and deciding which aggregates should process it. This is a substate of the event handling component of the Running state.

  Processing Event: The designated aggregate of the active object composite is responding to the event. This is a substate of the event handling component of the Running state.
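One way to make the thread component executable is as a guarded state machine. The sketch below flattens the Running superstate (omitting its orthogonal substates), and the transition set is one plausible reading of the states described above, not a definitive one:

```python
from enum import Enum

class ThreadState(Enum):
    INACTIVE = "inactive"
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"
    WAITING = "waiting"

# Legal transitions for the thread component; this is an interpretation
# of Table 8-4, with the Running substates flattened away.
TRANSITIONS = {
    ThreadState.INACTIVE: {ThreadState.READY},                  # create
    ThreadState.READY:    {ThreadState.RUNNING},                # schedule
    ThreadState.RUNNING:  {ThreadState.READY,                   # preempt
                           ThreadState.BLOCKED,                 # resource wait
                           ThreadState.WAITING},                # event wait
    ThreadState.BLOCKED:  {ThreadState.READY},                  # resource freed
    ThreadState.WAITING:  {ThreadState.READY},                  # event arrives
}

def step(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = ThreadState.INACTIVE
for nxt in (ThreadState.READY, ThreadState.RUNNING, ThreadState.BLOCKED):
    s = step(s, nxt)
print(s)   # ThreadState.BLOCKED
```

Attempting an illegal move, such as Ready directly to Blocked, raises an error, which is the executable analog of the statechart refusing a transition with no matching trigger.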

8.4.4 Defining Threads

During analysis, classes and objects were identified and characterized and their associations defined. In a multitasking system, the objects must be placed into threads for actual execution. This process of thread definition is twofold:

  1. Identify the threads.

  2. Populate the threads with classes and objects from the analysis and design process.

There are a number of strategies that can help you define the threads based on the external events and the system context. They fall into the general approach of grouping events in the system so that a thread handles one or more events and each event is handled by a single thread.

There are conditions under which an event may be handled by more than one thread. One event may generate other propagated events, which may be handled by other threads. For example, the appearance of waveform data may itself generate an event to signal another thread to scale the incoming data asynchronously. Occasionally, events may be multicast to more than one thread. This may happen when a number of threads are waiting on a shared resource or are waiting for a common event that permits them all to move forward independently.
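Multicast delivery of this kind is often implemented by posting a copy of the event to each interested thread's input queue; a minimal illustrative sketch:

```python
import queue

# Sketch: multicast one event to several threads by posting a copy of it
# to each thread's input queue.
def multicast(event, thread_queues):
    for q in thread_queues:
        q.put(event)

display_q, scaling_q = queue.Queue(), queue.Queue()
multicast("waveform-data-ready", [display_q, scaling_q])
print(display_q.get(), scaling_q.get())  # both threads receive the event
```

Here the display thread and the asynchronous scaling thread each see their own copy, so both can move forward independently, as the passage describes.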

8.4.5 Identifying Threads

Internal and external events can be grouped in a variety of ways into threads. The following are some common event grouping strategies.

  • Single Event Groups: In a simple system, it may be possible to create a separate thread for each external and internal event. This is usually not feasible in complex systems with dozens or even hundreds of possible events or when thread switch time is significant relative to the event response timing.

  • Sequential Processing: When it is clear that a series of steps must be performed in a sequential fashion, they may be grouped within a single thread.

  • Event Source: This strategy groups events from a common source. For example, all the events related to ECG numerics may be grouped into one thread (such as HR Available, ECG Alarms, etc.), all the noninvasive blood pressure (NIBP) data in another, the ventilator data in another, the anesthetic agent in another, and the gas mixing data in yet another. In an automobile, sources of events might be the ignition, braking, and engine control systems. In systems with clearly defined subsystems producing events that have roughly the same period, this may be the simplest approach.

  • Interface Device (Port): This grouping strategy encapsulates control of a specific interface within a single thread. For example, the (periodic) SDLC data can be handled in one thread, the (episodic) RS232 data to the external models by another, and the (episodic) user buttons and knobs by another. This strategy is a specialization of the event source grouping strategy.

  • Related Information: Consider grouping all waveforms to be handled by a single thread, and all measured numeric parameters within another thread. Or all information related to airfoil control surfaces in each wing and tail section might be manipulated by separate threads. This grouping may be appropriate when related data is used together in the user problem domain. Another name for this grouping is functional cohesion.

  • Arrival Pattern: If data arrives at a given rate, a single periodic thread could handle receiving all the relevant data and dispatching it to different objects as necessary. Aperiodic events might be handled by a single interrupt handler and similarly dispatch control to appropriate objects. Generally, this grouping may be most useful with internal events, such as timer interrupts, or when the periods of events naturally cluster around a small set of periods. Note that this is the primary strategy for identifying threads that have deadlines; use of other policies with time-constrained event responses can lead to priority inversion unless the designers are especially careful.

  • Target Object/Computationally Intense Processing: One of the purposes of rendezvous objects is to encapsulate and provide access to data. As such, they are targets for events, both to insert and remove data. A waveform queue object server might have its own thread for background scaling and manipulation, while at the same time participating in threads depositing data within the queue object and removing data for display.

  • Purpose: Alarms serve one purpose: to notify the system user of anomalies, so that he or she can take corrective action or vacate the premises, whichever seems more appropriate. This might form one event group. Safety checks within a watchdog thread, such as checking for stack overflow or code corruption, might form another. This purpose might map well to a use case.

  • Safety Concerns: The system hazard analysis may suggest threads. One common rule in safety-critical systems is to separate monitoring from actuation. In terms of thread identification, this means that a thread that controls a safety-relevant process should be checked by an independent thread. From a safety perspective, it is preferable to run safety checks on a separate processor, so that common-mode hardware and software faults do not affect both the primary and the safety processing simultaneously.

During concurrency design, you must add events to groups where appropriate so that each event is represented in at least one group. Any events remaining after the initial grouping can each be considered independently. As mentioned earlier, it is recommended that thread actions with hard deadlines use the arrival-pattern strategy to ensure a schedulable set of threads. Create a task diagram in which the processing of each group is represented by a separate thread. Most events will only occur within a single thread, but sometimes events must be dispatched to multiple threads.

Frequently, one or more of these groupings will emerge as the primary decomposition strategy of the event space, but it is also common to mix grouping strategies. When the grouping seems complete and stable, you have identified an initial set of threads that handle all events in your system. As the product development evolves, events may be added to or removed from groups, new groups may suggest themselves, or alternative grouping strategies may present themselves. This will lead the astute designer to alternative designs worth consideration.
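As a small illustration of the arrival-pattern strategy (the event names and periods below are invented for the sketch), events can be clustered by period, yielding one periodic thread per cluster:

```python
from collections import defaultdict

# Sketch of the arrival-pattern grouping strategy: cluster events whose
# periods coincide, so each cluster can be served by one periodic thread.
events = [("HR available", 1000), ("ECG alarm check", 1000),
          ("waveform sample", 4), ("NIBP reading", 30000),
          ("display refresh", 4)]

groups = defaultdict(list)
for name, period_ms in events:
    groups[period_ms].append(name)

for period, names in sorted(groups.items()):
    print(period, names)
# 4 ['waveform sample', 'display refresh']  ... one thread per period
```

Five events collapse into three periodic threads here; in practice one would also check that events sharing a thread have compatible deadlines before accepting the grouping.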

8.4.6 Assigning Objects to Threads

Once you have identified a good set of threads, you may start populating the groups with objects. Note that I said "objects" and not "classes." Objects are specific instances of classes that may appear in different threads or as an interface between threads. There are classes that create only a single instance in an application (singletons), and there are classes that instantiate to multiple objects residing within a single thread, but generally, classes instantiate a number of objects that may appear in any number of threads. For example, there may be queues of threads, queues of waveform data, queues of numeric data, queues of network messages, command queues, error queues, alarm queues, and so on. These might appear in a great many threads, even though they are instances of the same class (queue).

8.4.7 Defining Thread Rendezvous

So far, we have looked at what constitutes a thread, some strategies to select a set of threads, and how to populate threads with objects. The remainder of this chapter provides ways to define how the threads communicate with each other.

There are a number of strategies for inter-task communication. The simplest by far is to use the OS to send messages from one thread to another. While this approach maintains encapsulation and limits coupling among threads, it is expensive in terms of compute cycles and is relatively slow. Lightweight expeditious communication is required in many real-time systems in order for the threads to meet their performance requirements. In this chapter, we consider some methods for inter-task communication that are both lightweight and robust. [9] details the rendezvous pattern as a means of specifying arbitrarily complex rules for synchronizing tasks.

The two main reasons for thread communication are to share information and to synchronize control. The acquisition, manipulation, and display of information may occur in different threads with different periods, and may not even take place on the same processor, necessitating some means of sharing the information among these threads. Synchronization of control is also very common in real-time systems. In asynchronous threads that control physical processes, one thread's completion (such as emptying a chemical vat) may form a precondition for another process (such as adding a new volatile chemical to the vat). The thread synchronization strategy must ensure that such preconditions are satisfied.

When threads communicate, the rendezvous itself has attributes and behavior, which makes it reasonable to model it as an associative class. The important questions to ask about thread synchronization are these:

  • Are there any preconditions for the threads to communicate? A precondition is generally a data value that must be set, or some object must be in a particular state. If a precondition for thread synchronization exists, it should be checked by a guarding condition before the rendezvous is allowed to continue.

  • What should happen if the preconditions are not met, as when the collaborating thread is not available? The rendezvous can

    • Wait indefinitely until the other thread is ready (a waiting rendezvous)

    • Wait until either the required thread is ready or a specified period has elapsed (timed rendezvous)

    • Return immediately (balking rendezvous) and ignore the attempt at thread communication

    • Raise an exception and handle the thread communication failure as an error (protected rendezvous)

  • If data is to be shared via the rendezvous class, what is the relationship of the rendezvous object with the object containing the required information? Options include

    • The rendezvous object contains the information directly.

    • The rendezvous object holds a reference to the object containing the information, or a reference to an object serving as an interface for the information.

    • The rendezvous object can temporarily hold the information until it is passed to the target thread.
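These policies can be sketched on top of a shared queue: a waiting rendezvous is simply a blocking get with no timeout, and the other three are variations on it. The function names below are illustrative, not an established API:

```python
import queue

# Sketches of the rendezvous policies listed above, built on a shared queue.
def timed_rendezvous(q, timeout_s):
    """Wait for the partner thread, but give up after timeout_s."""
    try:
        return q.get(timeout=timeout_s)
    except queue.Empty:
        return None

def balking_rendezvous(q):
    """Return immediately if the partner thread is not ready."""
    try:
        return q.get_nowait()
    except queue.Empty:
        return None            # balk: ignore the communication attempt

def protected_rendezvous(q, timeout_s):
    """Treat a failed rendezvous as an error."""
    try:
        return q.get(timeout=timeout_s)
    except queue.Empty:
        raise TimeoutError("rendezvous failed: partner thread not ready")

q = queue.Queue()
print(balking_rendezvous(q))        # None: partner not ready, so balk
q.put("data")
print(timed_rendezvous(q, 0.1))     # data
```

The choice among these is a real design decision: balking suits best-effort telemetry, a timed rendezvous suits soft deadlines, and the protected form suits cases where a missed rendezvous is a genuine fault.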

Remember that objects must ensure the integrity of their internal data. If the possibility exists that shared data can be accessed by more than a single thread at a time for writing, or for simultaneous writing and reading, then it must be protected by some mechanism, such as a mutual-exclusion semaphore, as is done in Figure 8-19. In general, synchronization objects must handle

  • Preconditions

  • Access control

  • Data access
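A minimal sketch of such protection, using a mutual-exclusion lock to guard two attributes that must change together (the class and its invariant are invented for illustration):

```python
import threading

# Two attributes that must be updated together; the mutex keeps a reader
# from seeing one attribute updated while the other is still stale.
class Setpoint:
    def __init__(self):
        self._lock = threading.Lock()
        self._low, self._high = 0, 10

    def set_range(self, low, high):
        with self._lock:               # both writes happen atomically
            self._low, self._high = low, high

    def get_range(self):
        with self._lock:               # readers get a consistent snapshot
            return self._low, self._high

sp = Setpoint()
sp.set_range(5, 15)
print(sp.get_range())   # (5, 15), never a half-updated (5, 10) or (0, 15)
```

This is precisely the multi-attribute update hazard described below: without the lock, a reader could observe the new low bound paired with the old high bound.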

8.4.8 Sharing Resources

Rendezvous objects control access to resources, and classical methods exist to handle resource usage in a multitasking environment. In the simplest case, resources can be simultaneously accessed; that is, access is nonatomic. Many devices use predetermined configuration tables burned into FLASH or EPROM memory. Since processes can only read the configuration table, many threads can access the resource simultaneously without bad effects.

Data access that involves writing requires some form of access control to ensure data integrity. Clearly, if multiple internal attributes must be simultaneously updated, another reader thread cannot be permitted to read these values while only some of them are updated.

In large collections of objects, it may be necessary to allow read accesses in one or more portions of the database even while other sections are being updated. Large airline reservation databases must function in this fashion, for example. Algorithms to control these processes are well defined and available in texts on relational and object databases.
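A classic building block for allowing concurrent reads while serializing writes is a readers-writer lock. The sketch below is the textbook reader-preference form and is illustrative only; it can starve writers under a steady stream of readers, and production systems should use a proven implementation:

```python
import threading

# Minimal readers-writer lock: many concurrent readers, exclusive writers.
class RWLock:
    def __init__(self):
        self._readers = 0
        self._lock = threading.Lock()        # protects the reader count
        self._writer = threading.Lock()      # held while anyone writes

    def acquire_read(self):
        with self._lock:
            self._readers += 1
            if self._readers == 1:
                self._writer.acquire()       # first reader locks out writers

    def release_read(self):
        with self._lock:
            self._readers -= 1
            if self._readers == 0:
                self._writer.release()       # last reader admits writers

    def acquire_write(self):
        self._writer.acquire()

    def release_write(self):
        self._writer.release()

rw = RWLock()
rw.acquire_read(); rw.acquire_read()    # two simultaneous readers are fine
rw.release_read(); rw.release_read()
rw.acquire_write()                      # now a writer has exclusive access
rw.release_write()
print("ok")
```

Database engines generalize this idea with finer-grained locking so that reads can proceed in one portion of the data while another portion is being updated, as in the airline reservation example.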

8.4.9 Assigning Priorities

Thread priority is distinct from the importance of the actions executed by the thread. Priority in a preemptive priority scheme determines the required timeliness of the response to the event or precondition. For example, in an ECG monitor, waveform threads must have a high priority to ensure that they run often enough to avoid a jerky appearance. ECG waveforms have tight timeliness requirements. On the other hand, a jerky waveform is not as important to patient outcome as sounding an alarm when the patient is at risk. An asystole alarm is activated when the monitor detects that the heart is no longer beating. Clearly, bringing this to the attention of the physician is very important, but if the alarm took an extra second to be annunciated, it would not affect patient outcome. Such an alarm is very important, but does not have a very high urgency, as compared with some other events.

In rate monotonic scheduling (RMS), the assignment of priorities is simple: the priority of each thread is inversely proportional to its period. The shorter the period, the higher the priority. The original RMS scheme assumed that the deadline is equal to the period. When this is not true, the priority should be assigned based on the deadline rather than the period. In general, RMS scheduling makes intuitive sense: threads with short deadlines must be dealt with more promptly than those with longer deadlines. It is not uncommon to find a few exceptions to the rule, however. RMS scheduling and the associated mathematics of proving schedulability are beyond the scope of this book. For a more detailed look, see [7,8].
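A minimal sketch of RMS priority assignment, together with the standard utilization bound n(2^(1/n) - 1) from the RMS literature as a sufficient (not necessary) schedulability test. The task names and timing values are invented:

```python
# Rate-monotonic priority assignment: shorter period means higher priority.
# Utilization bound test: if sum(C/T) <= n(2^(1/n) - 1), the set is
# schedulable under RMS (a sufficient condition only).
tasks = {"waveform": (1.0, 4.0),     # (execution time C, period T) in ms
         "numerics": (5.0, 50.0),
         "alarms":   (20.0, 1000.0)}

# Sort by period: index 0 gets the highest priority.
by_priority = sorted(tasks, key=lambda name: tasks[name][1])

n = len(tasks)
utilization = sum(c / t for c, t in tasks.values())
bound = n * (2 ** (1 / n) - 1)

print(by_priority)              # ['waveform', 'numerics', 'alarms']
print(utilization <= bound)     # True: 0.37 is under the 3-task bound (~0.78)
```

Note that the alarm task lands at the lowest priority despite being the most important event in the system, which is exactly the urgency-versus-importance distinction drawn above.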



Real Time UML: Advances in the UML for Real-Time Systems (3rd Edition)
ISBN: 0321160762
Year: 2003