1.5 Types of Concurrent Programming


Before continuing to describe component programming, it is necessary to clear up some misconceptions about concurrent programming. Programmers often believe that concurrent programming involves just one type of problem. For example, some programmers believe that all concurrent processing involves speeding up very large simulations, such as simulations of weather or of seismic activity in the Earth's crust. Other programmers believe that concurrent programming addresses only problems that arise when an operating system runs on a computer. Still others believe that concurrent programming is required only in distributed systems. Because these programmers approach the problems of concurrency with preconceived biases about the type of problem they want to solve, they do not understand the methodologies for concurrency that address problems other than the ones in which they are interested.

There is a wide variety of reasons to use concurrency, and each reason for implementing concurrency in a program results in a program that is structured differently. There is no one "best" way to implement concurrency and synchronization; the right approach depends on the type of problem being solved. Below is a list of some of the reasons why concurrent programming might be used. Each type of concurrent program is accompanied by a description of how the type of problem to be solved affects the type of solution that is developed. This text is largely interested in using concurrent programming for soft real time, distributed, and modeling purposes. While the techniques used do apply to other systems, those problems usually have more appropriate solutions of their own. Also, note that a program is seldom any one type of concurrent program; often it will exhibit characteristics of many of the program types:

  • Incidental concurrency. Incidental concurrency occurs when concurrency exists but the asynchronous activities do not interact with each other. An extreme example would be a stand-alone computer in Washington running Word and a stand-alone computer in San Francisco running Excel. Incidental concurrency also occurs on operating systems such as UNIX, where multiple users are using a single computer but each user's program does not interact with any other program. So, while concurrency exists and must be taken into account in the operating system, from the point of view of the user's program no concurrent behavior must be considered. Incidental concurrency is really not very interesting and is not considered further in this book.

  • Resource utilization. Resource utilization, which is often associated with operating systems, occurs when a program is built around shared resources. For example, concurrency was implemented in the first operating systems to keep the expensive CPU occupied doing useful work on one program while another performed Input/Output (IO). This same principle applies in a PC, where some parts of a program can be designed around special-purpose hardware, such as a graphics or IO processor, which is really a separate CPU running asynchronously to the main processor. This type of concurrency is often handled by the compiler or the operating system and is normally transparent to the programmer. When doing this type of concurrent programming, the programmer writes the program around the special resources that are present and shared. This type of concurrent programming is normally covered in books on operating systems and is not considered further in this book.

  • Distributed programming. In a distributed program, not all of the resources required by a program exist on a single computer but instead reside somewhere on a network of computers. To take advantage of these distributed resources, programs are designed around locating and accessing the resources. This can involve special methods and protocols to find the resources, such as with RMI using rmiregistry, and even writing entire protocols, as with socket-level protocols.

  • Parallel computing. Parallel computing is used when a program requires a large amount of real (clock) time, such as weather prediction models. These models can be calculated more rapidly by using a number of processors to work simultaneously on the problem. Parallel programs are designed around finding sections of the program that could be efficiently calculated in parallel. This is often accomplished by using special compilers that can take language structures such as loops and organize them so that they can be run on separate processors in parallel. Some systems add extensions to languages to help the compiler make these decisions.

  • Reactive programming. Reactive programs are programs for which some part of the program reacts to an external stimulus generated in another program or process. The two types of reactive programs are hard real time and soft real time.

    • Hard real time. Hard real time programs must meet specific timing requirements. For example, the computer on a rocket must guarantee that course adjustments are made every 1/1000th of a second; otherwise, the rocket will veer off course. Hard real time programs are designed around meeting these timing constraints and are often designed using timing diagrams to ensure that events are processed in the allotted time. The programs are then implemented in low-level languages, such as Assembly or C, which allow control over every clock cycle used by the computer.

    • Soft real time. Soft real time programs process the information in real time, as opposed to a batch mode, where the information is updated once or twice a day. These programs use current data but do not meet hard deadlines. One example is a Web-based ordering system that always has the most recent data but could take several seconds to provide it to the client. These systems are often designed around the services they provide, where the services are sometimes implemented as transactions. Objects that are components often process these transactions.

  • Availability. For some programs, such as e-commerce Web sites, it is important to be accessible 24 hours a day, 7 days a week. Concurrency can be used to replicate the critical parts of the program and run them on multiple independent computers, which guarantees that the program will continue to be available even if one of the computers fails. These programs are designed so that critical pieces can be replicated and distributed to multiple processors. These systems are often soft real time programs with special capabilities to ensure their availability; thus, they use components in their design.

  • Ease of implementation. Using concurrent programming can make it easier to implement a program. This is true of most GUI programs, where concurrency with components makes it easier to implement buttons, TextFields, etc. Many of the objects used in these systems are designed as components.

  • System modeling. Sometimes concurrent programming is used because it better supports the abstract model of the system. These programs are often simulation programs modeled using objects, where some of the objects are active and some are passive. These programs are designed around making the abstract program model as close to the real-world problem as possible. Many of the objects that are modeled in these systems are components.
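The distributed programming entry above mentions writing entire protocols at the socket level. The following self-contained sketch (the one-line "echo" protocol and all class and method names are invented for illustration) starts a tiny server on a background thread and has a client exchange one message with it:

```java
import java.io.*;
import java.net.*;

public class EchoDemo {
    // Start a tiny echo server, connect to it as a client, and exchange one line.
    public static String roundTrip(String msg) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {   // port 0: pick any free port
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine());             // echo the request back
                } catch (IOException ignored) { }
            });
            serverThread.start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(msg);                           // send the request
                return in.readLine();                       // read the echoed reply
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("hello"));   // hello
    }
}
```

A real distributed program would, of course, place the server and the client in separate processes on separate machines; running both in one program simply keeps the sketch self-contained.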
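The parallel computing entry describes compilers that split loops across processors. A hand-written version of that transformation, under the simplifying assumption that the loop iterations are independent (class and method names are illustrative), might look like:

```java
public class ParallelSum {
    // Split a loop's range across threads by hand — the kind of
    // transformation a parallelizing compiler performs automatically.
    public static long sum(int n, int nThreads) throws InterruptedException {
        long[] partial = new long[nThreads];       // one slot per worker
        Thread[] workers = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                // each worker sums a strided slice of 1..n
                for (int i = id + 1; i <= n; i += nThreads) partial[id] += i;
            });
            workers[t].start();
        }
        long total = 0;
        for (int t = 0; t < nThreads; t++) {
            workers[t].join();                     // wait for the worker to finish
            total += partial[t];                   // then combine its partial result
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sum(1000, 4));   // 500500
    }
}
```

The iterations here are independent, so no synchronization is needed beyond the final join; loops whose iterations share data are much harder to parallelize, which is exactly what makes such compilers difficult to build.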
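The availability entry can be sketched as a simple failover loop: try each replica in turn and return the first answer that succeeds. The Supplier-based replicas below are a stand-in for real replicated servers and are purely illustrative:

```java
import java.util.List;
import java.util.function.Supplier;

public class Failover {
    // Try each replica in turn; return the first answer that succeeds.
    public static <T> T firstAvailable(List<Supplier<T>> replicas) {
        for (Supplier<T> replica : replicas) {
            try {
                return replica.get();            // this replica is up — use its answer
            } catch (RuntimeException e) {
                // replica down — fall through and try the next one
            }
        }
        throw new IllegalStateException("all replicas failed");
    }

    public static void main(String[] args) {
        List<Supplier<String>> replicas = List.of(
            () -> { throw new RuntimeException("primary down"); },  // failed computer
            () -> "served by backup");                              // healthy replica
        System.out.println(firstAvailable(replicas));   // served by backup
    }
}
```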
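The system modeling entry distinguishes active objects from passive ones. In one common Java idiom (the class names here are invented for illustration), an active object owns its own thread of control while a passive object merely holds state:

```java
public class ClockModel {
    // A passive object: it only holds state and is acted upon by others.
    static class Counter {
        private int ticks;
        synchronized void tick() { ticks++; }
        synchronized int ticks() { return ticks; }
    }

    // An active object: it has its own thread of control and drives the model.
    static class Ticker extends Thread {
        private final Counter counter;
        private final int beats;
        Ticker(Counter counter, int beats) { this.counter = counter; this.beats = beats; }
        @Override public void run() {
            for (int i = 0; i < beats; i++) counter.tick();
        }
    }

    // Run the model: one active Ticker driving one passive Counter.
    public static int runModel(int beats) throws InterruptedException {
        Counter counter = new Counter();
        Ticker ticker = new Ticker(counter, beats);
        ticker.start();
        ticker.join();              // wait for the active object to finish
        return counter.ticks();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runModel(5));   // 5
    }
}
```

Structuring the program this way keeps the code close to the real-world model: the thing that acts in the world (the ticker) acts in the program, and the thing that is acted upon (the counter) is only ever acted upon.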






Creating Components. Object Oriented, Concurrent, and Distributed Computing in Java