Four Presuppositions

To those already part of the object culture, the following statements are obvious (they go without saying) and obviously necessary as prerequisites to object thinking:

  • Everything is an object.

  • Simulation of a problem domain drives object discovery and definition.

  • Objects must be composable.

  • Distributed cooperation and communication must replace hierarchical centralized control as an organizational paradigm.

For those just joining the object culture, each of these key points will need to be developed and explained. For those opposed to the object culture, these same presuppositions will be major points of contention.

One: Everything is an object.

This assertion has two important aspects. One is essentially a claim that the object concept has a kind of primal status: a single criterion against which everything else is measured. Another way of looking at this claim would be to think of an object as the equivalent of the quanta from which the universe is constructed. This aspect of everything-is-an-object suggests that any decomposition, however complicated the domain, will result in the identification of a relatively few kinds of objects, and only objects.

There will be nothing left over that is not an object. For example:

  • Relationships, which are traditionally conceived as associations among objects, modeled and implemented in a different way than an object, would themselves become just another kind of object, with their own responsibilities and capabilities.

  • Data traditionally is seen as a kind of passive something, fundamentally different from active and animated things such as procedures. Something as simple as the character D is an object, not an element of data, that exhibits behavior just as does any other object. Whatever manipulations and transformations are required of an object, even a character, are realized by that object itself instead of by some other kind of thing (a procedure) acting upon it. The commonsense notion of data is preserved because some objects have as their primary, but not exclusive, responsibility the representation of some bit of information to human observers.

  • Procedures as a separate kind of thing are also subsumed as ordinary objects. We can think of two kinds of procedure: a script that allows a group of objects to interact in a prescribed fashion and the vital force that actually animates the object and enables it to exhibit its behaviors. A script is nothing more than an organized collection of messages, and both the collection and the messages are nothing more than ordinary objects. The vital force is nothing more than a flow of electrons through a set of circuits, something that is arguably apart from the conceptual understanding of an object, just as the soul is deemed to be different from but essential to the animation of a human being.

Equating, even metaphorically, a procedure to a soul will strike most readers as a bit absurd, but there is a good reason for the dramatic overstatement. It sometimes takes a shock or an absurdity to provide a mental pause of sufficient length that a new idea can penetrate old thinking habits. This is especially true when it comes to thinking about programming, wherein the metaphysical reality of two distinct things, data and procedures, is so ingrained it is difficult to transcend. So difficult, in fact, that most of those attempting object development fail to recognize the degree to which they continue to apply old thinking in new contexts.

Take object programming, using Smalltalk as an example merely because it claims to be a pure object language. Tutorials from Digitalk's Smalltalk manuals illustrate how programmers perpetuate the notion that some things do and others are done to.

The code in Listing One (Pascal) is a Pascal program to count unique occurrences of letters in a string entered by a user via a simple dialog box. (Pascal was designed to teach and enforce the algorithms [active procedures] plus [passive] data structures = program mode of thinking.)

Listing Two (Naive Smalltalk) shows an equivalent Smalltalk program as it might be written by a novice still steeped in the algorithms-plus-data-structures mode of programming. Both programs contain examples of explicit control and overt looping constructs. The Pascal program also has typed variables, an implicit nod to the need for control over the potential corruption of passive data.

Listing One: Pascal
start example
program frequency;
const
  size = 80;
var
  s: string[size];
  i: integer;
  c: char;
  f: array[1..26] of integer;
  k: integer;
begin
  writeln('enter line');
  readln(s);
  for i := 1 to 26 do f[i] := 0;
  for i := 1 to size do
    begin
      c := asLowerCase(s[i]);
      if isLetter(c) then
        begin
          k := ord(c) - ord('a') + 1;
          f[k] := f[k] + 1
        end
    end;
  for i := 1 to 26 do
    write(f[i], ' ')
end.
end example
 

There is some evidence of object thinking in Listing Two, mostly conventions or idioms enforced by the syntax of the Smalltalk language: the use of the Prompter object, the control loops initiated by integer objects receiving messages, discovery of the size of the string being manipulated by asking the string for its size, and so forth.

Listing Two: Naive Smalltalk
start example
| s c f k |
f := Array new: 26.
s := Prompter prompt: 'enter line' default: ''.
1 to: 26 do: [:i | f at: i put: 0].
1 to: s size do: [:i |
    c := (s at: i) asLowerCase.
    c isLetter ifTrue: [
        k := c asciiValue - $a asciiValue + 1.
        f at: k put: (f at: k) + 1.
    ].
].
^f
end example
 

A programmer better versed in object thinking (and, of course, in the class library included in the Smalltalk programming environment) starts to utilize the innate abilities of objects, including data objects (the string entered by the user and character objects), resulting in a program significantly reduced in size and complexity, as illustrated in Listing Three: Appropriate Smalltalk.

Listing Three: Appropriate Smalltalk
start example
| s f |
s := Prompter prompt: 'enter line' default: ''.
f := Bag new.
s do: [:c | c isLetter ifTrue: [f add: c asLowerCase]].
^f
end example
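For readers more at home outside Smalltalk, a rough Python analogue of Listing Three can be sketched (my illustration, not from the original text). The same object thinking carries over: each character is asked for its own behavior, and a counting collection (Counter, playing roughly the role of Smalltalk's Bag) does the tallying, so no explicit indices or counter arrays appear.

    from collections import Counter

    # Each character object answers for itself whether it is a letter and
    # what its lowercase form is; the Counter collection counts occurrences.
    s = input('enter line')
    f = Counter(c.lower() for c in s if c.isalpha())
    print(f)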
 

Types, as implied earlier, create a different kind of thing than an object. Types are similar to classes in one sense, but classes are also objects and types are not. This distinction is most evident when variables are created. If variables are typed, they are no longer just a place where an object resides. Typing a variable is a nonobject way to prevent all but a certain kind of object from taking up residence in a named location. Many people have advanced arguments in favor of typing, but none of those arguments directly challenges the everything-is-an-object premise. The arguments in favor of types are orthogonal to the arguments in favor of treating everything as an object. (See note.)

Note  

Programs are written by human beings, and human beings make mistakes. One response to this truism is to assume that the quantity of mistakes is both high and essentially constant, which mandates the existence and use of error detection and prevention mechanisms, such as typing. Such mechanisms are always constrictive, so much so that every typed language I know of allows ways to escape the confines of strict typing (casting, for example) that reintroduce the potential for the very errors typing was intended to prevent. An alternative response, one consistent with the ideas and ideals of object thinking, is to reduce the programmer's proclivity for making errors by teaching the programmer the precepts of simplicity and testing. Most of the time, data moves about a program with little chance of error arising from the wrong kind of data being in the wrong place at the wrong time. (User input is the obvious major exception.) If your thinking about objects and object communication reveals the potential for a type error, you should create a test for such errors and include an explicit check in your code at that point. ("Element of data occupying variable X, what class are you an instance of?") Since everything is an object, your element of data is an object quite capable of telling you its class. You can have all the benefits of typing without the constraints and the complications arising from escape valves such as casting.
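As a minimal sketch of the note's suggestion (mine, not the book's, with a hypothetical function name), the one point where a type error is actually plausible carries an explicit check that simply asks the object what it is:

    def record_deposit(amount):
        # The "data" is an object capable of reporting its own class, so the
        # check lives only here, where user input makes an error plausible.
        if not isinstance(amount, (int, float)):
            raise TypeError(f'expected a number, got {type(amount).__name__}')
        print(f'deposited {amount}')

A unit test would exercise the same point: record_deposit('ten') should raise, while record_deposit(10) should not.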

The everything-is-an-object principle applies to the world, the problem domain, just as it applies to design and programming. David Taylor [2] and Ivar Jacobson [3] use objects as an appropriate design element for engineering, or reengineering, businesses and organizations. (See the sidebar, David A. Taylor and Convergent Engineering.)

start sidebar
David A. Taylor and Convergent Engineering

Traditional modeling of businesses and organizations is flawed, according to Taylor, because of the lack of consistency among the set of models utilized. For example, neither a financial model nor a data model captures the cost of a bit of information, and inconsistency in design philosophy prevents the two models from collectively revealing such costs; they cannot be coordinated.

In his book Business Engineering with Object Technology (John Wiley and Sons, 1995), Taylor suggests creating a single object model incorporating everything necessary to produce traditional financial, simulation, process, data, and workflow models as views of the unifying object model. His process for accomplishing this goal is convergent engineering, and it, in turn, is based on a behavioral, CRC (Class, Responsibility, Collaborator) card approach to object discovery and specification.

In addition to describing how to conceptualize objects and classes, Taylor describes a process for discovery and specification leading to the creation of the organizational object model. That model identifies all the objects in an organization and how they interact, not just the ones that will eventually be implemented as software. He also provides a framework for business objects that illustrates the power of object thinking in generating simple but powerful objects. His framework defines four classes (Business Elements, Organizations, Processes, and Resources) and describes the behaviors of each, how those behaviors contribute to the generation of the five standard types of business model, how they can be customized, and how their interoperation can be optimized to reengineer the organization as a whole.

end sidebar
 

The programming example shown earlier illustrates one dimension of treating everything as an object. Applying the everything-is-an-object principle to the world (finding and specifying objects that are not going to be implemented in program code or software) can be illustrated by considering a Human object. Objects, as we will discuss in detail later, are defined in terms of their behaviors. A behavior can be thought of as a service to be provided to other objects upon request.

What services do humans provide other objects? For many, this is a surprising question because human beings are not implemented by developers and are therefore considered outside the scope of the system. But it is a fair question and should result in a list of responsibilities similar to the following:

  • Provides information

  • Indicates a decision

  • Provides confirmation

  • Makes a selection

The utility of having a Human object becomes evident in the simplification of interface designs. Acknowledging the existence of Human objects allows the user interface to reflect the needs of software objects for services from Human objects. This simple change of perspective, arising from application of the everything-is-an-object principle, can simplify the design of other objects typically used in user-interface construction.
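A minimal sketch (my construction, using only the responsibility list above) shows how a software object might request services from a Human object; the console prompts merely stand in for whatever interface realizes the requests:

    class Human:
        def provides_information(self, prompt):
            return input(prompt + ' ')

        def indicates_a_decision(self, question):
            return input(question + ' (y/n) ').strip().lower().startswith('y')

    class ReportGenerator:
        def __init__(self, user):
            self.user = user      # a Human object, like any other collaborator

        def run(self):
            # The software object asks the Human for services, rather than a
            # "user interface layer" manipulating a passive user.
            title = self.user.provides_information('Report title?')
            if self.user.indicates_a_decision('Include totals?'):
                print('generating', title, 'with totals')

    ReportGenerator(Human()).run()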

Additional implications of the everything-is-an-object premise will be seen throughout the remainder of this book.

Two: Simulation of a problem domain drives object discovery and definition.

Decomposition, breaking a large thing up into smaller, more easily understood things, is necessary before we can solve most of the problems we encounter as software developers. There are different approaches to decomposition: for example, find the data and data structures, find the processing steps, or find the objects. In object thinking, the key to finding the objects is simulation. The advocacy of simulation for object discovery has four primary roots:

  • The system description language philosophy behind SIMULA, as discussed in the preceding chapter.

  • Alan Kay's ideas about user illusions and objects as reflections of expectations based on an understanding of how objects behave in a domain.

  • David Parnas's arguments in favor of a design decision hiding approach to decomposition (partitioning the problem space and not, as functional decomposition approaches did, the solution space), as discussed in the preceding chapter.

  • Christopher Alexander's [4] ideas about design as the resolution of forces in a problem space and his subsequent work on the patterns that underlie the organization of a problem space and provide insights into good design. These will be elaborated later in this book in the discussion of patterns and pattern languages as an aspect of object thinking.

Proper decomposition has been seen as the critical factor in design from very early times. This quotation from Plato (which you might recall from Chapter 1) is illustrative.

[First,] perceiving and bringing together under one Idea the scattered particulars, so that one makes clear the thing which he wishes to do... [Second,] the separation of the Idea into classes, by dividing it where the natural joints are, and not trying to break any part, after the manner of a bad carver... I love these processes of division and bringing together, and if I think any other man is able to see things that can naturally be collected into one and divided into many, him I will follow as if he were a god.

Plato suggests three things: decomposition is hard (and anyone really good at it deserves adoration); any decomposition that does not lead to the discovery of things that can be recombined, composed, is counterproductive; and the separation of one thing into two should occur at natural joints. By implication, if you decompose along natural joints, and only if you do so, you end up with objects that can be recombined into other structures. Also by implication, the natural joints occur in the domain, and bad carving results if you attempt to use the wrong knife: the wrong decomposition criterion.

If you have the right knife and are skilled in its use (know how to think about objects and about decomposition), you will complete your decomposition tasks in a manner analogous to that of the Taoist butcher:

The Taoist butcher used but a single knife, without the need to sharpen it, during his entire career of many years. When asked how he accomplished this feat, he paused, then answered, "I simply cut where the meat isn't."

According to this traditional story, even meat has natural disjunctions that can be discerned by the trained eye. Of course, a Taoist butcher is like the Zen master who can slice a moving fly in half with a judicious and elegant flick of a long sword. Attaining great skill at decomposition will require training and good thinking habits. It will also require the correct knife.

Decomposition is accomplished by applying abstraction, the knife used to carve our domain into discrete objects. Abstraction requires selecting and focusing on a particular aspect of a complex thing. Variations in that aspect are then used as the criteria for differentiating one thing from another. Traditional computer scientists and software engineers have used data (attributes) or functions (algorithms) to decompose complex domains into modules that could be combined to create software applications. This parallels the formalist view, championed by Edsger Wybe Dijkstra and captured in the title of Niklaus Wirth's book Algorithms + Data Structures = Programs, that a computer program equals data structures plus algorithms.

start sidebar
Behind the Quotes ”Edsger Wybe Dijkstra

Professor Edsger Wybe Dijkstra, a noted pioneer of the science and industry of computing, died in August 2002 at his home in the Netherlands.

Dijkstra was the 1972 recipient of the ACM Turing Award. (Some consider this award the Nobel Prize for computing.) He was a member of the Netherlands Royal Academy of Arts and Sciences and a Distinguished Fellow of the British Computer Society. He received the 1989 ACM SIGCSE Award for Outstanding Contributions to Computer Science Education. The C&C Foundation of Japan recognized Dijkstra for his pioneering contributions to the establishment of the scientific basis for computer software through creative research in basic software theory, algorithm theory, structured programming, and semaphores. He is credited with the idea of building operating systems as explicitly synchronized sequential processes and with devising an amazingly efficient shortest-path algorithm. He designed and coded the first Algol 60 compiler.

Dijkstra is one of the best examples of the formalist position in computer science. He believed and argued in favor of the position that mathematical logic must be the basis for sensible computer program construction. He added the term structured programming to the language of our profession and led the fight against unconstrained GO TO statements in program code.

Some other common computer science concepts and vocabulary credited to Dijkstra include separation of concerns (which is important to object thinking), synchronization, deadly embrace, dining philosophers, weakest precondition, and the guarded command. He introduced the concept of semaphores as a means of coordinating multiprocessing. The Oxford English Dictionary credits him with introducing the words vector and stack into the computing context.

end sidebar
 

The fact that a computer program consists of data and functions does not mean that the nonsoftware world is so composed. Using either data or function as our abstraction knife is precisely the imposition of artificial criteria on the real world, with the predictable result of bad carving. Neither data nor function, used as the decomposition abstraction, leads to the discovery of natural joints. David Parnas pointed this out in his famous paper "On the Criteria To Be Used in Decomposing Systems into Modules." Parnas, like Plato, suggests that you should decompose a complex thing along naturally occurring lines, what Parnas calls design decisions.

Both data and function are poor choices for a decomposition tool. Parnas provided several reasons for rejecting function. Among them are the following:

  • Resulting program code would be complicated, far more so than necessary or desirable.

  • Complex code is difficult to understand and test.

  • Resulting code would be brittle and hard to modify when requirements changed.

  • Resulting modules would lack composability; they would not be reusable outside the context in which they were conceived and designed.

Parnas's predictions have been consistently borne out as the industry blithely ignored his advice and used functional decomposition as the primary tool in program and system design for 30 years (40 if you recognize that most object development also uses functionality as an implicit decomposition criterion). Using data as the decomposition abstraction leads to a different set of problems. Primary among these is the complexity arising from the explosion in the total number of data entities required to model a given domain and the immense costs incurred when the data model requires modification.

Note  

Some examples of the kind of explosion referred to in the preceding paragraph come from my own consulting practice. One organization designed a customer support system that identified 15 different customer classes because it was using a data-oriented approach and had to create new classes whenever one type of customer did not share the attributes of the other types. In a much larger example, a company had just completed a corporate data model (costing millions of dollars) when it decided to build a very large object system. It mandated the use of the data model for identifying objects, resulting in a class library of more than 5000 classes. This became the foundation of the system, causing enormous implementation problems. A final, midrange example was a database application for billing and invoicing wherein management demanded 1:1 replication of the existing system. This was accomplished, but it took more than a year with an offshore development team of 10 or 15 developers. My colleague and I duplicated the capabilities of the system, using object thinking, in a weekend. Management, however, was not impressed.

What criterion should be used instead of data or functions? Behavior!

Coad and Yourdon [5] claimed that people have natural modes of thought. Citing the Encyclopedia Britannica, they talk about three pervasive human methods of organization that guide our understanding of the phenomenological world: differentiation, classification, and composition. Taking advantage of those natural ways of thinking should, according to them, lead to better decomposition.

start sidebar
Behind the Quotes ”Ed Yourdon and Peter Coad

Edward Yourdon is almost ubiquitous in the world of software development, having published, consulted, and lectured for decades on topics ranging from structured development to various kinds of crises (for example, the demise of the American programmer and Y2K).

The foundation for his reputation arose from his popularization of structured approaches to analysis and design. His textbook on structured analysis and design was a standard text through several editions. In 1991, he published two books, a new edition of Structured Analysis and Design and a small book, coauthored with Peter Coad, called Object Oriented Analysis .

In the object book, Yourdon made a surprising admission: the multiple-model approach (data in the form of an entity relation diagram, process flow in the form of a data flow diagram, and implementation in the form of a program structure chart) advocated in his structured development writings (including the one simultaneously published) never, in his entire professional career, worked! In practice, it was impossible to reconcile the conceptual differences incorporated into each type of model.

Objects, he believed, would provide the means for integrating the multiple models of structured development into one. Unfortunately, he chose data as the knife to be used for object decomposition. Other ideas advanced in that book proved to be more useful for understanding objects and object thinking, especially the discussion of natural modes of thought.

Peter Coad parted ways with Yourdon after the publication of this book and developed a method and an approach to object modeling and development that was far more behavioral in its orientation. He has written several books on object development that are worthy of a place in every object professional's library.

end sidebar
 

Classification is the process of finding similarities in a number of things and creating a label to represent the group. This provides a communication and thinking shortcut: instead of constantly enumerating the individual things, we can simply speak or think of the group. Six different tubular, yellow, and edible things become bananas, while five globular, red, edible things become apples. The process of classification can continue as we note that both apples and bananas have a degree of commonality that allows us to lump them into an aggregate called fruit. In continuing the process of classification, we create a taxonomy that can eventually encompass nearly everything, the Linnaean taxonomy of living things (and its more sophisticated DNA-based successors) being one commonly known example.

Composition is simply the recognition that some complicated things consist of simpler things. Ideally, both the complicated things and the simpler things they are composed of have been identified and classified. Grady Booch suggested that all systems have a canonical form. His book Object-Oriented Design with Applications includes a diagram captioned "Canonical Form of Complex Systems," which captures both classification and composition hierarchies and the relationship that should exist between the two.
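In code, classification and composition appear as two distinct hierarchies, is-a and has-a. A small sketch of my own, using the fruit example above, makes the difference concrete:

    class Fruit:                 # classification: the label for the group
        pass

    class Banana(Fruit):         # a Banana is-a Fruit
        pass

    class Apple(Fruit):          # an Apple is-a Fruit
        pass

    class FruitBasket:           # composition: a basket has-a collection of fruit
        def __init__(self):
            self.contents = []

        def add(self, fruit):
            self.contents.append(fruit)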

Classification requires differentiation, some grounds for deciding that one thing is different from another. The differentiation grounds should reflect natural ways of thought, as do classification and composition. So how do we differentiate things in the natural world?

Consider a tabby and a tiger. What differentiates a tiger from a tabby? Why do we have separate names for them? Because one is likely to do us harm if given the chance, and the other provides companionship (albeit somewhat fickle). Each has at least one expected behavior that differentiates it from the other. It is this behavior that causes us to make the distinction.

Some (people who still believe in data, for example) would argue that tabbies and tigers are differentiated because they have different attributes. But this is not really the case. Both have eye color, number of feet, tail length, body markings, and so on. The values of those attributes are quite different (especially length of claw and body weight), but the attribute set remains relatively constant.
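The distinction can be made concrete with a sketch (my example, not the book's): both classes carry an identical attribute set, and only the differing behavior tells them apart.

    class Cat:
        def __init__(self, eye_color, tail_length, body_weight):
            self.eye_color = eye_color        # the same attribute set
            self.tail_length = tail_length    # serves both kinds of cat
            self.body_weight = body_weight

    class Tabby(Cat):
        def greet(self):
            return 'purrs and rubs against your leg'   # companionship

    class Tiger(Cat):
        def greet(self):
            return 'stalks you as prey'                # likely to do us harm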

Behavior is the key to finding the natural joints in the real world. This means, fortunately, that most of our work has already been done for us. Software developers simply must listen to domain experts. If the domain expert has at hand a name (noun) for something, there is a good chance that that something is a viable, naturally carved object.

Note  

Listening to the domain expert and jotting down nouns captures the essence of finding a natural decomposition. A full and shared domain understanding requires the negotiation of a domain language, a term used by Eric Evans in his forthcoming book Domain-Driven Design: Tackling Complexity in the Heart of Software.

Using behavior (instead of data or function) as our decomposition criterion mandates the deferral of much of what we know about writing software and almost everything we learned to become experts in traditional (structured) analysis and design. That knowledge will be useful eventually, but at the outset it is at best a distraction from what we need to accomplish. We must relearn how to look at a domain of interest from the perspective of a denizen (user) of that domain. We need to discover what objects she sees, how she perceives them, what she expects of them, and how she expects to interact with them. Only when we are confident that our understanding of the domain and of its decomposition into objects mirrors that of the user and the natural structure of that domain should we begin to worry about how we are going to employ that understanding to create software artifacts. (Our understanding may come one story at a time, à la XP.)

Note  

The focus of decomposition is understanding the domain as it is. Developers and domain experts should always be aware that what is is not necessarily what is best. Just because an object exists in the domain in a particular form and has specific expectations associated with it doesn't mean that the object should and must continue to exist in that form. Domains are subject to redesign, as are the objects and the relationships and communications among objects in that domain. As developers and domain experts work together, it's quite possible that they will define new objects and redesign existing objects. This is not only acceptable but highly desirable, as long as the basis for redesign activities remains the domain, not implementation environments.

Three: Objects must be composable.

As Plato noted, putting things together again is just as important as taking them apart. In fact, it is the measure of how well you took them apart. Any child with a screwdriver and a hammer can take things apart. Unless another child can look at the pieces and determine how to put them together again (or, even more important, see how to take a piece from one pile and use it to replace a piece missing from another pile), the first child's decomposition was flawed.

Composability incorporates the notions of both reusability and flexibility and therefore implies that a number of requirements must be met:

  • The purpose and capabilities of the object are clearly stated (from the perspective of the domain and potential users of the object), and my decision as to whether the object will suit my purposes should be based entirely on that statement.

  • Language common to the domain (accounting, inventory, machine control, and so on) will be used to describe the object s capabilities.

  • The capabilities of an object do not vary as a function of the context in which it is used. Objects are defined at the level of the domain. This does not eliminate the possible need for objects that are specialized to a given context; it merely restricts redefinition of that same object when it is moved to a different context. Objects that are useful in only one context will necessarily be created but should be labeled appropriately.

  • When taxonomies of objects are created, it is assumed that objects lower in the taxonomy are specialized extensions of those above them. Specialization by extension means that objects lower in the taxonomy can be substituted for those above them in the same line of descent. Specialization by constraint (overrides) might sometimes be required but almost inevitably results in a bad object, because it is now impossible to tell whether that object is useful without looking beyond what it says it can do to an investigation of how it does what it says it can do. [6]

Although relatively simple to state, these requirements are difficult to satisfy. The general principle guiding the creation of composable objects is to discover and generalize the expected behavior of an object before giving any consideration to what lies behind that behavior. This has been a truism in computer science almost from its inception. The most pragmatic consequence of this principle is the need to defer detailed design (coding) until we have a sure and complete grasp of the identification and expected behaviors of our objects (the objects relevant to the story we are currently working on) in the domain where they live.
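The last requirement, substitutability through specialization by extension, can be sketched as follows (my illustration, with hypothetical class names): the subclass only adds capability, so any client holding the more general object is unaffected by receiving the more specialized one.

    class Account:
        def balance(self):
            return 100

    class InterestBearingAccount(Account):
        # Extension only: every promise Account makes still holds.
        def accrued_interest(self):
            return self.balance() * 0.05

    def report(account):
        # Works identically for Account and for any extension of it.
        print('balance:', account.balance())

    report(Account())
    report(InterestBearingAccount())   # substitutable: nothing is overridden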

start sidebar
Forward Thinking ”A Problem of Reuse

In "Forward Thinking: Metaphor and Initial Stories," which appeared in Chapter 2, it was noted that two stories dealt with dispensing (change and product) and might involve the same objects. Further discussion of dispensing revealed three variations of a story involving some kind of dispense action: dispense a measured volume of liquid, dispense a product, and dispense change due the customer.

It would be nice if we had a single class, Dispenser, that could be used in all three stories. This would mean that Dispenser would have to be a composable object, able to be reused in different contexts without modification of its essential nature.

Because the team of developers worked on three different stories involving dispensing, three versions of the Dispenser object have been created. The pairs of developers involved meet to look at each other's code and see whether they can refactor and redesign the Dispenser object to make it more composable, more reusable.

In one case (product dispensing), the code for the dispense method looked like the following pseudocode:

IF dispenserType = "Gate"
    Gate open.
Else
    Set timer = 10.
    Open switch.
End-if
When timer =< 0 close switch.

The code in question reveals an awareness of two types of vending evident in the machines in the hall: opening a gate to drop a can of soda and pushing a product out of a coil.

The dispense method for the change dispenser looked like the following:

While amountToBePaid >= SmallestDenominationAvailable
    AND AmountToBePaid > LargestDenominationAvailable
    LargestDenominationDispenser ejectCoin
    AmountToBePaid = (AmountToBePaid - largestDenomination).

Yes, I know you would never write code this ugly and that the second example will not really work, but code is not the issue here; refactoring is. After some discussion, the teams decided that the dispenser object was really just a façade for some mechanism that did the actual work of dispensing: a valve that opened for a period of time, a motor that ran for a period of time, or a push bar that kicked an item out of the dispenser storage area. It was also decided that the quantity to be dispensed should be supplied to the dispenser rather than calculated by the dispenser. These decisions simplified the method dramatically. In all cases, the pseudocode would look something like this:

For 1 to quantityToBeDispensed
    DispensingMechanism dispense.
End-loop.

The only other behaviors of Dispenser (to disable itself when empty or when not functioning and to identify itself) were already simple and common in all contexts. Dispenser was now defined in such a way as to be truly composable.

Was this accomplished only at the expense of moving some essential complexity to another object? No. Two other objects are probably involved in every dispensing operation: a collection object that contains the actual dispensers and relays dispense requests to the appropriate dispenser within the collection (a trivial behavior already built into well-designed collection objects) and a dispensingRule object, which is an instance of (not a subclass of) a SelfEvaluatingRule object. (See "Forward Thinking: Communication and Rules" for more discussion of rules in the UVM.)
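In sketch form (my rendering of the sidebar's conclusion, with a hypothetical mechanism name), the refactored Dispenser simply delegates to whatever mechanism does the physical work:

    class Dispenser:
        def __init__(self, mechanism):
            self.mechanism = mechanism    # valve, motor, or push bar

        def dispense(self, quantity):
            for _ in range(quantity):     # quantity is supplied, not calculated
                self.mechanism.dispense()

    class CoilMotor:                      # one possible mechanism
        def dispense(self):
            print('motor turns one revolution; item drops')

    Dispenser(CoilMotor()).dispense(2)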

end sidebar
 

Four: Distributed cooperation and communication must replace hierarchical centralized control as an organizational paradigm.

Consider one of the more widely used models in traditional software development, the program structure chart (Figure 3-1), popularized by Meilir Page-Jones. [7] At the top of the chart is the puppet master module, attended by a court of special-purpose input, transform, and output modules. The puppet master incorporates all the knowledge about the task at hand, the capabilities of each subordinate module, and when and how to invoke their limited capabilities. The same thinking characterizes structured source code, wherein a main-line routine (frequently a Case statement) consolidates overall control. Each paragraph in a collection of special-purpose subroutine paragraphs is individually invoked and given limited authority to perform before control reverts to the main line.

Figure 3-1: Program structure chart.

Unlike puppet modules, objects are autonomous. They are protected from undue interference and must be communicated with, politely, before they will perform their work. It is necessary to find a different means to coordinate the work of objects, one based on intelligent cooperation among them.

It is sometimes difficult to conceive how coordination among autonomous objects can be achieved without a master controller or coordinator. One simple example is the common traffic signal. Traffic signals coordinate the movement of vehicles and people but have no awareness of what those other objects are about or even whether any of them actually exist. A traffic signal knows about its own state and about the passage of time and how to alter its state as a function of elapsed time. In this model, the necessary control has been factored and distributed. The traffic signal controls itself and notifies others (by broadcasting as a different color) of the fact that it has changed state. Other objects, vehicles, notice this event and take whatever action they deem appropriate according to their own needs and self-knowledge.

Note  

But what about intersections with turn arrows that appear only when needed? Who is in control then? No one. Sensors are waiting to detect the "I am here" event from vehicles. The traffic signal is waiting for the sensor to detect that event and send it a message: "Please add turn arrow state." It adds the state to its collection of states and proceeds as before. The sensor sent the message to the traffic signal only because the traffic signal had previously asked it to, registering to be notified of the vehicle-present event. Traffic management is a purely emergent phenomenon arising from the independent and autonomous actions of a collectivity of simple objects; no controller is needed. If you have a large collection of traffic signals and you want them to act in a coordinated fashion, will you need to introduce controllers? No. You might need to create additional objects capable of obtaining information that individual traffic signals can use to modify themselves (analogous to the sensor used to detect vehicles in a turn lane). You might want to use collection objects so that you can conveniently communicate with a group of signals. You might need to make a signal aware of its neighbors, expanding the individual capabilities of a traffic signal object. You will never need to introduce a controller.
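A sketch of the note's registration scheme (my construction, not from the book): the sensor broadcasts to whoever registered, and no object coordinates the whole.

    class Sensor:
        def __init__(self):
            self.registrants = []

        def register(self, who):
            self.registrants.append(who)

        def vehicle_present(self):            # the "I am here" event
            for registrant in self.registrants:
                registrant.add_turn_arrow_state()

    class TrafficSignal:
        def __init__(self, sensor):
            self.states = ['green', 'yellow', 'red']
            sensor.register(self)             # ask to be notified; no controller

        def add_turn_arrow_state(self):
            if 'turn arrow' not in self.states:
                self.states.insert(0, 'turn arrow')

    sensor = Sensor()
    signal = TrafficSignal(sensor)
    sensor.vehicle_present()                  # the signal now has a turn arrow state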

Eliminating centralized control is one of the hardest lessons to be learned by object developers.

[2] Taylor, David. Business Engineering with Object Technology. John Wiley & Sons, 1995.

[3] Jacobson, Ivar. The Object Advantage: Business Process Reengineering with Object Technology. ACM Press. Reading, MA: Addison-Wesley, 1994.

[4] Alexander, Christopher. Notes on the Synthesis of Form. Harvard University Press, 1970.

[5] Yourdon, Edward, and Peter Coad. Object Oriented Analysis. Yourdon Press. Englewood Cliffs, NJ: Prentice Hall, 1990.

[6] The exception occurs when a method is declared high in the hierarchy with the explicit intent that all subclasses provide their own unique implementation of that method and when the details of how are idiosyncratic but irrelevant from the perspective of a user of that object.

[7] Page-Jones, Meilir. The Practical Guide to Structured Systems Design. Yourdon Press Computing Series. Englewood Cliffs, NJ: Prentice-Hall, 1988.



