Part 1. Languages and Foundations


The ultimate goal of software engineering is to create running software systems, usually for a von Neumann architecture of instructions, registers, and memory locations. Formally, almost all programming languages are computationally equivalent: almost any programming language can be used to create Turing-machine-equivalent systems and control von Neumann machines. Why not just program in assembler or FORTRAN? Because programming is a human activity. Alternative programming models and languages facilitate building systems with better "-ilities": evolvability, adaptability, reusability, efficiency, ease of implementation, and so forth.

We have seen a historical progression of language structures for taming the von Neumann beast: imperative, functional, logical, object-oriented. The earliest programming languages such as Algol and FORTRAN were imperative, mapping fairly straightforwardly to a von Neumann architecture: the abstractions of an imperative program (procedures, statements, record structures) have a fairly direct correspondence to von Neumann instructions and addressable memory locations. This provided efficient execution and straightforward compiler implementation. This machine-oriented approach to programming was not immediately perceived to be a problem, since software systems of that time were mainly applied to solving well-structured and relatively simple problems. We lacked the processing power to undertake harder problems or to apply more complex programming notations.

Computers got faster and larger, encouraging us to tackle more complicated problems. Programming language designers began to search for means to express problems and their solutions without being so closely tied to the details of the underlying machine. Some languages, such as functional and logic-based languages, use mechanisms that break away from the imperative model. While these are good for certain classes of problems, they have failed to capture much mindshare: most programmers are comfortable with languages that are primarily sequences of stateful, imperative commands.

Progress in imperative languages was marked by inventions that moved away from a simple map of the von Neumann architecture, such as subprograms, abstract data types, and modules. Although these abstractions provided modularizations and thereby reduced the complexity of programs, the organizational influence of von Neumann architecture still dominated. A data abstraction is roughly equivalent to a memory location accessed through a (privileged) set of user-defined instructions.

Within the context of imperative languages, introducing the concept of inheritance (and later delegation) was probably the most significant deviation from the von Neumann architecture. Object-oriented systems trace their roots to Simula [4], Ole-Johan Dahl and Kristen Nygaard's language for simulating complex systems, such as the vehicles, roads, and bridges of traffic flow. Simula enhanced Algol with the concepts of classes and class inheritance. In the following years, many language designers and programmers adopted this approach and contributed to the vast expansion of the application of object-oriented languages.

TRADITIONAL ENGINEERING

Perhaps the greatest effect of the progression to object orientation has been to shift the focus of programmers and designers from a machine-oriented view to a problem-centric view. This naturally led to adopting problem-solving techniques in software engineering. In traditional engineering disciplines, design is a problem-solving process. It aims to map problems to realizable solutions, expressed using the artifacts of the corresponding engineering domain. Problem solving consists of analyzing and dividing problems into subproblems, solving each (sub)problem by applying the relevant knowledge in the engineering domain, and synthesizing (composing) the solutions into an integrated working system.

Traditional engineering disciplines have introduced the concepts of canonical component models, systemic (system-wide) models, system construction rules, and multiple (simultaneous) models. A canonical component model (such as a transistor model) defines the common properties of a set of its instances (such as transistors). Good canonical component models are succinct: they do not include redundant or irrelevant abstractions. A canonical model defines the essential operations on the model. This "algebraic" approach makes component models highly predictable and usable, especially in constructing more complex systems. However, such composition can introduce unwanted redundancy. In addition, it may also be necessary to identify a new set of essential operations over the constructed system. Traditional engineering disciplines address this problem by mapping the individual component models to a well-defined common model. For example, electronic engineers define models for the individual components (for example, transistors and resistors) and map these models into a common model (for example, circuits) that represents the systemic properties of the composed system. For electronic engineers, the common model is generally based on Kirchhoff's laws; for mechanical engineers, Newton's laws. Preferably, common models should be canonical as well (for example, well-defined circuit models such as amplifiers).

Interestingly, in traditional engineering disciplines, the main focus of engineers is models that express systemic properties. For example, while designing an amplifier, electronic engineers define models that enable the computation of systemic properties such as the amplification factor, frequency bandwidth, and harmonic distortion. To carry out the desired computations, systemic models quantify (reason about) adopted components and/or other systemic models. In traditional engineering disciplines, designers generally have the precise knowledge of how systemic models quantify the related models. In fact, these quantifications define the basis of the system construction rules.

However, although traditional engineering disciplines generally use precise models, finding the optimal composition of solution models (synthesis) for solving a given problem can be difficult (NP-complete for a general case [7]). In that respect, software engineering is just like other engineering disciplines.

Another common practice of traditional engineering disciplines is to create multiple systemic models that represent different characteristics of the system. For example, within the context of designing an electronic amplifier, one model may be used for computing the frequency bandwidth, another model for computing the amplification factor, and so forth. Normally, a component participates in more than one systemic model. In principle, component models are independent of their containing systemic models. This obliviousness makes component models more reusable. Systemic models, on the other hand, qualify the component models.

Fulfilling the requirements of one model may have adverse effects on other models. For example, large bandwidth may compromise the amplification factor, low harmonic distortion may require decreasing amplification efficiency, and so forth. Within a given context, much of the skill of engineers is leveraging different solutions to obtain an optimal balance among quality factors.

SOFTWARE DEVELOPMENT IS ENGINEERING

Software development is an engineering activity, and as such, it progresses along the evolutionary path of traditional engineering disciplines [8]. The emerging phenomena of aspect-oriented software development techniques can be explained and motivated within this context. The concepts of component models, systemic models, system construction rules, and multiple models as defined in traditional engineering disciplines correspond respectively to the concepts of base-level models, aspect models, join point models, and multi-dimensional separation of concerns of aspect-oriented languages. Like traditional engineering methods, aspect-oriented languages seek stability by adopting canonical models. These models can be seen as a distinguishing characteristic of aspect-oriented languages.

Aspect-oriented languages can model the natural systemic properties of problem domains, making them a better representational fit than conventional object-oriented languages. Because of this conceptual divergence from the von Neumann machine, however, efficient implementation of aspect-oriented languages remains a challenging problem. Aspect-oriented languages therefore have to compromise within the context of the following two constraints [1]:

Abstractness constraint. The constructs of an aspect-oriented programming language must be abstract enough to match the natural abstractions of the problem domain. However, they also must be concrete enough to match the realization of the implementation platform. This constraint aims to minimize the implementation effort and enable efficiency.

Standardization constraint. The realization of an implementation platform must be standardized to ease sharing among multiple languages but must be differentiated enough to match the individual needs of each target language. This constraint aims to reduce costs through sharing implementations.

These two constraints depict a design spectrum for aspect-oriented programming (AOP) implementations. The first constraint defines the concern of balancing modeling expressiveness of aspect-oriented languages with efficient implementation. The second constraint balances the breadth of application of an AOP language with its efficient implementation with respect to a particular computational environment.

THE CHAPTERS

In the first chapter in this part, Chapter 2, "Aspect-Oriented Programming Is Quantification and Obliviousness," Robert Filman and Daniel Friedman introduce the terms quantification and obliviousness as the two distinguishing characteristics of aspect-oriented languages. The term quantification refers to the fact that the expression of a single aspect may affect multiple program modules; the term obliviousness refers to the fact that the affected modules do not contain any particular notation preparing them for this action. Within that framework, the chapter presents several dimensions for the understanding of aspect-oriented languages, the most important of which is whether the quantification ranges over the structural (static) or behavioral (dynamic) properties of the system. The chapter then compares various (non-aspect-oriented) languages and techniques with respect to the concepts of quantification and obliviousness. These ideas echo our notions of the important features of systemic models and component models in traditional engineering disciplines.
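These two properties can be made concrete with a small sketch. The following Python fragment (invented here for illustration; no such example appears in the chapter) quantifies a logging concern over every public method of a class, while the class itself remains oblivious to the aspect:

```python
import functools

class Account:
    """Base-level code: contains nothing that prepares it for aspects."""
    def deposit(self, amount):
        self.balance = getattr(self, "balance", 0) + amount
        return self.balance

def add_logging(cls):
    """Quantification: one statement affects every public method of cls."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("_"):
            def wrap(method):
                @functools.wraps(method)
                def logged(self, *args, **kwargs):
                    print(f"calling {method.__name__}{args}")
                    return method(self, *args, **kwargs)
                return logged
            setattr(cls, name, wrap(attr))
    return cls

# Obliviousness: the aspect is applied externally; Account is never edited.
add_logging(Account)
assert Account().deposit(10) == 10
```

The essential point is that `Account` carries no aspect-specific notation, and a single `add_logging` call ranges over all of its public methods at once.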

In Chapter 3, "N Degrees of Separation: Multi-Dimensional Separation of Concerns," Peri Tarr, Harold Ossher, Stanley M. Sutton, Jr., and William Harrison emphasize the value of defining simultaneous models incorporating separate concerns. In this approach, models are built by using primitive units (for example, operations and attributes) and compound units (for example, classes). Each model expresses a solution along a given dimension (for example, persistency). Hyperslices are used to capture such models. Multiple hyperslices may exist simultaneously; different hyperslices can conflict with each other. Models in different hyperslices are integrated into a single system through the application of composition rules. These rules also specify how to resolve conflicts. This approach allows defining multiple models simultaneously; the flexibility of the composition rules avoids differentiating between aspects and components.

In the software engineering literature, there have been several attempts to represent conflicting design models simultaneously [2, 3, 5]. However, most of these approaches cover only the early phases of the software development process. This contrasts with the hyperslice approach, where tools such as Hyper/J support translating hyperslices to executable programs.

From the perspective of the abstractness constraint, the canonical model of this approach is quite general. It can be considered more a design model than a language model. Specific implementations of this approach choose specific definitions of the primitives, compound units, and composition rules. The standardization constraint is determined by the available implementation tools. For example, Java is the implementation platform for the hyperslice language Hyper/J. The currently defined hyperslice models and composition rules restrict this approach to static composition of aspects. The hyperslice approach traces its intellectual roots to work in the early 1990s on subject-oriented programming [6].

Composition filters, described by Lodewijk Bergmans and Mehmet Akşit in Chapter 4, "Principles and Design Rationale of Composition Filters," introduce a linguistic construct, the concern, as a mechanism for uniformly expressing crosscutting and non-crosscutting concerns. Concern specifications have three (optional) parts: filter modules, a superimposition specification, and an implementation. A concern specification with just an implementation corresponds to an element (such as a class) in a conventional program. Filter modules are used to express crosscutting concerns. They are specified using a declarative message-manipulation language based on a set of primitive and/or user-defined filter types. Filter modules are attached to object instances[1] through a declarative constraint-based superimposition specification. Multiple concerns may superimpose their filter modules on the same object.

[1] In principle, filter modules can be superimposed on any identifiable program piece that interacts with other program pieces through operation calls.

The canonical model of composition filters assumes that aspects can be added to objects by manipulating the interaction patterns of objects. Message manipulation is achieved by superimposing filter modules, using the composition operators of the filter module specification language. The composition filters approach supports a variety of filter types, ranging from primitive language-independent filters to stateful user-defined types.
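A toy Python rendering of this message-manipulation model (the filter types and class names are invented here; the actual composition filters notation is declarative, not Python) might look like:

```python
class ErrorFilter:
    """A filter module that rejects messages matching a condition."""
    def __init__(self, reject):
        self.reject = reject
    def apply(self, target, name, args):
        if self.reject(name):
            raise PermissionError(f"message {name!r} filtered out")
        return False, None              # not handled: try the next filter

class DispatchFilter:
    """A filter module that delivers the message to the wrapped object."""
    def apply(self, target, name, args):
        return True, getattr(target, name)(*args)

class Concern:
    """Superimposes an ordered list of filter modules on an object;
    clients interact with the object only through the filters."""
    def __init__(self, target, filters):
        self._target = target
        self._filters = filters
    def __getattr__(self, name):
        def send(*args):
            for f in self._filters:
                handled, result = f.apply(self._target, name, args)
                if handled:
                    return result
            raise AttributeError(name)
        return send

class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
        return self.balance
    def close(self):
        self.balance = None

account = Concern(Account(), [ErrorFilter(lambda m: m == "close"),
                              DispatchFilter()])
assert account.deposit(5) == 5      # passes the filters, then dispatches
```

Each incoming message traverses the filter modules in order; a filter may reject it, pass it on, or dispatch it, which is the essence of manipulating object interaction patterns.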

Composition filters aim at language independence and easy verification of aspect compositions. Due to their declarative nature (and depending on the adopted implementation technique), aspects can be composed at compile time, runtime, or both.

From the perspective of the abstractness constraint, this approach is limited to adding aspects only at the level of interacting program modules. The standardization constraint of composition filters is a common interaction interface for multiple languages.

The composition filters model goes back to the late 1980s. In the ensuing years, this model evolved from hardwired filter modules on singular objects towards flexible superimposition of separately specified filter modules on multiple objects.

The next two chapters illustrate the third major family of aspect-oriented languages. In Chapter 5, "AOP: A Historic Perspective (What's in a Name)," Cristina Lopes elucidates the derivation of the word "aspect" in the context of separating software concerns. This chapter presents a personal history of her work on the topic, starting with her graduate studies at Northeastern, progressing to specialized aspect languages for particular concerns like distribution, and culminating with her work at Xerox PARC on the team that developed AspectJ.

In Chapter 6, "AspectJ," Adrian Colyer provides a brief introduction to the goals of the AspectJ project and gives an overview of the AspectJ language and supporting tools. AspectJ is a general-purpose programming language. AspectJ and AspectJ tools are designed to be easy to integrate into existing Java-based environments. Today, AspectJ is by far the most popular aspect-oriented language in use.

The most important extension to Java in AspectJ is the concept of join points. Join points are identifiable events in the execution of the program. Pointcut expressions are used to refer to the join points in an AspectJ program. For example, a pointcut expression may specify join points for the execution of a method or constructor, a call to a method or constructor, read or write access to a field, etc. Advice is a set of program statements that are executed at join points referred to by a pointcut specification. AspectJ has three kinds of advice: before advice, after advice, and around advice. These execute before, after, and around join points matched by the advice's pointcut expression. Aspects are the unit of modularity by which AspectJ implements crosscutting concerns. Aspects can contain methods and fields, pointcuts, advice, and inter-type declarations. Inter-type declaration mechanisms allow an aspect to provide an implementation of methods and fields on behalf of other types.
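The join point/pointcut/advice vocabulary can be illustrated outside AspectJ as well. The following Python sketch (all names invented; AspectJ's pointcut language is far richer and is written in an extended Java syntax) attaches before- and after-advice to method-execution join points selected by a name pattern:

```python
import fnmatch
import functools

def advise(cls, pointcut, before=None, after=None):
    """Attach advice to every method whose name matches the glob-style
    pointcut pattern -- a crude stand-in for pointcuts and advice."""
    for name in list(vars(cls)):
        method = vars(cls)[name]
        if callable(method) and fnmatch.fnmatch(name, pointcut):
            def wrap(original):
                @functools.wraps(original)
                def advised(*args, **kwargs):
                    if before:
                        before(original.__name__)
                    result = original(*args, **kwargs)
                    if after:
                        after(original.__name__)
                    return result
                return advised
            setattr(cls, name, wrap(method))

class Service:
    def get_user(self):
        return "alice"
    def get_role(self):
        return "admin"
    def reset(self):
        pass

log = []
advise(Service, "get_*", before=log.append)  # join points: get_* executions
service = Service()
service.get_user()
service.reset()
assert log == ["get_user"]                   # reset() was not advised
```

The pattern `"get_*"` plays the role of a pointcut expression, and the wrapper plays the role of before/after advice running at the matched join points.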

The canonical model of AspectJ is derived from the Java language. The abstractness constraint of the language is determined by the combination of pointcut specifications, advice, and inter-type declarations. The standardization constraint of the language is determined by the compilation mechanisms of AspectJ within Java environments.

In Chapter 7, "Coupling Aspect-Oriented and Adaptive Programming," Karl Lieberherr and David Lorenz contrast and unify their work on adaptive programming with AOP. The abstract data type/object-oriented tradition argues for reducing the mutual interdependence of code by hiding implementations within abstractions. However, in practice that is not sufficient to keep components from becoming dependent on each other's structural implementation: the actual parts of composite objects often become visible to code using those objects. Adaptive programming argues that a major cause of increased software coupling is complex interactions along nested object structures. The canonical model of this approach is based on a succinct graph-based representation of object and class hierarchies with a set of graph manipulation operations. Aspects can be superimposed over the graph structures using these operations. The abstractness constraint of this approach is determined by the succinct graph models. The standardization constraint is defined by the algorithms that transform conventional (object-oriented) programs to the graph representations and vice versa.

The authors claim that both adaptive programming and AOP can benefit from each other: adaptive programming concepts can enhance the separation of concern characteristics of aspect-oriented languages, and AOP, as illustrated by AspectJ, can provide a better implementation platform for adaptive programming.
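The flavor of adaptive programming's structure-shy style can be suggested (only loosely; the chapter's graph-based model and traversal strategies are more principled) by a Python traversal that collects objects of a target class without the caller ever naming the intermediate structure:

```python
def gather(obj, cls):
    """Collect all reachable instances of cls without naming the
    intermediate object structure (assumes an acyclic structure)."""
    found = []
    def visit(node):
        if isinstance(node, cls):
            found.append(node)
        if hasattr(node, "__dict__"):
            for value in vars(node).values():
                visit(value)
        elif isinstance(node, (list, tuple)):
            for value in node:
                visit(value)
    visit(obj)
    return found

class Wheel:
    pass

class Engine:
    pass

class Car:
    def __init__(self):
        self.engine = Engine()
        self.wheels = [Wheel() for _ in range(4)]

# The caller never mentions that wheels live inside a list inside Car.
assert len(gather(Car(), Wheel)) == 4
```

If `Car` later nested its wheels inside, say, axle objects, `gather` would still find them; the coupling to the nested object structure is gone.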

In Chapter 8, "Untangling Crosscutting Models with CAESAR," Mira Mezini and Klaus Ostermann assert that conventional AOP languages are too limited in abstracting and reusing aspects. The programming model CAESAR provides constructs for better aspect modularization, for flexibly binding aspects to different implementation modules, and for dynamically combining aspects into a running system. The reuse mechanism is called aspectual polymorphism; it generalizes subtype polymorphism to aspect-oriented models. The model uses virtual classes and family polymorphism to help enable type-safe reuse.

The canonical model of CAESAR is a novel generalization of the object-oriented model to the aspect-oriented model. Its abstractness constraint is mainly determined by the language construct called Aspect Collaboration Interface, the unit of aspect reuse. Its standardization constraint is determined by the algorithms that translate CAESAR to existing languages and run-time environments.

Chapter 9, "Trace-Based Aspects," by Remi Douence, Pascal Fradet, and Mario Südholt, argues that conventional object-oriented languages are inadequate for expressing relations between execution events of aspects. To overcome this limitation, the chapter introduces an event-based crosscut specification, where events refer to the relevant states of program execution. With this model, one can invoke aspects based on the dynamic history of program execution. Crosscut specifications are controlled by event matching and by propositional and sequential logic.

From the abstractness constraint point of view, the canonical model of this approach is a state-based crosscut program. The standardization constraint is determined by the algorithms and techniques that capture the state of executions.
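A crude Python approximation of trace-based matching (the event names and the suffix-matching rule are invented here; the chapter uses proper logics over execution traces) is an aspect that fires when the event history matches a pattern:

```python
class TraceAspect:
    """Fires advice when the recorded event history ends with a given
    sequence of events."""
    def __init__(self, pattern, advice):
        self.pattern = tuple(pattern)
        self.advice = advice
        self.history = []
    def emit(self, event):
        self.history.append(event)
        if tuple(self.history[-len(self.pattern):]) == self.pattern:
            self.advice(list(self.history))

alerts = []
aspect = TraceAspect(["login_failed"] * 3,
                     lambda history: alerts.append("lockout"))
for event in ["login_failed", "login_ok",
              "login_failed", "login_failed", "login_failed"]:
    aspect.emit(event)
assert alerts == ["lockout"]   # fired only on three consecutive failures
```

No single join point triggers the advice; it is the dynamic history of execution that does, which is precisely what static crosscuts cannot express.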

Chapter 10, "Using Mixin Technology to Improve Modularity," by Richard Cardone and Calvin Lin, revisits two earlier themes: the ability of multiple-inheritance-like mechanisms to add concerns to an object system (Chapters 2 and 3) and aspectual polymorphism (Chapter 8). In conventional object systems like Java (or at least Java before Java 5), the types of a system are constants. Mechanisms such as C++ templates and Ada and Java 5 generics allow constructing types by instantiating generic type descriptions. Mixins carry this idea of instantiated types into the inheritance mechanism: one can parameterize a type with respect to its superclasses. Cardone and Lin describe mixins and show how they can be used for aspects. Just as CAESAR in Chapter 8 was concerned with coordinating the instantiation of multiple types, Cardone and Lin describe a layering mechanism for collecting related mixins together. They illustrate the power of this mechanism with a graphical user interface library that can be parameterized with respect to its target device.
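The parameterized-superclass idea translates naturally into Python (a sketch with invented widget names; Cardone and Lin work with a Java language extension):

```python
def LoggingMixin(Base):
    """A mixin parameterized by its superclass: the class it extends is
    supplied only when the mixin is instantiated."""
    class Logging(Base):
        def render(self):
            return f"[log] {super().render()}"
    return Logging

def UpperMixin(Base):
    class Upper(Base):
        def render(self):
            return super().render().upper()
    return Upper

def layer(*mixins):
    """Collect related mixins into one reusable unit, in the spirit of
    Cardone and Lin's mixin layers."""
    def apply(base):
        for mixin in reversed(mixins):
            base = mixin(base)
        return base
    return apply

class TextWidget:
    def render(self):
        return "hello"

# The same layer can be instantiated over different base widgets.
Decorated = layer(LoggingMixin, UpperMixin)(TextWidget)
assert Decorated().render() == "[log] HELLO"
```

Because the superclass is a parameter, the same mixin (or layer of mixins) can be stacked over any base that provides the expected operations.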

In Chapter 11, "Separating Concerns with First-Class Namespaces," Oscar Nierstrasz and Franz Achermann present a model of separation of concerns based on forms (first-class namespaces), agents (processes), and channels (communication structures between processes). This work strives for a canonical model capable of expressing a large category of language abstractions. A critical element of this model is that forms can describe both the required and the provided operations of a component, thereby enabling static checking of the consistency of architectural structures. The standardization constraint of this approach is determined by a layered set of implementation techniques, in which forms, agents, channels, and services define the most concrete layer. Nierstrasz and Achermann demonstrate how their model can be used to describe mixins and mixin layers.

The last five chapters of Part 1 turn toward issues of implementing aspect-oriented languages. Several of the systems described in the earlier chapters can be understood as compiler-based systems that take descriptions of base-level code, aspects, and some language for mapping the aspects to the base-level code (though symmetric systems treat the base-level code and aspects as the same stuff, and the mapping commands in some languages are intermixed with the code). These compilers output object code that implements the desired functionality. There are four primary alternatives to this "compilation-based" approach. The first, illustrated in Chapter 12, "Supporting AOP Using Reflection," by Noury Bouraqadi and Thomas Ledoux, modifies the process of interpreting a program to intermix aspect behavior. Controlling the execution of the interpreter of a language is meta-programming; Bouraqadi and Ledoux provide an introduction to meta-programming and illustrate how it can be used to support aspects.

The second implementation technology is to wrap base-level components with aspects. One way of implementing composition filters (Chapter 4) relies directly on such wrapping. A similar but decidedly more programmatic approach is illustrated in Chapter 13, "Inserting Ilities by Controlling Communications," by Robert E. Filman, Stu Barrett, Diana Lee, and Ted Linden, which describes the Object Infrastructure Framework (OIF). OIF is based on wrapping communicating components and using the wrappers to manipulate their communications. The OIF approach is programmatic: wrappers are themselves objects; they embody aspects and can be dynamically apportioned around base-level components. The system supports mechanisms such as intra-aspect, inter-aspect, and base-component communication channels to aid the pragmatics of programming. The composition languages of composition filters emphasize expressing permanent semantic relationships about the base components; the composition language in OIF describes default initializations.

The next two chapters of this part are unified by the theme of modifying or manipulating the output of the ordinary compilation process to achieve aspects. In Chapter 14, "Using Bytecode Transformation to Integrate New Features," Geoff Cohen describes one of the earlier implementations of this idea. Bytecode transformation adds concerns by modifying compiled Java classfiles. Günter Kniesel, Pascal Costanza, and Michael Austermann extend this idea in Chapter 15, "JMangler: A Powerful Back-End for Aspect-Oriented Programming." JMangler provides class-loading-time bytecode transformation. Transformation at load time ensures that no code that has avoided aspectization can run in the system, an important security and integrity guarantee. Both bytecode transformation mechanisms take advantage of the fact that some operations of interest to aspects are more explicitly visible in the object code than in the source.

The final chapter of Part 1, Chapter 16, "Aspect-Oriented Software Development with Java Aspect Components," by Renaud Pawlak, Lionel Seinturier, Laurence Duchien, Laurent Martelli, Fabrice Legond-Aubry, and Gérard Florin, builds on the natural dynamics of run-time class-loading manipulation to describe the final implementation technology, framework-based mechanisms. With a framework, the application program "fits in," in some sense, to a run-time environment. The JAC framework is powered by run-time bytecode transformation and features elements such as containers, instance-based methods, and dynamic reconfiguration. As a lead-in to Part 2, "Software Engineering," this chapter also describes a UML-based process for building JAC applications.

DIMENSIONS OF ASPECT LANGUAGE DESIGN

One of the advantages of bringing together this collection of papers is the opportunity to perform a comparative analysis of the different approaches. Dimensions to consider for each system include:

Symmetry versus asymmetry. Does the model differentiate between "base-level" and "aspect" elements, or is all programming done with a single kind of stuff? Proponents of the latter have characterized the former as supporting "the tyranny of the dominant decomposition," though many programmers may find it comforting to have a dominant decomposition to program against.

Join point models. At which points in the program can aspect behavior be introduced? Critical considerations here include whether join points refer to the structural (static) or dynamic properties of program execution, and which such structural or dynamic features are accessible.

Composition mechanism. What linguistic mechanisms does the system provide for describing where aspects should be applied? Choices include providing a separate language for describing the composition, intermixing composition commands with the actual code, and making composition an executable operation. Composition also interacts with object-oriented inheritance. A worthwhile side note is the issue of whether aspects can be applied to aspects. This is important because we don't want the semantic guarantees that aspects provide to be subverted by the aspect system itself. That is, if using the "security" aspect ensures that a system is secure, we don't want to run aspects that have not also been made secure.

Quantification. What mechanisms does the programmer have for making systematic application of aspects? What kinds of predicates can the programmer use to describe a class of situations that call for an aspect? Examples of such predicates include elements with the same syntactic structure (e.g., method calls, conditionals), elements with the same lexical structure (e.g., things with the same name or name prefix), elements with the same semantic structure (e.g., all methods that support a particular interface), and elements with the same execution history (e.g., all events of a particular type, or all events that match a particular temporal logic).

Encapsulation. Consider what mechanisms the language provides for constraining the visibility and effect of aspects and aspect interactions.

Type safety. To what extent does the language provide static mechanisms for checking the compatibility of composed components? The languages and models in this part range from almost no such checking to elaborate multi-level organizations for guaranteeing mutually and collectively appropriate roles.

Obliviousness. Does code to which an aspect is to be applied need to be prepared in any particular way? Though there are no examples of this in the following papers, is there a way to protect a component from aspect application?

Domain specificity. Does the language provide general aspect-oriented programming (in the sense that Java or C++ is a general-purpose language), or is it restricted or enhanced for a particular class of problems?

Reuse and aspect parameterization. To what extent can one create aspects that are reusable in multiple contexts? How can such aspects be parameterized or specialized to the needs of a particular context?

Conflict resolution. Are there mechanisms to describe and resolve possible conflicts from the use of multiple aspects? A common example of an aspect is a "logging aspect" that records that something has happened in a software system. This is particularly useful for debugging, accounting, and security auditing. Transactions (in the database sense) have also been proposed as good candidates for aspects. Transactions, however, are supposed either to complete totally or to leave no trace of their failure. Logging and transaction aspects thus conflict: a log record of a failed transaction is exactly the kind of trace the transaction must not leave.

Legacy relationships. Is the proposed system meant to augment an existing programming environment or extend that environment, or does the model require the programmer to employ entirely new constructs?

Run-time aspect dynamics. Can the aspects in use be altered at runtime, or are they fixed before program execution, typically by a compilation process? Are the quantification targets of aspects (which aspects apply to which situations) fixed, or can they be altered at runtime?

Analyzability. How does the aspect language affect the ability to statically analyze the source system?

Debugability. How does the aspect language affect the ability to debug a system?

Testability. Can aspects be tested without base code?

Software process. What does the particular language have to say about the process of its use?

Implementation mechanism. How can the language be implemented? As we discussed previously, the following chapters have examples of compilation-based systems, reflection, frameworks, wrapping, and bytecode transformation-based systems.

Available run-time environment. What additional mechanisms does the system provide to support run-time applications?

Every aspect language or model makes choices in each of these dimensions. The dimensions are separate but not completely orthogonal. The complexity of aspect language design lies in the compound interactions among these choices, which together give a language its unique design structure and application.

REFERENCES

1. Akşit, M. 2003.

2. Akşit, M. and Marcelloni, F. 2001. Concurrency and Computation: Practice and Experience, 13, 1247-1279.

3. Balzer, R. 1991. Tolerating inconsistency. In 13th Int'l Conf. on Software Engineering (ICSE), (Austin). IEEE, 158-163.

4. Birtwistle, G. M., Dahl, O. J., Myhrhaug, B., and Nygaard, K. 1973. Simula Begin, Auerbach, Philadelphia.

5. Finkelstein, A. C. W., Gabbay, D., Hunter, A., Kramer, J., and Nuseibeh, B. 1994. Inconsistency handling in multiperspective specifications. IEEE Transactions on Software Engineering, 20, 569-578.

6. Harrison, W. and Ossher, H. 1993. Subject-oriented programming: a critique of pure objects. In 8th Conf. Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), (Washington, D.C.). ACM, 411-428.

7. Maimon, O. and Braha, D. 1996. On the complexity of the design synthesis problem. IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, 26, 1, 141-150.

8. Tekinerdogan, B. 2000. On the notion of software engineering: a problem solving perspective. Chapter 2. In Synthesis-based software architecture design, Ph.D. thesis, University of Twente (Enschede).



