4.3 Software Architecture

Architecture is an early phase of software design, following requirements gathering and preceding implementation (see section 4.2.1). Architecture is particularly important because it is closely tied to industrial organization (see chapter 7) and project development organization. The industry layering described in section 2.2.5 is an example of a system architecture, one that hierarchically builds up capability, starting with devices incorporated into equipment, and equipment supporting software. Here the focus is on the software portion of the system.

4.3.1 Why Software Architecture Is Needed

The constraints on what can be done with software are not physical but financial and human. Software design often stretches the capability of its human designers to absorb and manage complexity and to organize large collaborative efforts to create ever more complex systems, including architecture and tools.

One challenge is containing the communication and coordination costs among members of a large development team (what economists call transaction costs). An early attempt to characterize this issue was Brooks' law (1975). If we assign n programmers to a software project, we might expect results on the order of n times faster. Unfortunately, there are potentially n(n − 1)/2 distinct interactions among pairs of these programmers,[3] so Brooks pessimistically asserted that the transaction costs increase on the order of n². A pragmatic statement of Brooks' law is, "If a software project is behind schedule, adding more people will push it even more behind schedule." In this worst-case scenario, transaction costs consume a growing fraction of each individual's effort (growing on the order of n), so in economics terms there are diminishing returns to scale. While Brooks' law is grossly pessimistic, diminishing returns to scale are indeed observed in software projects.
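To see the arithmetic concretely, the brief sketch below (an illustration of ours, not drawn from any particular project) computes the number of pairwise communication paths for several team sizes; note that the coordination burden per person grows roughly linearly with n.

```typescript
// Illustrative sketch (not from the original text): the number of pairwise
// communication paths among n programmers is n(n - 1)/2, so each person's
// share of coordination effort grows roughly linearly with n.
function pairwiseInteractions(n: number): number {
  return (n * (n - 1)) / 2;
}

for (const n of [2, 5, 10, 50]) {
  console.log(`${n} programmers -> ${pairwiseInteractions(n)} communication pairs`);
}
// 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225
```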

This same problem is encountered when considering the economy as a whole. The economy is partitioned into individual industrial firms that operate as independently as possible, engaging in commercial transactions where appropriate. The most efficient size of the firms depends on the trade-off between external and internal transaction costs—the effect (if not the explicit goal) is to group in one firm activities that have a high dependence and would create excessive transaction costs if performed externally.

The software industry encounters these same issues of the appropriate size of firms and minimizing transaction costs (see chapter 7). Similar issues arise at a smaller granularity in the development of a single piece of software or a set of complementary pieces of software. Should this software be developed by a single firm, or should multiple firms arise to contribute composable pieces? These issues are largely resolved by market forces.

Organizational theory has much to offer and in particular would assert that transaction costs can be reduced through mechanisms like hierarchical management structures and shadow pricing. Nevertheless, the effectiveness of these modern management techniques still depends on the nature of the task being undertaken: if the activities of project team members really are deeply intertwined, then organizational techniques will be less effective than if the overall project can be broken into largely independent pieces. This is a role of software architecture, which imposes a ‘divide and conquer’ methodology in which the overall software system is divided into manageable pieces, each developed independently. This can be viewed at different granularities, such as (in the small) within a software development group and (in the large) among companies in the software industry. In software (as in other branches of engineering) these pieces are called modules, and the act of breaking the system into modules is called decomposition or modularization. Architectural decomposition is the first step toward an effective project organizational structure that can tame Brooks' law.

The term architecture in software can create some confusion when compared to the use of that term elsewhere.

Example One of the roles of the building architect is to decompose a building into functional units (rooms, hallways) to meet the eventual occupants' needs. This is the general sense in which architecture is used in software as well. However, another role of the building architect is to address usability and aesthetics, which is not a role of an architect in software. In fact, decomposition in building architecture is driven in no small part by the needs of future building occupants, whereas software architecture is primarily for the purpose of organizing the development process and has a minimal relation to user needs. For software that is required to be extensible or evolvable after being deployed by users, this distinction is fuzzier. For such software, the architectural decomposition is guided by what parts might need to be added or changed after deployment, and it puts rules in place that enable safe extension and evolution.

There is a strong relation between architecture and the organization of the software development teams. Since only one implementation team is assigned to each module, the boundaries of team responsibility generally follow the boundaries of modules. This leads to Conway's law (named after Melvin Conway, an early programmer), which states that the organization of software will be congruent to the organization of its development team. In Conway's day the organization of the team preceded and largely determined the organization of the software; today, the opposite is typically true: architecture precedes organization.

A capable architect is necessary to complete a software project successfully. The architect has global knowledge of the project and its pieces, and how they fit together. Most other participants purposely have only local knowledge of the piece within their domain of responsibility.

Software from different companies frequently must compose, and the boundaries of modules in cross-firm composition have to follow the boundaries of the firms: shared responsibility for a module isn't viable. An interesting question is how market forces determine these module boundaries (see chapter 7). The primary mechanisms by which companies coordinate their efforts to achieve composability of modules across firm boundaries are standardization and the permitting of documented extensions. Large corporations may intentionally relax central control and allow for degrees of redundancy and competition among suborganizations. The resulting software boundaries are somewhere between architected ones and those driven by market forces.

As mentioned earlier, most software projects don't start from scratch but must build on available assets. Where existing code is incorporated, architecture can provide a structured and principled approach to coordinating subsequent and earlier efforts. Instead of relying on developers to "discover" that some available code or component may be reused in new situations, software systems are designed to be related by construction. Architecture is the level of design that emphasizes such relations (Bass, Clements, and Kazman 1998; Bosch 2000; Szyperski 1998).

4.3.2 The Role of Software Architecture

The primary role of software architecture is to address systemwide properties by providing an overall design framework for a family of software systems. Systemwide properties can be compared to those established by the load-bearing walls and superstructure of a building. Concrete designs then follow the architecture's guidelines, complementing it with local design decisions. Architecture decomposes systems into well-identified modules, describes their mutual dependencies and interactions, and specifies the parameters that determine the architecture's degrees of configurability. As illustrated in figure 4.2, architecture has three facets: the decomposition of the system into modules, the functionality of each module, and the interaction among modules. Global system properties (also known as system qualities), such as performance, maintainability, extensibility, and usability, emerge from the concrete composition of modules (Thompson 1998).[4]

Figure 4.2: A simple software architecture.

4.3.3 Modularity

Modular is a term describing architectures with desirable properties with respect to supporting a good development methodology and containing complexity (Baker and Zweben 1979; Jung and Choi 1999; Parnas 1972). While modularity originally implied nothing more than "an implementation broken down into pieces," over time the term has assumed more connotations as experience has taught us what properties are most effective in taming Brooks' law. Today, the following properties are considered most important for a modular architecture:

  • Strong cohesion. The modules have strong internal dependencies, requiring a high degree of communication and coordination (transaction costs, as an economist would say). A module not having this property is a candidate to be further decomposed. This is precisely the role of hierarchical decomposition.

  • Weak coupling. There should be weak dependencies across module boundaries, so teams implementing different modules have minimal need for coordination. This is the most important property.

  • Interface abstraction. Abstraction is a difficult concept to convey precisely, especially without displaying examples, but roughly it means that the external view of a module should be as simple as possible, displaying essential properties and hiding unnecessary details. Abstraction makes module functionality easier to understand and use, and also contributes to modules that are more general and thus more likely to be reused.

  • Encapsulation. Internal implementation details are invisible and untouchable from the outside of a module. This precludes inadvertent dependencies among modules created by exploiting implementation details, where such dependencies would cause changes in implementation to unnecessarily affect other modules or cause system defects. A first line of defense in encapsulation is to distribute module object code only, because source code makes implementation details more visible. (Encapsulation is thus compromised for source code-available software, a weakness in that methodology.) Fully effective encapsulation also requires support from languages and tools.[5] (Interface abstraction and encapsulation are sketched in code following this list.)
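As a minimal illustration of the last two properties, the following sketch (with hypothetical names of our own choosing) publishes a small abstract interface while keeping the data representation private to the implementing module; clients can depend only on the interface, never on the representation.

```typescript
// Illustrative sketch with hypothetical names. The exported interface is the
// abstraction: it shows only the essential actions. The implementing class is
// kept private to the module, so other modules cannot depend on its internals.
export interface Dictionary {
  set(key: string, value: string): void;
  get(key: string): string | undefined;
}

// Not exported: the choice of a Map (or any other representation) is an
// encapsulated implementation detail that can change without affecting clients.
class MapDictionary implements Dictionary {
  private entries = new Map<string, string>();
  set(key: string, value: string): void { this.entries.set(key, value); }
  get(key: string): string | undefined { return this.entries.get(key); }
}

// Clients obtain a Dictionary only through this factory, never the concrete class.
export function createDictionary(): Dictionary {
  return new MapDictionary();
}
```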

These four properties are primarily aimed at taming Brooks' law. Encapsulation has significant implications for the business model of software (and other products) as well (see chapter 7). Modularity is critical to decomposing a large software system into distinct vendor efforts that are later composed. Commercial software firms do not want their customers modifying software because support of a customer-modified program would be difficult or impossible, and support contributes to customer satisfaction and repeat sales. Encapsulation is familiar in many manufactured products for similar reasons ("your warranty is voided if the cabinet is opened").

As illustrated in figure 4.3, modular architectures are usually constructed hierarchically, with modules themselves composed of finer-grain modules. The granularity refers roughly to the scope of functionality comprising one module, from coarse granularity (lots of functionality) to fine granularity (limited functionality). Hierarchical decomposition enables the same system to be viewed at different granularities, addressing the tension between a coarse-grain view (the interaction of relatively few modules is easier to understand) and a fine-grain view (small modules are easier to implement). Of course, the cohesion of modules is inevitably stronger at the bottom of the hierarchy than at the top; otherwise, further decomposition would be self-defeating. The coarse-grain modularity at the top is a concession to human understanding and to industrial organization, whereas the fine-grain modularity at the bottom is a concession to ease of implementation. The possibility of hierarchical decomposition makes strong cohesion a less important property of modularity than weak coupling.

Figure 4.3: Hierarchical decomposition.

As suggested by this discussion, software architecture has interesting parallels in the design of human organizations (Baldwin and Clark 1997; Langlois and Robertson 1992; Langelaar, Setyawan, and Lagendijk 2000; Sanchez and Mahoney 1996). Similar principles of modularity are applied there.

4.3.4 Interfaces and APIs

The interaction among modules focuses on interfaces. A module interface, roughly speaking, tells other modules all they need to know to use that module. Interfaces are a key mechanism for coordinating different development groups (or companies) that build composable modules. Interfaces are a frequent target for standardization (see chapter 7).

More precisely, an interface specifies a collection of atomic actions (with associated data parameters and data returns). By atomic, we mean an action that cannot be decomposed for other purposes—it must be invoked as a whole or not at all—although it can often be customized by parameterization.

Example Consider the user interface of a four-function calculator as a module interface. (The idea of module interface is applicable to the user interface as well, if one unromantically characterizes a user as a "module.") Actions for this interface are functions like "add this number to the display register," where a parameter is a number punched into the keyboard and the action is invoked by the plus key.
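Expressed as code, such an interface might look like the following sketch (the names are hypothetical, chosen only for illustration); each method is an atomic action, invoked as a whole and customized solely by its parameter.

```typescript
// Illustrative sketch with hypothetical names: the calculator's interface as a
// set of atomic actions.
export interface Calculator {
  /** Enter a number into the display register (punching digits, then enter). */
  enter(operand: number): void;
  /** Add a number to the display register (the plus key). */
  add(operand: number): void;
  /** Read the current contents of the display register. */
  display(): number;
}
```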

The interface also specifies protocols, which are compositions of actions required to accomplish more functionality than individual actions offer. A protocol typically coordinates a sequence of back-and-forth operations between two or more modules that can't be realized as a single action. By design, protocols are not atomic, and a given action can be reused in different protocols; keeping actions elementary and building more complex functionality with protocols contributes to software reuse (see chapter 7).

Example For the calculator example, adding two numbers cannot be accomplished by a single action but requires a protocol consisting of two actions: enter the first operand into the display register, then enter the second operand and punch add. The result of the addition appears in the display register. The "punch operand and enter" action could be reused by other protocols, for example, a protocol that multiplies two numbers. Most protocols are far more complicated than this example would suggest; in fact, protocol design is a complex specialty in its own right.
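Continuing the hypothetical calculator sketch above, the addition protocol can be written as a composition of atomic actions:

```typescript
// Illustrative sketch continuing the Calculator interface above: adding two
// numbers is a protocol, a sequence of atomic actions rather than one action.
export function addTwoNumbers(calc: Calculator, a: number, b: number): number {
  calc.enter(a);          // first action: put the first operand in the display register
  calc.add(b);            // second action: add the second operand (the plus key)
  return calc.display();  // the result appears in the display register
}

// The same "enter" action could be reused by other protocols, for example one
// that multiplies two numbers, if the interface also offered a multiply action.
```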

The second purpose of an interface is to tell a module developer what to implement. Each action is implemented as an operation on internal data and often requires invoking actions on third-party modules. Thus, in a typical scenario one module invokes an action on another, which as a result invokes an action on a third, and so on. Through encapsulation, an interface should hide irrelevant internal implementation details and preclude bypassing the interface to expose implementation details.[6]

An interface is open when it is available, well documented, and unencumbered by intellectual property restrictions (see chapter 8). Interfaces remain an important manifestation of modularity within proprietary software, but open interfaces are nonproprietary and are intended to be invoked by customers or by software from other vendors. There are different degrees of openness; for example, an interface may be encumbered by patents that are licensed freely to all comers; or there may be moderate license fees, the same for all, and the interface might still be considered open.

Example Modem manufacturers provide an open interface that allows other software to send and receive data over a communication link. This open interface can be used by various vendors of communication software to make use of that communication link. From a user perspective, the ability to use many communication software packages with a modem is valuable, which in turn encourages the modem manufacturer to ensure the availability of many complementary software packages.

An open interface designed to accept a broad class of extensions is called an application programming interface (API). By extension, we mean a module or modules added later, not only following development but following deployment. The API terminology arose because the interface between an application and an operating system was its first instance: applications can be considered extensions of infrastructure (operating system). Today, the acronym API is used more generally: an API may allow one application to extend another or one piece of infrastructure to extend another. It helps that the acronym has largely replaced the words it represents in the common vernacular.

Example A Web browser accommodates plug-ins, which can extend its functionality. For example, Shockwave supports animated objects in Web presentations. Web developers must purchase Shockwave development tools to embed these animated objects, and users must download and install the free Shockwave plug-in to view them. The Web browser provides an API that accommodates various plug-ins including Shockwave; some of the plug-ins may not have existed at the time the API was defined, the browser was programmed, or the browser was distributed to the user.
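The mechanics can be suggested by a sketch (hypothetical names; this is not the actual browser plug-in API): the host publishes an interface that future extensions must satisfy, and plug-ins unknown at the time the host was built can still be registered and invoked through it.

```typescript
// Hypothetical sketch of an API designed for extension (not a real browser's
// plug-in API). The host defines the interface; plug-ins written after the
// host shipped can still be registered and invoked through it.
export interface Plugin {
  /** Content types this plug-in can render, e.g. "application/x-animation". */
  readonly contentTypes: string[];
  render(content: Uint8Array, container: unknown): void;
}

export class Browser {
  private plugins = new Map<string, Plugin>();

  registerPlugin(plugin: Plugin): void {
    for (const type of plugin.contentTypes) this.plugins.set(type, plugin);
  }

  renderEmbeddedObject(contentType: string, content: Uint8Array): void {
    const plugin = this.plugins.get(contentType);
    if (!plugin) throw new Error(`no plug-in installed for ${contentType}`);
    plugin.render(content, null); // a real host would pass the enclosing page element
  }
}
```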

A firm that provides and documents an API is opening up its software to extension by other firms, much as a TV manufacturer opens up its display device to a variety of video sources. This is a serious strategic business decision, and it may be irreversible as both a matter of business (customers will be upset if software products suddenly fail to compose) and law (in some circumstances this may be considered an unfair business practice; see chapter 8). APIs form the basis of open systems—software systems with ample APIs, giving customers rich possibilities for extension and allowing them to mix and match solutions from different suppliers, thus reducing switching costs and lock-in (see chapter 9). The growing popularity of open systems is one of the most dramatic trends in the software industry over the past few decades (see chapter 7).

4.3.5 Emergence

Architecture and development focus on defining and implementing modules that can later be composed (see section 3.2.12). The act of composing modules and doing whatever it takes to make them work properly together (including modifying the modules themselves as well as configuring and extending them) is called integration. A single module in isolation does not normally provide self-contained capability useful to a user; it must be composed with other modules. Any additional functionality that arises from the composition of modules is called emergence: the integrated whole is more than the sum of its modular parts.[7]

Example A classic example of emergence in the material world is the airplane, which is able to fly even though each of its subsystems (wings, engines, wheels) cannot. The flying behavior is said to emerge from the composition of those subsystems. In a computer system, neither a computer nor a printer can print out a document; that emerges from the composition of the computer and printer.

Note that emergence is different from extension. Emergence arises from the composition of existing modules, whereas extension proceeds from full knowledge of an existing module and adds new capabilities to it. This distinction is crucial to the understanding of components (see chapter 7).

Emergence is a source of value in the development process: module composition, and the emergent behavior it triggers, add value beyond that of the individual modules. (Firms called system integrators specialize in module integration; the value they add is emergence; see chapter 6.)

4.3.6 Achieving Composability

In many instances, modules must be composed even if they weren't implemented within the same development project or organization, or designed explicitly to work together. Examples include two applications purchased from different suppliers that must work together (see section 3.2.13) or software components (see chapter 7). This leads to two fundamentally different approaches to architecture: decomposition from system requirements and composition from available components.

Composability of modules is actually difficult to achieve, especially for modules not designed and developed in the same project. It requires two properties: interoperability and complementarity.

For two modules to communicate at all, three requirements must be met. First, some communication infrastructure must enable the physical transfer of bits.[8] Second, the two modules need to agree on a protocol that can be used to request communication, signal completion, and so on. Finally, the actual messages communicated must be encoded in a mutually understood way. Modules meeting these three requirements are said to be interoperable.
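These three requirements can be kept cleanly separate, as in the following sketch (hypothetical interface names of our own): a transport that moves bits, a protocol governing the exchange, and an encoding both parties understand.

```typescript
// Illustrative sketch with hypothetical names, separating the three
// requirements for interoperability.
export interface Transport {                 // 1. infrastructure that moves bits
  send(bytes: Uint8Array): Promise<void>;
  receive(): Promise<Uint8Array>;
}

export interface Protocol {                  // 2. agreed rules of the exchange
  request(transport: Transport, payload: Uint8Array): Promise<Uint8Array>;
}

export interface Encoding<M> {               // 3. mutually understood message encoding
  encode(message: M): Uint8Array;
  decode(bytes: Uint8Array): M;
}

// Two modules that share all three layers are interoperable; whether the
// exchange is useful is a separate question of complementarity.
export async function exchange<M>(
  protocol: Protocol,
  transport: Transport,
  encoding: Encoding<M>,
  message: M
): Promise<M> {
  const reply = await protocol.request(transport, encoding.encode(message));
  return encoding.decode(reply);
}
```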

Mere interoperability says nothing about the meaningfulness of communication. To enable useful communication, the modules need to complement each other in terms of what functions and capabilities they provide and how they provide them.

Example Imagine a facsimile machine that calls an answering machine, which then answers and stores the representation of the facsimile in its memory.[9] Anyone observing either the facsimile or answering machine would conclude that they were interoperable, but in fact no image has been conveyed because the functionalities aren't complementary. The answering machine has no capability to display or print the stored facsimile, and the facsimile machine can't create an audible message to communicate something meaningful to the user.

Modules that are interoperable and complementary (with respect to some specific opportunity) are said to be composable (with respect to that opportunity). Composable modules offer additional value through the emergence resulting from composition.

Example The Web browser (on a user's desktop computer) and the complementary Web server (an information supplier) together provide an example of interoperability and complementarity. The browser and server must be interoperable to exchange and display document pages. In a more complicated scenario, Web pages contain interactive elements like forms that accept information, implemented by splitting functionality between the browser and the server, which complement one another with respect to this opportunity. The browser and server compose to provide capabilities that neither provides individually. In a more involved scenario, the browser provides an API for plug-in modules downloaded from the server, enabled by rich composition (interoperability and complementarity) standards between plug-in and browser.

Usability (see section 3.2.7) can be considered a form of composability of a user and a software application.

[3]This is essentially the same observation that underlies Metcalfe's law of network effects (see chapter 9).

[4]Where subsystem composition is guided by architecture, those system properties that were successfully considered by the architect are achieved by construction rather than by observing randomly emerging composition properties. For example, a security architecture may put reliable trust classifications in place that prevent critical subsystems from relying on arbitrary other subsystems. Otherwise, following this example, the security of an overall system often is only as strong as its weakest link.

[5]Fully effective encapsulation mandates that implementation details not be observable from the outside. While this is a desirable goal, in the extreme it is unattainable. Simply by running tests against a module, or in the course of debugging during a module integration phase, a module will reveal results that allow a programmer to infer properties of the module's implementation not stated in its abstraction.

[6]Interfaces are the flip side (technically the dual) of an architect's global view of system properties. An interface determines the range of possible interactions between two modules interacting through that interface and thus narrows the viewpoint to strictly local properties. Architecture balances the views of local interaction and global properties by establishing module boundaries and regulating interaction across those boundaries through specified interfaces.

[7]Sometimes emergence is used to denote unexpected or unwelcome properties that arise from composition, especially in large-scale systems where very large numbers of modules are composed. Here we use the term to emphasize desired more than unexpected or unwanted behaviors.

[8]Bits cannot be moved on their own. What is actually moved are photons or electrons that encode the values of bits.

[9]This is a simplification of a real facsimile machine, which will attempt to negotiate with the far-end facsimile machine, and failing that will give up.



