7.1 Industrial Organization of the Software Industry

A relation between software architecture and industrial organization was pointed out in section 6.1: the boundaries of industry responsibility tend to follow the interfaces of software modules at the top level of architectural decomposition. Is the architecture determined by the marketplace, or is industrial organization determined by architecture? What is the industrial organization of the software creation industry, how is it changing, and why? This chapter deals with these issues.

7.1.1 Applications and Infrastructure

The most fundamental architectural concept in software is the decomposition into application and infrastructure. With some notable exceptions, firms in the industry generally specialize in one or the other.

Example There are three major types of exceptions. One is firms that combine the businesses of infrastructure supplier and application software supplier, for instance, Apple (particularly historically) and Microsoft. They strongly encourage independent application software suppliers to use their platforms but also supply their own applications. Another exception is firms that combine the business of infrastructure software supply and consulting services, the latter focused on helping end-user organizations acquire and provision new applications. Examples are IBM and Compaq, the latter a merger (Compaq and Digital) specifically designed to combine these businesses. A third exception is firms that combine infrastructure software with contract development of applications, for instance, IBM.

While both applications and infrastructure require technical development skills, the core competencies are different. Applications focus the value proposition on end-users, and infrastructure provides value primarily to application developers and to operators. Applications are valued most of all for functionality and usability; their performance and technical characteristics are more dependent on infrastructure. It is advantageous to move as much technical capability to the infrastructure as possible, so application developers can focus on user needs. This leads to a natural maturation process whereby novel technical functionality is first pioneered in leading-edge applications and then migrates to enrich infrastructure, or applications evolve (at least partially) into infrastructural platforms for other applications.

Example The playing of audio and video media was originally built into specialized applications. Today much of the required support is found in common infrastructure, usually at the level of operating systems. Individual productivity applications (such as office suites) have been augmented with ever richer user programmability support, so many interesting specialized applications now build on them (see section 4.2.7).

From a business perspective, application software has the advantage over infrastructure of providing value directly to the end-user, who ultimately pays for everything in the software value chain. This direct relationship provides rich opportunities to differentiate from competitors and can be leveraged to sell complementary products. Ceding this valuable direct relationship between supplier and user is a disadvantage of the application service provider model (from the supplier's perspective) and also a motivation for becoming a service provider as well as a software supplier.

In some cases there are considerable marketplace obstacles to application adoption that make business difficult for application suppliers, such as lock-in and network effects (see chapter 9), but in other cases these are less important. Application software suppliers who provide variations on competitive applications find lock-in a greater obstacle but also benefit from a moderation of network effects, for instance, through providing backward compatibility or translations (Shapiro and Varian 1999b). There are many opportunities to pursue entirely new application categories, as illustrated by the recent explosion of Internet e-commerce.

As observed in section 3.1, applications are becoming increasingly numerous, diverse, and specialized. This is especially true of sociotechnical applications, which are often specific to the group or organizational mission they serve. This has several implications for industrial organization. First, a strong separation of applications and infrastructure reduces the barriers to entry for new application ideas. Where applications and infrastructure are supplied by separate firms, the latter find it advantageous to define open and well-documented application programming interfaces (APIs) that make it easier to develop applications, which in turn attracts application ideas from more sources and provides more diversity and competitive options to users. A good test of the application-infrastructure separation is whether an application can be developed and deployed without the knowledge or cooperation of the infrastructure supplier or operator.

Second, application diversity is enhanced by doing whatever is necessary to make application development faster, cheaper, and less demanding of development skill. This includes incorporating more needed functionality in the infrastructure, making use of software components (see section 7.3), and adopting rapid prototyping and end-user programming methodologies (see section 4.2).

Third, application diversity is enhanced by an experimental approach seeking inexpensive ways to try out and refine new application ideas (see section 3.1.6). Applications should be a target for industrial and academic research, because a research environment is well suited to low-cost experiments and the refinement of ideas unfettered by the immediate goal of a commercially viable product (NRC 2000b) (see chapter 8). In reality, applications have traditionally not been an emphasis of the information technology (IT) research community for many reasons, including the importance of nontechnical considerations, the need for specific end-user domain knowledge, the difficulty of gaining access to users for experimentation, and the inherent difficulty in assessing experimental outcomes.

Fourth, innovative new applications are a good target for venture capital funding and startup companies. The funding of competing startups is a good mechanism for the market to explore alternative application approaches. Venture capitalists specialize in managing the high risks of new applications and have effective mechanisms to abandon as well as start new businesses. This should not rule out large company initiatives, but large companies are usually not attracted by the limited revenue potential of a specialized application, put off by the financial risks involved, and sensitive to the opportunity costs of tying up scarce development resources.

Returning to the separation of application and infrastructure, the successes here also build on the economics underlying infrastructure (see chapter 9). If a new application requires a new infrastructure, then the required investment (as well as investment risk) is much larger than if the application is built on existing infrastructure. Thus, the separation of applications from infrastructure reduces barriers to entry and encourages small companies.

Example The history of the telephone industry illustrates these factors. Telephone companies are historically application service providers with one primary application—telephony. They are also infrastructure service providers, providing not only the infrastructure supporting telephony but also data communications (e.g., the Internet) and video (e.g., broadcast television distribution). They have shown interest in expanding their application offerings, primarily in directions with mass market appeal. In the United States, the telephone industry has launched three major application initiatives of this character: video telephony (extending telephony to include video), videotext (an early proprietary version of the Web), and video-on-demand. In all three cases, the financial risk in deploying an expensive capital infrastructure to support a new application with uncertain market potential proved too great, and the efforts were abandoned. The telephone industry also illustrates numerous successes in deploying applications building on the existing telephony infrastructure, including products from independent suppliers like the facsimile machine and voiceband data modem.

The telecommunications industry strategy addresses one serious challenge following from the complementarity of applications and infrastructure and from indirect network effects: an infrastructure supporting a diversity of available applications offers more value to users, and an application utilizing a widely available infrastructure enjoys an inherently larger market. Industry thus faces the chicken-and-egg conundrum that a new infrastructure cannot be marketed without supported applications, and an application without a supporting infrastructure has no market. The telephone industry strategy has been to define a compelling application with mass market appeal and then to coordinate the investment in application and infrastructure, while making the infrastructure fairly specialized to support that application.[1]

The computer industry has generally followed the different strategy of deploying a generic infrastructure that supports a diversity of applications. In part this can be attributed to the culture of the industry, flowing from the original idea of programmable equipment whose application functionality is not determined at the time of manufacture. The Internet (a computer industry contribution to communication) followed a similar strategy; the core design philosophy for Internet technologies always valued low barriers to entry for new applications and a diversity of applications.

However, a pure strategy of deploying a generic infrastructure and waiting for applications to arrive is flawed because it does not address the issue of how to get infrastructure into the hands of enough users to create a market for applications that build on that infrastructure. The computer industry has found numerous ways to deal with this challenge (and has also suffered notable setbacks), all focused on making one or more compelling applications available to justify investment in infrastructure. An approach for totally new infrastructure is to initially bundle a set of applications with it, even while keeping the infrastructure generic and encouraging other application suppliers (e.g., the IBM PC and the Apple Macintosh were both bundled initially with a set of applications, and the Internet initially offered file transfer and e-mail). For infrastructure that has similar functionality to existing infrastructure, interoperability with older applications and offering higher performance characteristics for those older applications is another approach (e.g., layering; see section 7.1.3). Related to this, it is common for infrastructure to be incrementally expanded while maintaining backward compatibility for older applications. Application and infrastructure suppliers can explicitly coordinate themselves (e.g., by sharing product road maps; see section 7.2). Yet another approach is for applications to evolve into infrastructure by offering APIs or open internal interfaces (e.g., the Web; see section 7.1.2).

Another difference between the computer and telecommunications industries is the long-standing role of a service provider in telecommunications. Selling applications as a service bundled with a supporting infrastructure is advantageous in providing a single integrated solution to customers and freeing them of responsibility for provisioning and operation. The software industry is moving in this direction with the application service provider model.

The goal should be to combine the most desirable features of these models, and indeed the separation of application and infrastructure at the technological level is not inconsistent with a service provider model and a bundling of application and infrastructure as sold to the user. One of the trade-offs involved in these strategies is summarized in the fundamental relationship (Shapiro and Varian 1999b):

Revenue = Market share × Market size.

An infrastructure that encourages and supports a diversity of applications exchanges market share (by ceding many applications to other suppliers or service providers) for an increase in total market size (by providing more diversity and value to users). Just as software suppliers must decide on their degree of application/infrastructure separation, service providers face similar issues. They can offer only applications bundled with infrastructure, or they can enhance the diversity of application offerings, while ceding revenues and part of the customer relationship, by giving third-party application providers access to their infrastructure. In the latter case, use-based infrastructure pricing models can maximize the financial return from application diversity. These issues will become more prominent with the emerging Web services (see section 7.3).
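
To make the trade-off concrete, here is a minimal sketch with invented illustrative numbers (not from the text): an open strategy can cede market share to third-party suppliers and still win if application diversity grows the overall market enough.

```python
# Illustrative only: invented numbers showing how ceding market share to
# third-party application suppliers can still raise total revenue when the
# added application diversity grows the overall market enough.

def revenue(market_share: float, market_size: float) -> float:
    """Revenue = market share x market size (Shapiro and Varian)."""
    return market_share * market_size

# Closed strategy: the supplier keeps nearly all applications to itself.
closed = revenue(market_share=0.9, market_size=100.0)  # 90.0

# Open strategy: third parties take most application revenue, but the
# diversity of offerings quadruples the size of the total market.
open_ = revenue(market_share=0.3, market_size=400.0)   # 120.0

print(f"closed: {closed}, open: {open_}")  # the open strategy wins here
```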

7.1.2 Expanding Infrastructure

The growing cost of software development and the shortage of programming professionals concern software development organizations. This is exacerbated by the increasing specialization and diversity of applications (see section 3.1.5); specialized applications may be economically feasible only if development costs can be contained. Several trends reduce development costs, including improved tools, rapid development methodologies (see section 4.2), greater use of software components and frameworks (see section 7.3.6), and expanding infrastructure to make it cheaper and faster to develop and deploy applications.

The idea behind expanding infrastructure is to observe what kinds of functionality application developers reimplement over and over, and to capture those functionalities within the infrastructure in a generic and flexible way, so that they can meet the needs of a wide range of present and future applications. The end-users of infrastructure software include application developers and operators.

Example Many applications need authentication and access control for the end-user (see section 5.4). Many aspects of this capability are generic and separated from the specific needs of each application. If authentication and access control are included within the infrastructure to be invoked by each application for its own purposes, reimplementation is avoided and users benefit directly by being authenticated only once for access to multiple applications.
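
As a concrete illustration of this factoring, here is a minimal sketch assuming a hypothetical token-based AuthService; a real deployment would use an established standard (e.g., Kerberos or OAuth) rather than this toy scheme.

```python
# Minimal sketch of authentication/access control factored into shared
# infrastructure. All names here are hypothetical; the point is that each
# application invokes the shared service rather than reimplementing it.
import secrets

class AuthService:
    """Infrastructure service: authenticates a user once, then vouches
    for that user to any application that trusts the service."""
    def __init__(self):
        self._passwords = {"alice": "s3cret"}  # toy credential store
        self._sessions = {}                    # token -> username

    def login(self, user: str, password: str) -> str:
        if self._passwords.get(user) != password:
            raise PermissionError("bad credentials")
        token = secrets.token_hex(16)
        self._sessions[token] = user
        return token

    def whoami(self, token: str) -> str:
        return self._sessions[token]           # raises KeyError if invalid

class Application:
    """Any application delegates authentication to the shared service."""
    def __init__(self, name: str, auth: AuthService):
        self.name, self.auth = name, auth

    def handle(self, token: str) -> str:
        user = self.auth.whoami(token)         # single sign-on check
        return f"{self.name}: hello, {user}"

auth = AuthService()
token = auth.login("alice", "s3cret")           # authenticated once...
print(Application("mail", auth).handle(token))  # ...recognized by many apps
print(Application("calendar", auth).handle(token))
```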

These economic realities create an opportunity for the infrastructure to expand in capability over time. This may happen directly, or sometimes software developed as part of an application can be made available to other software and subsequently serve as infrastructure.

Example Early applications had to manage much of the graphical user interface on their own, but later this capability was moved to the operating system (initially in the Apple Macintosh). Software to format screen documents based on the Web markup language (HTML) was first developed in the Web browser but was also potentially useful to other applications (like e-mail, which frequently uses HTML to format message bodies). For example, Microsoft made this HTML display formatting available to other applications in its Windows operating system and to the system itself in displaying help screens. The company provided an API to HTML formatting within the Internet Explorer browser and included the Internet Explorer in the Windows software distribution.

Sometimes, an entire application that becomes ubiquitous and is frequently composed into other applications effectively moves into the infrastructure category.

Example The Web was originally conceived as an information access application for scholarly communities (World Wide Web Consortium 2002) but has evolved into an infrastructure supporting e-commerce and other applications. Many new distributed applications today incorporate the Web server and browser to present application-specific information to the user without requiring application-specific client software. Office suites are commonly used as a basis for custom applications serving vertical industry markets.

Value-added infrastructure adds capability to an existing infrastructure.

Example A major area for innovation in infrastructure is middleware, defined roughly as infrastructure software that builds on and adds value to the existing network and operating system services. Middleware sits between the existing infrastructure and applications, calling upon existing infrastructure services to provide enhanced or extended services to applications. An example is message-oriented middleware, which adds numerous message queuing and prioritization services valuable to work flow applications.
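
A minimal sketch of the kind of service such middleware adds, using Python's standard priority queue as a stand-in for a real message-queuing product; the messages and priorities are invented.

```python
# Sketch of one service a message-oriented middleware layer adds on top
# of basic connectivity: prioritized, queued delivery between loosely
# coupled work flow steps. A toy model, not a real MOM product.
import queue

mq = queue.PriorityQueue()  # lower number = higher priority

# Producers (e.g., a purchase-order application) enqueue messages with
# priorities; the middleware orders them for delivery.
mq.put((1, "approve purchase order #17"))   # urgent
mq.put((5, "archive last month's orders"))  # background
mq.put((2, "notify accounts payable"))

# A consumer (e.g., an accounts-payable application) dequeues work in
# priority order whenever it is ready -- sender and receiver need not
# be running at the same time.
while not mq.empty():
    priority, message = mq.get()
    print(priority, message)
# -> 1 approve purchase order #17
#    2 notify accounts payable
#    5 archive last month's orders
```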

Market forces encourage these additions because of the smaller incremental investments compared to starting anew and because of the ability to support legacy applications utilizing the earlier infrastructure. From a longer-term perspective, this is problematic in that it tends to set in stone decisions made earlier and to introduce unnecessary limitations, unless designers are unusually visionary. Economists call these path-dependent effects.

Example Early Internet research did not anticipate streaming audio and video services. The core Internet infrastructure therefore does not include mechanisms to ensure bounded delay for transported packets, a capability that would be useful for delay-sensitive applications like telephony or video conferencing.[2] While acceptable quality can be achieved without these delay guarantees, better quality could be achieved with them. Unfortunately, once a packet is delayed too much, there is no way to make up for this, as time moves in only one direction. Hence, no value-added infrastructure built on the existing Internet technologies can offer delay guarantees—a modification to the existing infrastructure is required. Value-added infrastructure lacks complete freedom to overcome earlier design choices, particularly in performance dimensions.

The chicken-and-egg conundrum—which comes first, the applications or the infrastructure they depend on—is a significant obstacle to establishing new infrastructure capability. One successful strategy has been to move infrastructure with a compelling suite of applications into the market simultaneously, presuming that even more applications will come later.

Example The Internet illustrates this, as it benefited from a couple of decades of refinement in the academic research community before commercialization. A key was developing and refining a suite of "killer apps" (e.g., file transfer, e-mail, Web browsing). This, together with a substantial established community of users, allowed the Internet to reach commercial viability and success quickly once it was made commercially available. This is an oft-cited example of the important role of government-funded research (see chapter 8), which subsidized experimentation and refinement of the infrastructure and allowed a suite of compelling applications to be developed. Such externally funded experimental infrastructure serves as a test bed for the new space to be populated.

Middleware illustrates another strategy. Applications and (future) infrastructure can be developed and sold as a bundle while maintaining strict modularity so that the infrastructure can later be unbundled and sold separately. A variation is to establish APIs to allow independent use of capabilities within an application.

Example A way to establish a message-oriented middleware (MOM) product might be to develop and bundle it with an enterprise work flow application, such as a purchase order and accounts payable application. By providing open APIs to the MOM capabilities, other application suppliers are encouraged to add application enhancements or new application capabilities that depend on the MOM. If this strategy is successful, eventually the MOM assumes a life of its own and can be unbundled and sold separately as infrastructure.

7.1.3 Vertical Heterogeneity: Layering

The modularity of infrastructure is changing in fundamental ways, driven primarily by the convergence of the computing (processing and storage) and telecommunications industries. By convergence, we mean two industries that were formerly independent becoming competitive, complementary, or both. This convergence is manifested primarily by the Internet's enabling of globally distributed software (see section 4.5), leading to applications that emphasize communication using distributed software (see section 3.1). This led to competing data networking solutions from the telecommunications and computer industries[3] and made networking complementary to processing and storage.

The top-level vertical architecture of both the telecommunications and computer industries prior to this convergence resembled a stovepipe (see figure 7.1). This architecture is based on market segmentation, defining different platforms for different application regimes. In the case of computing, mainframes, servers (originally minicomputers and later microprocessor-based) and desktop computers were introduced into distinct market segments (see table 2.3), each segment offering typically two or three major competitive platforms. Each segment and platform within that segment formed a separate marketplace, with its own applications and customers. Mainframes served back-office functions like accounting and payroll, servers supported client-server departmental functions like customer service, and desktop computers served individual productivity applications.

Figure 7.1: Historically, the telecommunications and computing industries both used an architecture resembling a stovepipe.

Similarly, the telecommunications industry segmented the market by application or information medium into telephony, video, and data. Each of these media was viewed as a largely independent marketplace, with mostly separate infrastructure sharing some common facilities.

Example Telecommunications firms have always shared right-of-way for different applications and media, and also defined a digital multiplexing hierarchy (a recent example is SONET, or synchronous optical network) that supported a mixture of voice, data, and video services.

While the telecommunications and computer architectures look superficially similar, historically the approach has been different, primarily arising out of the importance of the service provider in telecommunications but not in computing. With notable exceptions, in telecommunications the infrastructure and application suppliers sold to service providers, who did the provisioning and operation, and the service providers sold application services (and occasionally infrastructure services) to users. In computing, it was common for infrastructure suppliers to sell directly to users or end-user organizations, who acquire (or develop themselves) applications and do their own provisioning and operation. This is partly due to the different cultures and the relative weakness of data networking technologies (necessary to sell application services based on processing and storage) in the early phases of the computer industry.

These distinct industry structures led to fundamental differences in business models. Firms in the telecommunications industry historically saw themselves as application service providers, viewed the application (like telephony or television-video distribution) as their business opportunity, and constructed a dedicated infrastructure for their application offerings. Infrastructure was a necessary cost of business to support applications, the primary business opportunity. Further, service providers viewed alternative applications and application suppliers as a competitive threat.

In contrast, the relatively minor role of a service provider in the computer industry and the cultural influence of the technical genesis of computing (programmability, and the separation of application from infrastructure) resulted in a strikingly different business model. Infrastructure and application suppliers sold independently to end-user organizations, and the users integrated the two. As a result, neither the application supplier nor the user perceived much freedom to define new infrastructure but focused on exploiting existing infrastructure technologies and products. The infrastructure supplier encouraged a diversity of complementary applications and application suppliers to increase the value of its infrastructure and simultaneously provide customers better price, quality, and performance through application competition.

To summarize the difference in the telecommunications and computing business strategies, in telecommunications the infrastructure chased the applications, whereas in computing the applications chased the infrastructure. While there are many necessary qualifications and notable exceptions to this statement, for the most part it rings true. In a sense, the telecommunications business model formed a clearer path to dealing with the indirect chicken-and-egg network effects mentioned earlier. Regardless of whether applications chase infrastructure or the reverse, investments in new infrastructure technologies have to proceed on faith that there will be successful applications to exploit new infrastructure. In the telecommunications industry this was accomplished by directly coordinated investments, and in the computer industry an initial suite of applications was viewed as the cost of establishing a new infrastructure.

Example To complement its PC, IBM initially supplied a suite of personal productivity applications, as did Apple Computer with the Macintosh. In both cases, open APIs in the operating system encouraged outside application developers, and it was not long before other application software suppliers supplanted the original infrastructure supplier's offerings (particularly for the PC).

This is all historical perspective, and not an accurate view of the situation today, in part because of the convergence of these two industries. The infrastructure has shifted away from a stovepipe form and toward a horizontal architecture called layering. The layered architecture organizes functionality as horizontal layers (see figure 7.2), each layer elaborating or specializing the functionality of the layer below. Each layer focuses on supporting a broad class of applications and users rather than attempting to segment the market. A natural way to enhance and extend the infrastructure is to add a new layer on top of existing layers. If applications are permitted to directly access services from lower layers, the addition of a new layer does not disrupt existing applications but creates opportunities for new applications. While applications are allowed to access any layer, each layer is usually restricted to invoke only the services of the layer immediately below. This restriction can be eased by allowing two or more layers in parallel, at the same level.

Figure 7.2: The layered architecture for infrastructure modularizes it into homogeneous horizontal layers.
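
The layering discipline just described can be made concrete with a small sketch: each layer invokes only the services of the layer immediately below, while an application remains free to call into any layer. The class names are hypothetical stand-ins for real protocol layers.

```python
# Minimal sketch of the layering discipline: each layer elaborates the
# services of the layer immediately below it, while an application may
# invoke any layer directly.

class PacketLayer:                      # lowest layer (IP-like)
    def send(self, data: bytes) -> str:
        return f"packet({data!r})"

class ReliableLayer:                    # adds value (TCP-like)
    def __init__(self, below: PacketLayer):
        self.below = below              # invokes only the layer below
    def send_reliably(self, data: bytes) -> str:
        return "ack-tracked " + self.below.send(data)

class RequestLayer:                     # adds value (HTTP-like)
    def __init__(self, below: ReliableLayer):
        self.below = below
    def request(self, url: str) -> str:
        return self.below.send_reliably(url.encode())

ip = PacketLayer()
tcp = ReliableLayer(ip)
http = RequestLayer(tcp)

print(http.request("example.org/page"))  # app uses the top layer...
print(tcp.send_reliably(b"raw stream"))  # ...or a middle layer...
print(ip.send(b"datagram"))              # ...or even the bottom directly
```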

Example As illustrated in figure 7.3, the Internet is built on a foundation layer called the internet protocol (IP) and on top of that two widely used layers, transmission control protocol (TCP) and user datagram protocol (UDP). IP offers a service that conveys packets from one host to another with no guarantee of delivery order or reliability (analogous to sending postcards through the postal system). TCP and UDP invoke the services of IP to direct packets to a specific application running on the host. TCP also offers reliability and guaranteed ordering of delivery, achieved by detecting lost (or excessively delayed) packets and resending them. Later, the internet inter-ORB protocol (IIOP) and the real-time transport protocol (RTP) layers were added on top of TCP and UDP, respectively, to support distributed object-oriented applications and streaming audio and video. HTTP (hypertext transfer protocol), the main protocol underlying the Web and easily recognized in Web addresses (http://), is an important protocol layered on TCP. In the future, more layers may be added; future layers and applications are permitted to access lower layers. Applications can even invoke IP services directly, although this would be unusual.

Figure 7.3: A simplified architecture of the Internet illustrates how new layers are added while future layers and applications can still invoke services of previous layers.
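
For a concrete view of how an application invokes these layers, here is a minimal sketch using the standard sockets API; the addresses and port numbers are arbitrary illustrative values.

```python
# Sketch of an application choosing between the TCP and UDP layers via
# the standard sockets API.
import socket

# TCP: connection-oriented, reliable, ordered byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("127.0.0.1", 8080))        # would contact a listening server
# tcp.sendall(b"GET / HTTP/1.0\r\n\r\n")  # e.g., HTTP layered on TCP

# UDP: connectionless datagrams, with no delivery or ordering guarantee --
# the layer RTP builds on for streaming media.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"media packet", ("127.0.0.1", 9090))  # fire and forget

tcp.close()
udp.close()
```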

The layered architecture now dominates the converged computing and telecommunications industries, largely displacing the stovepipe architecture historically characteristic of both. Several forces drive this shift, with accompanying implications for industrial structure. The first is the effect of the Internet on the computer industry. By enabling distributed applications to communicate across platforms, it creates a new technical and commercial complementarity among them. End-users do not want to uniformly adopt a single platform to use a particular distributed application, nor do they want to participate in an application with only a subset of other users, reducing value because of network effects. Suppliers of new distributed applications do not want to limit their market to a subset of users on a given platform, or take specific measures to support different platforms. For applications to be easy to develop, deploy, and operate in an environment with heterogeneous platforms, application suppliers want to see homogeneity across those platforms. Horizontal homogeneity can potentially be achieved in a layered architecture by adding layers above the existing heterogeneous platforms, the new layers hiding the heterogeneity below. Because of path-dependent effects, the result is a hybrid stovepipe-layered architecture.

Example The virtual machine and associated environment for code portability can be viewed as a layer added to the operating system within each platform (see section 4.4.3). As illustrated in figure 7.4, this new layer adds a uniform execution model and environment for distributed software applications. It can be viewed as a homogeneous layer that sits on top of existing heterogeneous platforms (like Windows, Mac OS, and different forms of UNIX). Further, there is no reason (other than inconvenience in provisioning and application composability) not to have two or more parallel virtual machine layers supporting different groups of applications.

Figure 7.4: The widely deployed virtual machine can create a homogeneous spanning layer for applications that hides the heterogeneity of platforms.

A layer that hides the horizontal heterogeneity of the infrastructure below and is widely deployed and available to applications is called a spanning layer. The most important spanning layer today, the internet protocol, was specifically designed to hide heterogeneous networking technologies below. The virtual machine is another example of a spanning layer, arguably not yet widespread enough to deserve this appellation. One way to view the relation between these spanning layers was illustrated earlier in the "hourglass" of figure 4.6.

A second driver for layering is the trend toward applications that integrate processing, storage, and communication and mix data, audio, and video (see section 3.1). In contrast to the stovepipe, each horizontal layer (and indeed the entire infrastructure) supports a variety of technologies, applications, and media.

Example Within the communication infrastructure, the IP layer has been extended to support data by the addition of the TCP layer and extended to support streaming audio media by the addition of an RTP layer on top of UDP (see figure 7.3). A given application can mix these media by accessing the TCP layer for data and the RTP layer for audio and video.

A third and related driver for layering is that value added to the infrastructure can support the composability of different applications (see section 3.2.12), one of the most important roles of infrastructure. By binding different applications to different infrastructures, a stovepipe architecture is inherently constrained in its ability to support composability, but a layered architecture is not.

A fourth driver for layering is that it allows incremental extension and elaboration while continuing to support existing applications. This reduces the barrier to entry for applications that require new infrastructure capabilities, since most of the supporting infrastructure does not need to be acquired or provisioned. Looking at it from the computer industry perspective (infrastructure first, applications later), this allows incremental investments in infrastructure for both supplier and customer.

Modularity introduces inefficiency, and layering is no exception. Compared to a monolithic stovepipe, layering tends to add overhead, no small matter in a shared infrastructure where performance and cost are often important. The processes described by Moore's law are thus an important enabler for layering (see section 2.3).

Layering fundamentally shifts the structure of industry competition. Each layer depends on the layers below (they are complementary), and an application requires the provisioning and operation of all layers upon which it depends. This creates complementary infrastructure suppliers, and a burden on infrastructure provisioning to integrate layers. Competition in the infrastructure is no longer focused on segmentation of the market for applications but rather on competition at each layer, each supplier attempting to provide capabilities at that layer for a wide range of applications. The integration of layers requires coordination among suppliers (see section 7.2), and functionality and interfaces are fairly constrained if alternative suppliers are to be accommodated.

Layering fundamentally changes the core expertise of industry players. Expertise about particular application segments no longer resides with infrastructure suppliers but primarily within application suppliers. Market forces encourage infrastructure suppliers to extend the capabilities they supply to serve all (or at least a broader range of) applications because this increases market size. This increases their needed range of expertise, and if this proves too daunting, they may narrow their focus vertically by specializing in only one or two layers. Startup companies especially face a challenge in this industry structure because of the generality and hence high development costs and wide-ranging expertise required. Thus, startup companies tend to focus either at the top (applications) or bottom (technology) of the layering architecture, where diverse solutions thrive and there are fewer constraints and less need to coordinate with others (see section 7.1.5).

Example The physical layer of communication (transporting a stream of bits via a communication link) is a ripe area for startup companies, especially in light of the variety of media available (optical fiber, radio, free-space optical, coaxial cable, and wirepair). As long as they interface to standard solutions for the layers above, innovation is relatively unconstrained. The analogous opportunity in processing is the microprocessor, so one might expect a similar diversity of startups. In fact, microprocessor startups are rare because the instruction set is deeply intertwined with the software layers above, greatly limiting the opportunity for differentiation. The emulation or virtual machine idea is one way to address this, but this reduces performance, one of the prime means of differentiation. An interesting attempt at combining the virtual machine and the custom microprocessor concepts is Transmeta's Crusoe, a small, energy-efficient processor with a proprietary instruction set complemented by a software layer that translates standard Intel instruction sequences into Crusoe instructions.

It is useful to examine the appropriate modularity of layering in more detail. It is striking that no single organization has responsibility for consciously designing the overall layered architecture. Rather, it is determined by research and company initiatives, collaborations among companies, and standardization bodies. The result is "creative chaos" that introduces strengths and weaknesses. On the plus side, innovations are welcome from many quarters, and good ideas have a reasonable chance of affecting the industry. On the negative side, application suppliers and provisioners must deal with a lot of uncertainty, with competing approaches to consider and no clear indication as to which ones will be successful in the long term.

Example The first successful attempt at enabling cross-platform middleware as a spanning layer was the Object Management Group's common object request broker architecture (CORBA), a suite of standards to enable distributed object-oriented applications. CORBA has been successful in intraenterprise integration, where platform variety arises out of acquisitions and mergers and yet cross-platform integration is required. CORBA did not achieve similar success in interenterprise integration, where heterogeneous platforms are even more prevalent. A later approach to cross-platform integration was Java, now usually used in conjunction with CORBA in enterprise solutions. Again, interenterprise integration remains beyond reach for technical reasons. The latest attempt at global integration is Web services based on XML (extensible markup language) and other Web standards (see section 7.3.7). With Web services emerging as the most likely universal spanning layer, competition in the layer immediately below heats up: Microsoft's .NET Common Language Runtime and its support for Web services compete against the Java virtual machine and its emerging support for Web services.

Historically, the approach was very different in the telecommunications industry. This arguably resulted in less change and innovation (but still a remarkable amount) but in a more reliable and stable infrastructure.

Example Until about two decades ago, each country had a monopoly national telephone service provider (often combined with the post office). In the United States this was the Bell System, with its own research, equipment, software development, and manufacturing. Suppliers and providers coordinated through standardization bodies in Geneva, predominantly the International Telecommunication Union (ITU), formerly called the Comité Consultatif International Télégraphique et Téléphonique (CCITT). Through these processes, the national networks and their interconnection were carefully planned top-down, and changes (such as direct distance dialing) were carefully planned, staged, and coordinated. This resulted in greater reliability and stability but also fewer competitive options and less diversity of choice.

Since the networked computing infrastructure has not followed a top-down process, beyond the core idea of layering there is no overall architectural vision that guides the industry. Rather than pointing to a guiding architecture, we must resort to an analysis of the current state of the industry. An attempt at this analysis is shown in figure 7.5 (Messerschmitt 1999b). It illustrates three stovepipes of lower layers, one specific to each technology (processing, storage, and connectivity). Distributed applications (as well as nondistributed applications that combine processing and mass storage) want a homogeneous infrastructure that combines these three technologies in different ways, and thus the top layers are common to all three (see table 7.1).

Table 7.1: Description of Layers Shown in Figure 7.5

Applications. A diversity of applications provides direct and specific functionality to users.

Segmented application services. Captures functionality useful to a narrower group of applications, so that those functions need not be reimplemented for each application. This layer has horizontal heterogeneity because each value-added infrastructure element is not intended to serve all applications. Examples: message-oriented middleware emphasizes work flow applications; information brokering serves as an intermediary between applications or users and a variety of information sources.

Integrated services layer. Provides capabilities that integrate the functions of processing, storage, and connectivity for the benefit of applications. Example: directory services use stored information to capture and identify the location of various entities, essential to virtually all distributed applications.

Generic services layer. Provides services that integrate processing, storage, and connectivity in different ways. Examples: the reliable and ordered delivery of data (connectivity); the structured storage and retrieval of data (storage); and the execution of a program in an environment including a user interface (processing and display).

Common representations. Provides abstract representations for information in different media (like processing instructions, numerical data, text, pictures, audio, and video) for purposes of processing, storage, and communication. These representations are deliberately separated from specific technologies and can be implemented on a variety of underlying technologies. Examples: a virtual machine representing an abstract processing engine (even across different microprocessors and operating systems); a relational table representing the structure of stored data (even across different platforms); and a stream of bytes (eight-bit data) delivered reliably and in order (even across different networking technologies).

Processing, storage, and connectivity. Provide the core underlying technology-dependent services. Examples: microprocessors, disk drives, and local-area networking.

Figure 7.5: A layered architecture for distributed applications and the supporting infrastructure.

The essential idea behind figure 7.5 is illustrated in figure 7.6. The intermediate layers provide a common set of services and information representations widely used by applications. The goal is to allow a diversity of technologies to coexist with a diversity of applications without imposing the resulting chaos on applications—applications and technologies can evolve independently without much effect on each other. This is accommodated by reimplementing the common representation and services layers for each distinct technology.

Figure 7.6: Layering provides separation of a diversity of technologies from a diversity of applications.

Of particular importance is the spanning layer. Assuming it is not bypassed—all layers above make use of its services but do not interact directly with layers below—a well-designed spanning layer can eliminate the dependence of the layers above on the layers below, allowing each to evolve independently. Successfully establishing a spanning layer creates a large market for solutions (application and infrastructure) that build upon it, both above and below. The spanning layer brings to bear the positive feedback of network effects without stifling the technical or competitive diversity of the layers below. It illustrates the desirability of separating not only application from infrastructure but also infrastructure from infrastructure.

Example The internet protocol can be viewed as a spanning layer, although it is limited to the connectivity stovepipe. As illustrated by the hourglass of figure 4.6, the IP layer does effectively separate applications from underlying networking technologies and has become virtually ubiquitous. Suppliers creating new communication and networking technologies assume they must support an IP layer above, and application suppliers assume they can rely on IP layers below for connectivity. Applications need not be redesigned or reconfigured when a different networking technology (e.g., Ethernet local-area network, wireless local-area network, fiber-optic wide-area network, wireless wide-area network, or satellite network) is substituted. The existence of IP also creates a ready and large market for middleware products building on internet protocols.

There are, however, limitations to layering. Mentioned earlier is the added overhead necessary to implement any strong modularity, including layering. In addition, intermediate layers can hide functionality but not performance characteristics of the underlying technology from applications.

Example When the user substitutes a slow network access link for a faster one, the delay in packet delivery due to transmission time will be increased. Nothing in the intermediate layers can reverse this.

The preferred approach today to dealing with performance variations is to make applications robust and adaptive to the actual performance characteristics.[4] Applications should be able to take advantage of higher-performance infrastructure and offer the best quality they can subject to infrastructure limitations.

Example A Web browser-server combination will display requested pages with low delay when there is ample processing power and communication bandwidth. When the bandwidth is much lower (say a voiceband data modem), the browser and server should adjust by trading off functionality and resolution for added delay in a perceptually pleasing way. For example, the browser may stop displaying high-resolution graphics, or ask the server to send those graphics at a lower resolution, because the resulting diminution in delay more than compensates perceptually for lost resolution.
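
A minimal sketch of such adaptation, with invented thresholds and a simulated fetch standing in for a real network request:

```python
# Sketch of the adaptive behavior described above: a client measures the
# bandwidth it is actually getting and trades image resolution for delay.
# Thresholds and resolutions are invented for illustration.
import time

def measure_bandwidth(fetch, probe_url: str) -> float:
    """Estimate throughput (bytes/sec) by timing a small probe fetch."""
    start = time.monotonic()
    data = fetch(probe_url)
    elapsed = max(time.monotonic() - start, 1e-6)
    return len(data) / elapsed

def choose_image_quality(bytes_per_sec: float) -> str:
    if bytes_per_sec > 1_000_000:
        return "high-resolution"
    if bytes_per_sec > 10_000:
        return "low-resolution"
    return "text placeholders only"     # e.g., voiceband-modem regime

def slow_fetch(url: str) -> bytes:
    """Simulated fetch over a slow link (stands in for a real request)."""
    time.sleep(0.1)                     # pretend network latency
    return b"x" * 4000                  # 4 KB payload

rate = measure_bandwidth(slow_fetch, "probe.bin")
print(f"{rate:.0f} B/s -> request {choose_image_quality(rate)} graphics")
```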

Many current industry standardization and commercialization efforts support the layered model (see figure 7.7 for examples). For each standard illustrated, there are competing standards vying for adoption. At the common representation layer, the Java virtual machine, the relational table, and the Internet's TCP are widely adopted. At the generic services layer are shown three standards that support object-oriented programming (OOP), a standard programming technique that emphasizes and supports modularity (see section 4.3). Programs constructed according to this model consist of interacting modules called objects, and the generic services layer can support execution, storage, and communication among objects. Java supports their execution, the object-relational database management system (ORDBMS) supports the storage of objects, and IIOP allows objects to interact over the network in much the same way as they would interact within a single host.

Figure 7.7: Examples of industry standards fitting the layered model of figure 7.5.

At the integrated services layer, CORBA attempts to identify and standardize a set of common services that integrate processing and connectivity (by incorporating Java mobile code capabilities) and processing and storage (by providing for the storage of objects). Examples include the creation or storage of objects on demand, and directories that discover and locate objects and services on the network to make distributed applications easier to develop. The Web was mentioned earlier as an application that "grew up" to become an infrastructure supporting applications that access and update information over the network. On the other hand, the Web does not support all applications (e.g., those not based on a client-server architecture). Thus, the Web-as-infrastructure falls at the segmented application services layer.

7.1.4 Core Competencies

Earlier, the historical approaches of the telecommunications and computer industries were contrasted. This contrast raises an important issue for industrial organization: Who is responsible for provisioning and operation? As described in section 6.2, there are three primary options: an independent service provider (the historical telecommunications industry model), the application or infrastructure supplier (rare), or the user (the historical computer industry model). Of course there are other options, such as splitting responsibility for provisioning and operation, application and infrastructure, or different parts of the infrastructure.

The increasing role of application service providers in the software industry, and the trend in the telecommunications industry to focus on the provisioning and operation of infrastructure rather than applications (particularly in the Internet segment of their market), suggest that radical change in industry structure may be occurring. This change can be traced to at least two undercurrents. One is the ubiquity and performance of the Internet, which opens up the option of shifting operations to a service provider while making application functionality available over the wide-area network. From the user's perspective it makes no difference where the operations reside, except for important factors like performance, availability, and customer service. A second undercurrent leading to organizational change is the growing specialization and diversity of applications, and the resulting importance of application suppliers' focusing their efforts on satisfying user needs and requirements. Infrastructure suppliers and service providers have not proven as effective at this as more specialized application suppliers; this suggests an organizational separation of applications and infrastructure.

These observations provide hints as to how the industrial organization may evolve in the future. As with good software architecture (see section 4.3), market forces encourage an industrial organization with weak coupling of functions and expertise across different companies and strong cohesion within companies. These properties can be interpreted in different ways, such as transaction costs and coupling of expertise. In the long term, market forces seem to reward firms that specialize in certain responsibilities but share core competencies, because many managerial approaches, such as compensation and organizational structures and processes, are tied to these competencies. Of course, there are many other considerations of importance, such as the desire of customers for a complete product portfolio or turnkey solution, or opportunities to gain competitive advantage through synergies among complementary responsibilities.

This suggests a fresh look at the software value chain (see section 6.2), not in terms of responsibilities but in terms of core competencies. Six core competencies can be identified (see table 7.2).

Table 7.2: Core Competencies Relating to the Software Industry

Business function. An end-user organization should understand its own business functions, which in most cases are not directly related to software, technology, or software-based services.

User needs and requirements. Industry consultants should understand end-user needs, which increasingly requires specialized knowledge of an industry segment or specific organizational needs, in order to help organizations revamp business models, organization, and processes to take maximum advantage of software technology.

Application needs. Infrastructure suppliers should understand needs common to a wide variety of applications and application developers, and also the market forces that strongly influence the success or failure of new infrastructure solutions.

Software development. Both application and infrastructure software suppliers need software development and project management skills to meet technical, organizational, and management challenges. Application suppliers must also be competent at human-centered considerations such as user interfaces.

Provisioning. Constrained by the built-in flexibility and configurability of the application, the system integrator and business consultant must understand unique organizational needs and be skilled at choosing and integrating software from different firms.

Operation. Operators must be competent at the technical aspects of achieving availability and security, administrative functions like trust and access, and customer service functions such as monitoring, billing, and help desk.

To the extent that industrial organization is presaged by natural groupings of core competencies, the independent service provider (as embodied in the application service provider model) seems a likely outcome, because the core competencies resident there are quite distinct from those of the other roles. Telecommunications service providers should not try to be exclusive application suppliers; rather, they should focus on making their infrastructure services suitable for a wide range of applications and encourage a diversity of applications from many sources. They may exploit their core competency in operations by extending it from today's narrow range of applications (telephony, video distribution) to a wider range acquired from independent application suppliers, increasing the diversity of application service providers' offerings.

The increasing diversity and specialization of applications, and the need to consider the organizational and process elements of information technology and the interface between organization and technology, have profound implications for application software suppliers. Because these core competencies differ sharply from software development, application software suppliers, such as suppliers of enterprise and commerce software, should look increasingly to industry consultants for assistance in needs and requirements definition.

For end-user organizations, focusing on core competencies would imply outsourcing application development to application software suppliers, and provisioning and operation to system integrators, consultants, and service providers. In fact, this is becoming prevalent.

7.1.5 Horizontal Heterogeneity

From a technical perspective, an infrastructure layer that is horizontally homogeneous is advantageous (see section 7.1.3). This is an important property of a spanning layer because it creates a large market for layers above and below that can evolve independently. However, as shown in figure 7.5, it is entirely appropriate for horizontal heterogeneity to creep into the layers near the top and the bottom. Near the top, this segments the market for infrastructure to specialize in narrower classes of applications. With platforms supporting applications, one size does not fit all.

Example Distributed applications benefit from an infrastructure that hides the underlying host network structure, but this is unnecessary overhead for applications executing on a single host. Work flow applications benefit from a message and queuing infrastructure, and online transaction processing applications benefit from a database management system.

Near the bottom, it is desirable to support a diversity of technologies. This diversity arises out of, as well as encourages, technological innovation.

Example Innovation in microprocessor architecture has accompanied Moore's law as an important enabler for improving performance. Sometimes these architectural innovations can be accomplished without changing the instruction set, but innovations in instruction sets, which clearly contribute to horizontal heterogeneity, enable greater advances. An example is the idea of a reduced instruction set computer (RISC), which traded simplicity in the instruction set for higher instruction execution rates. Numerous technological innovations have spawned heterogeneity in storage (e.g., recordable optical disks) and communication (e.g., wireless) as well. As underlying technology changes, so does the implementation of the lower layer infrastructure.

History and legacy technologies are another reason for heterogeneity.

Example Many historical operating system platforms remain, and several remain vibrant, evolving, and attracting new applications. The Internet brought with it distributed applications and network effects that place a premium on interoperability across platforms, e.g., a Web server running on a UNIX server and a Web browser running on a Macintosh desktop platform. The Internet has also brought with it a need for composability of different applications running on different hosts. For example, MIME[5] is a standard that allows a wide variety of applications on different platforms to agree on content types and the underlying data formats and encodings (it originated to support e-mail attachments but is now used more broadly).
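
As a small illustration of such content typing, the sketch below uses Python's standard mimetypes module; the file names are invented.

```python
# Sketch of how MIME-style content typing lets applications on different
# platforms agree on what a payload is, independent of its origin.
import mimetypes

for filename in ["report.pdf", "photo.jpeg", "page.html", "notes.txt"]:
    content_type, encoding = mimetypes.guess_type(filename)
    print(f"{filename}: {content_type}")

# A receiving application dispatches on the declared type (e.g.,
# "image/jpeg") rather than on platform-specific file conventions.
```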

It was argued earlier that unconditional software portability is neither a practical nor a desirable goal (see section 4.4.3), and for the same reason evolving toward a single platform is not desirable. There is no predetermined definition of which functionalities belong in applications and which in infrastructure; rather, capabilities that many applications find valuable (or that keep appearing in multiple applications) work their way into the underlying platform. The commonality inherent in ever-expanding platforms enables greater diversity of applications and especially supports their interoperability and composability. That this can occur on multiple platforms increases the diversity of ideas that can be explored and offers application developers some choice among differentiated platforms. At the same time, industry must deal with the reality of heterogeneous platforms, especially for distributed applications that would otherwise become Balkanized, confined to one platform and a subset of users, with the resulting network effect diminution of value. Fortunately, the owners of individual platforms—especially those with a smaller market share, and especially with the rising popularity of distributed applications—have a strong incentive to ensure that their platforms can participate in distributed applications with other platforms. But specifically what can they do about it? Adding a layer in the hope that it becomes widespread enough to be considered a spanning layer is one approach.

Example Sun's Java effort, including the Java programming language, runtime environments (including virtual machines), libraries, and interfacing standards, created such a candidate spanning layer, abstracting away from the underlying platforms, both software and hardware. Other examples include the CORBA standards and Microsoft's COM and .NET.

However, establishing a totally new layer that achieves high enough market penetration to attract a significant number of applications is difficult. Alternatively, new layers can arise out of an existing application or infrastructure.

Example The Web migration from application to infrastructure resulted from two factors. First, it had become ubiquitous enough to attract application developers, who were not ceding much market potential by adopting it as a foundation. Second, the Web provided open APIs and documented internal interfaces that made it relatively straightforward to exploit as an infrastructure. This illustrates that an application that is more open or less proprietary is more likely to be adopted as the basis for other applications, that is, as infrastructure.

Another response to heterogeneous platforms is to design an application to be relatively easy to port to different platforms. This places special requirements on interfaces between the application and its underlying platform; if they are designed in a relatively platform-independent and open way, the porting becomes easier (but exploiting the strengths of specific platforms becomes harder).
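As a schematic sketch of this idea (not taken from the text, and with invented names), an application can confine all platform dependence behind one narrow interface, so that porting reduces to supplying a single new implementation rather than touching the application logic:

```python
# Schematic sketch (invented names): the application codes against a
# narrow, platform-neutral interface; porting means writing one new
# implementation of that interface.
import sys
from abc import ABC, abstractmethod

class PrintService(ABC):
    """Platform-neutral interface the application is written against."""
    @abstractmethod
    def submit(self, document: bytes) -> None: ...

class WindowsPrintService(PrintService):
    def submit(self, document: bytes) -> None:
        print("spooling via the Windows print subsystem")

class UnixPrintService(PrintService):
    def submit(self, document: bytes) -> None:
        print("spooling via lpr")

def make_print_service() -> PrintService:
    # Platform dependence is confined to this single factory function.
    return WindowsPrintService() if sys.platform == "win32" else UnixPrintService()

# Application code is identical on every platform.
make_print_service().submit(b"%PDF-1.4 ...")
```

The trade-off noted above is visible here: the narrow interface eases porting, but any capability unique to one platform's print subsystem is inaccessible through it.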

Example The Web is a good illustration of this. As a distributed application, the Web had to deal with two primary challenges. First, if the Web was to be more than simply a vehicle for displaying static stored pages (its original purpose), it had to allow other applications to display information via the Web browser and server. For example, dynamic Web pages based on volatile information stored in a database management system required an intermediary program (encompassing what is usually called the application logic) that requested the appropriate information from the database and displayed it in the proper format in the browser. For this purpose, an open and relatively platform-independent API called the common gateway interface (CGI) was standardized. The second challenge was interoperability between browser and server running on different platforms. Fortunately, this problem was already partially solved by IP, which provided a communication spanning layer allowing one host to communicate data to another using a standard packet representation, and by TCP, which provided reliable and ordered transport of a stream of bytes. The Web simply had to standardize an application-layer transfer protocol (HTTP). These open standards made the Web relatively platform-independent, although different versions of the browser and server still had to be developed for the different platforms. Later, these open interfaces were an important factor in the evolution of the Web into infrastructure. For example, other applications could make use of HTTP to compose the entire browser or server into an application, or could even use HTTP independently (e.g., Groove, an application that uses HTTP as a foundation to share files and other information among collaborating users).[6] The latest generation of standardization focuses on Web services, and one of the core protocols in this space (SOAP, the Simple Object Access Protocol) usually operates on top of HTTP (see section 7.3.7).
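For concreteness, here is a minimal sketch of a CGI program in Python; the query parameter and page content are invented for illustration. Under CGI, the Web server passes request details through environment variables, and the program writes HTTP headers plus a body to standard output:

```python
#!/usr/bin/env python3
# Minimal sketch of a CGI program: the server supplies the request
# through environment variables; the program emits headers and a body
# on standard output. The "name" parameter is invented.
import os
from html import escape
from urllib.parse import parse_qs

query = parse_qs(os.environ.get("QUERY_STRING", ""))
name = query.get("name", ["world"])[0]

# In a real deployment this value would come from the database query
# performed by the application logic described above.
print("Content-Type: text/html")
print()  # blank line separates headers from body
print(f"<html><body><h1>Hello, {escape(name)}!</h1></body></html>")
```

Nothing in this program depends on the server's platform; the environment-variable and standard-output conventions are what make CGI relatively platform-independent.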

Another approach is to embrace horizontal heterogeneity but arrange the infrastructure so that interoperability and even composability are achieved across different platforms by appropriate industry standards or by platform-specific translators.

Example Different platforms for both servers and desktop computers tend to use different protocols and representations for file storage and print services. Users, on the other hand, would like uniform access to their files across all platforms (including Mac OS, the different forms of UNIX, and Windows), for example, to access files or print services on a UNIX server from a desktop (Linux or Mac OS or Windows) platform. Various open source solutions (e.g., Samba) and commercial solutions (e.g., NetWare, NFS, AppleTalk, Banyan VINES, and DECnet) provide this capability. See section 7.3 for additional examples in the context of software components and Web services.

Instead of tackling protocols (which specify how information gets from one platform to another), other industry standards focus on providing a common, flexible representation for information (without addressing how that information gets from one platform or application to another). The two approaches are complementary: information must be transferred, and it must be understandable once transferred.

Example XML is a flexible and extensible language for describing documents. It allows standardized tags to be defined that identify specific types of information. A particular industry can standardize these tags for its own context, allowing different firms to exchange business documents and then automatically extract the desired information from them. For example, an industry might define an XML-based standard for describing purchase orders, and then each company can implement translators to and from this common representation within its (otherwise incompatible) internal systems. Unlike HTML, XML separates the formatting of a document from its content. Thus, workers can display XML purchase orders in presentations specific to their internal needs or practices.
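As a small sketch of this (in Python, with invented tag names rather than any real industry standard), a receiving firm might parse the shared purchase order representation and map the extracted fields onto its own internal systems:

```python
# Sketch of a shared XML purchase order representation. The tag names
# are invented for illustration; each firm would translate between
# this common form and its internal order format.
import xml.etree.ElementTree as ET

PURCHASE_ORDER = """\
<purchaseOrder id="PO-1138">
  <buyer>Example Motors</buyer>
  <item sku="AX-200" quantity="500"/>
  <item sku="BR-17" quantity="1200"/>
</purchaseOrder>"""

root = ET.fromstring(PURCHASE_ORDER)
print(root.get("id"), root.findtext("buyer"))
for item in root.findall("item"):
    # Extract the fields the receiving firm needs for its own
    # (otherwise incompatible) internal systems.
    print(item.get("sku"), int(item.get("quantity")))
```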

A fourth approach to dealing with heterogeneous platforms is to add a service provider who either acts as an intermediary or centralizes the application.

Example Some companies have created a common intermediary exchange as a place to procure products and services from a set of suppliers. Sometimes this is done on an industry basis, as in the automotive industry (Covisint 2000). In the industry-wide case, the exchange does not directly overcome platform and application incompatibilities among the participating organizations, but it does make the challenges considerably more manageable. To see this, suppose there are n distinct firms involved in the exchange. The intermediary has to deal with these n firms separately. Without the intermediary, each firm would have to work out similar interoperability issues with each of the n − 1 other firms, so in total there would be n · (n − 1) such relationships to manage, a much greater total burden. Interestingly, intermediaries tend to compete as well, leading to k intermediaries and thus k · n relationships. For k = 1, the single intermediary can gain substantial market control, and the desire of the coordinated firms to retain agility tends to encourage formation of a competing intermediary. As k approaches n, the benefit of having intermediaries disappears. Market forces thus tend to keep the number of competing intermediaries low.
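A quick back-of-the-envelope computation with illustrative numbers shows how sharply an intermediary reduces the coordination burden:

```python
# Back-of-the-envelope counts for the reasoning above; the values of
# n (firms) and k (competing intermediaries) are illustrative.
n, k = 40, 2

pairwise = n * (n - 1)  # every firm maintains a relationship with every other
intermediated = k * n   # every firm maintains one relationship per intermediary

print(pairwise, intermediated)  # 1560 versus 80
```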

Distinct types of infrastructure can be identified in the layered architecture of figure 7.5. In classifying a particular type of infrastructure, it is important at minimum to ask two basic questions. First, does this infrastructure support a broad range of applications (in the extreme, all applications), or is it specialized to one segment of the application market? Second, is the infrastructure technology-specific, or does it not matter what underlying technologies are used? Neither of these distinctions is black-and-white; there are many nuances. The answers can also change over time because new infrastructure typically must start with only one or a few applications and then grow to more universal acceptance later. Table 7.3 gives examples of infrastructure categorized by application and technology dependence. Each category in the table suggests a distinct business model. As a result, infrastructure suppliers tend to specialize in one (with a couple of notable exceptions): infrastructure suppliers at the lower layers focus on technology-dependent infrastructure, and those at the higher layers on application-dependent infrastructure. In the extreme of application dependence, some infrastructure may support a specific application but offer open APIs in the hope that other applications come later. In the extreme of technology dependence, an infrastructure may be embedded software bundled with hardware and sold as equipment or an appliance. In this case, the software is viewed as a cost of development, with no expectation of selling it independently.

Table 7.3: Examples of Infrastructure Defined by Application and Technology Dependence

Not technology-dependent, not application-dependent: The Internet TCP transport layer is widely used by applications desiring reliable, ordered delivery of data. The specific networking technologies present are hidden from TCP and the layers above by the Internet IP spanning layer.

Not technology-dependent, particular to one application market segment: Message-oriented middleware supports work flow applications, and the Web supports information access and presentation. Each emphasizes distributed applications across heterogeneous platforms.

Particular to one technology platform, not application-dependent: Operating systems are specific to a computer platform, although some (like Linux, Solaris, Windows NT, and Windows CE) have been ported to several. Each is designed to support a wide variety of applications on that platform.

Particular to one technology platform, particular to one application market segment: Information appliances typically support a single application and build that application on a single infrastructure technology platform.

Application- and technology-independent infrastructure clearly has the greatest market potential as measured by adoptions or unit sales. However, this type offers little opportunity for differentiation in functions or features: to actually achieve universality, it must be highly standardized and hence commoditized, and if it is well differentiated from other options, it is likely struggling for acceptance. Thus, this type of infrastructure probably represents the least opportunity for profit but is nevertheless quite important and beneficial. Its universal appeal and wide use lend themselves to community-based development, and the current trend (although not an exclusive one) is therefore to use community-based methodologies to create and maintain this type of software (as illustrated by the Samba example earlier; also see section 4.2.4). This is leading to new types of software licensing that mix the benefits of community-based development, such as open source, with commercial considerations, such as deriving revenue and profit (see chapter 8).

Example Sun Microsystems has been a leading proponent of community-based development of infrastructure, examples being its Java middleware layer for portable execution (see section 4.4.3) and Jini, a platform for plug-and-play interoperability among information appliances (Sun Microsystems 1999b). Other firms, such as Apple Computer (Mac OS X) and Netscape (Mozilla browser), have followed this approach. Several firms (e.g., IBM and Hewlett-Packard) have chosen open source Linux as an operating system platform.

At the other extreme, application- and technology-dependent infrastructure is characteristic of the stovepipe architecture (see section 7.1.3), and for the reasons discussed earlier is disappearing because of shrinking market opportunity. Most commercial infrastructure differentiates itself in the application or technology space, but not both.

7.1.6 Competition and Architecture

This section has enumerated some global architectural alternatives and their relation to industry structure. Architecture, because it defines the boundaries of competition and complementarity, is an important strategic issue for software suppliers (see section 6.1), and successfully defining and promulgating an architectural model is an important element of long-term success (Ferguson and Morris 1994). In contrast, suppliers who provide point solutions or who must plug solutions into an architecture defined elsewhere lose some control over their own destiny and have less strategic maneuvering room. Doubtless because of its significance, architectural control has also been a source of industry controversy and even government antitrust complaints (see chapter 8).

Within a given architecture, competition exists at the module level, and where a supply chain is created by hierarchical decomposition, at the submodule level as well. On the other hand, interacting modules are complementary. Suppliers naturally try to avoid direct competition in modules, particularly because high creation costs and low replication and distribution costs for software make competitive pricing of substitutes problematic (see chapter 9). Generally, open standardized interfaces (see section 7.2.3) make head-on competition more difficult to avoid, and for this reason industry standards are increasingly defined with a goal of enabling competitive suppliers to differentiate themselves with custom features or extensions while maintaining interoperability. In contrast, if suppliers choose not to offer complementary modules themselves, they encourage competitive options for those modules so that customers have a wider range of options with attractive prices and features and so that the overall system pricing is more attractive.

The architectural options and evolution discussed here have considerable implications for competition. Applications and infrastructure are complementary, and thus suppliers of each generally encourage competitive options in the other. While the expansion of infrastructure capabilities is a way to improve economic efficiency through sharing (see chapter 9) and to improve performance and quality, it also changes the competitive landscape by shrinking some application markets or at least introducing new competition.

No architectural issue has had more effect than the evolution from the stovepipe toward the layered architecture. This shifts the landscape from horizontal market segmentation, with an inclination toward vertical integration within each segment, to a vertical segmentation of functions in which multiple complementary suppliers must cooperate to realize a full system solution.

The spanning layer is a key element of a layered architecture. Because it allows greater independence of evolution in the layers below and above, it is another form of common intermediary that makes practical a larger diversity of interoperable options below and above. Even if not initially defined with open interfaces, the ubiquity of a spanning layer implies that its interfaces below and above become de facto standards. In order for the entire infrastructure to evolve, a spanning layer and its interfaces must also evolve over time, insofar as possible through extensions rather than changes. While the spanning layer offers compelling advantages, commercial control of such a layer raises complaints of undue influence over other layers and overall system evolution. One response is government-imposed limits on the business practices of the owner of a spanning layer (see chapter 8). Another is to abandon the spanning layer in a way that preserves most advantages, such as horizontal heterogeneity within a layer, while maintaining interoperability or portability (see sections 7.1.5 and 7.3). Another is to promulgate a spanning layer in the public domain (through community-based development or similar methodologies, as in the Samba example).

[1]Within the limit of supporting a very specific application, this strategy is more accurately described as relying less on infrastructure and instead building more capability into the application. The telecommunications industry definitely follows an infrastructure strategy at the level of right-of-way and facilities, which are shared over voice, data, and video networks.

[2]In contrast, the telephone network uses a different transport mechanism called circuit switching that does guarantee delay but also fixes the bit rate and does not achieve statistical multiplexing (see chapter 9).

[3]The telecommunications industry developed new networking technologies such as asynchronous transfer mode and frame relay, whereas the Internet arose from the interconnection and interoperability of local-area networking technologies such as Ethernet. The Internet eventually dominated because of the positive feedback of direct network effects and because of decades of government investment in experimentation and refinement.

[4]When high-performance infrastructure exceeding the needs of most applications becomes widely deployed, this issue largely goes away. An alternative approach that may become more prevalent in the future is for applications to request and be allocated resources that guarantee certain performance characteristics.

[5]MIME stands for multipurpose Internet mail extensions (Internet Engineering Task Force draft standard/best current practice RFCs 2045-2049).

[6]The advantage of HTTP in this case is that it penetrates corporate firewalls because so many users want to browse the Web. Groove thus shares files among users by carrying them over HTTP.



