To the extent that it supports only a very specific application, this strategy is more accurately described as relying less on infrastructure and instead building more capability into the application. The telecommunications industry clearly follows an infrastructure strategy at the level of right-of-way and facilities, which are shared across voice, data, and video networks.
In contrast, the telephone network uses a different transport mechanism called circuit switching, which guarantees a fixed delay but also fixes the bit rate and forgoes statistical multiplexing (see chapter 9).
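The trade-off between the two transport mechanisms can be made concrete with a small simulation. The sketch below (illustrative only; the source rates, activity factor, and percentile target are assumed for the example) compares the link capacity a circuit-switched design must dedicate to bursty sources against the smaller capacity that statistical multiplexing needs to cover nearly all of the aggregate demand.

```python
import random

random.seed(42)

N_SOURCES = 10    # bursty sources sharing one link (assumed)
PEAK_RATE = 1.0   # Mb/s while a source is transmitting (assumed)
ACTIVITY = 0.2    # each source is active 20% of the time (assumed)
SLOTS = 10_000    # simulated time slots

# Circuit switching: each source holds a dedicated circuit at its peak
# rate for the whole session, whether or not it is transmitting.
circuit_capacity = N_SOURCES * PEAK_RATE

# Statistical multiplexing: capacity need only cover the aggregate
# demand observed in (nearly) every time slot.
demands = []
for _ in range(SLOTS):
    active = sum(random.random() < ACTIVITY for _ in range(N_SOURCES))
    demands.append(active * PEAK_RATE)
demands.sort()
stat_capacity = demands[int(0.999 * SLOTS)]  # 99.9th-percentile demand

print(f"circuit switching needs  {circuit_capacity:.1f} Mb/s")
print(f"statistical muxing needs {stat_capacity:.1f} Mb/s")
```

The simulation shows the multiplexing gain: the shared link can be provisioned well below the sum of the peak rates, at the cost of occasional queueing delay, which is exactly the delay guarantee circuit switching preserves by wasting idle capacity.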
The telecommunications industry developed new networking technologies such as asynchronous transfer mode and frame relay, whereas the Internet arose from the interconnection and interoperability of local-area networking technologies such as Ethernet. The Internet eventually dominated because of the positive feedback of direct network effects and because of decades of government investment in experimentation and refinement.
When high-performance infrastructure exceeding the needs of most applications becomes widely deployed, this issue largely goes away. An alternative approach that may become more prevalent in the future is for applications to request and be allocated resources that guarantee certain performance characteristics.
MIME stands for multipurpose Internet mail extensions (Internet Engineering Task Force draft standard/best current practice RFCs 2045-2049).
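The structure these RFCs define can be seen with Python's standard email package, which implements MIME directly. The sketch below (addresses and payload are hypothetical) builds a multipart message carrying both text and a binary attachment, the latter base64-encoded so it survives 7-bit mail transport.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

# A multipart MIME message: plain text plus a binary attachment.
msg = MIMEMultipart()
msg["Subject"] = "Quarterly report"
msg["From"] = "alice@example.com"   # hypothetical addresses
msg["To"] = "bob@example.com"
msg.attach(MIMEText("Report attached.", "plain"))

# Binary payloads are base64-encoded by default (per RFC 2045) so the
# message remains 7-bit-safe for mail transport.
payload = MIMEApplication(b"\x00\x01binary data", _subtype="octet-stream")
payload.add_header("Content-Disposition", "attachment",
                   filename="report.bin")
msg.attach(payload)

raw = msg.as_string()  # the serialized, transport-ready message
```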
The advantage of HTTP in this case is that it penetrates corporate firewalls because so many users want to browse the Web. Groove thus shares files among users by translating them into HTTP.
See <http://www.mp3licensing.com/> for a description of license terms and conditions.
Preserving proper system functioning while a component is replaced or upgraded is difficult to achieve, and this goal significantly restricts the opportunities for evolving both the system and the component. There are several ways to address this. One is to constrain components to preserve existing functionality even as they add new capabilities. A more flexible approach is to allow different versions or variations of a component to be installed side by side, so that existing interactions can continue to target the old version or variation even as new system capabilities exploit the capabilities of the new one.
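The side-by-side approach can be sketched in a few lines. In this illustration (the registry, component names, and interfaces are all hypothetical), two versions of a component are installed concurrently; old clients bind to version 1 while new capabilities bind to version 2.

```python
# A minimal registry allowing two versions of a component to coexist.

class SpellCheckerV1:
    def check(self, word):
        return word.lower() in {"cat", "dog"}

class SpellCheckerV2(SpellCheckerV1):
    # Preserves the v1 interface while adding a new capability.
    def suggest(self, prefix):
        return [w for w in ("cat", "dog") if w.startswith(prefix.lower())]

registry = {}

def install(name, version, component):
    registry[(name, version)] = component

def lookup(name, version):
    return registry[(name, version)]

install("spellchecker", 1, SpellCheckerV1())
install("spellchecker", 2, SpellCheckerV2())

# Existing interactions keep targeting version 1...
assert lookup("spellchecker", 1).check("cat")
# ...while new system capabilities exploit version 2.
assert lookup("spellchecker", 2).suggest("c") == ["cat"]
```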
See Ueda (2001) for more discussion of these methodologies of synthesis and development and how they are related. Similar ideas arise in the natural sciences, although in the context of analysis and modeling rather than synthesis. The top-down approach to modeling is reductionism (the emphasis is on breaking the system into smaller pieces and understanding them individually), and the bottom-up approach is emergence (the emphasis is on explaining complex behaviors as emerging from composition of elementary entities and interactions).
While superficially similar to the module integration that a supplier would traditionally do, it is substantively different in that the component's implementation cannot be modified, except perhaps by its original supplier in response to problems or limitations encountered during assembly by its customers. Rather, the assembler must restrict itself to choosing among built-in configuration options.
As in any supplier-customer relationship in software, defects discovered during component assembly may be repaired in maintenance releases, but new needs or requirements will have to wait for a new version (see section 5.1.2). Problems arising during the composition of two components obtained independently are an interesting intermediate case. Are these defects, or mismatches with evolving requirements?
Technically, it is essential to carefully distinguish those modules that a programmer conceived (embodied in source code) from those created dynamically at execution time (embodied as executing native code). The former are called classes and the latter objects. Each class must capture various configuration options as well as mechanisms to dynamically create objects. This distinction is equally relevant to components.
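The distinction can be made concrete in object-oriented terms. In the sketch below (the class and its configuration option are hypothetical), one class conceived in source code yields multiple objects created dynamically at execution time, each carrying its own configuration.

```python
# The class is the programmer-conceived module, embodied in source code,
# and it captures a configuration option (here, `level`).
class Logger:
    def __init__(self, level="info"):
        self.level = level
        self.lines = []

    def log(self, msg):
        self.lines.append(f"[{self.level}] {msg}")

# The objects are created dynamically at execution time: two independent
# instances of the one class, each configured differently.
debug_log = Logger(level="debug")
audit_log = Logger(level="audit")

debug_log.log("cache miss")
audit_log.log("user login")

assert debug_log.lines == ["[debug] cache miss"]
assert audit_log.lines == ["[audit] user login"]
```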
By requiring each level (except the top) to meet a duality criterion—that entities exist both in a composition at the higher level and in independent form—nine integrative levels constituting the human condition can be identified from fundamental particles through nation-states. This model displays an interesting parallel to the infrastructure layers in software (see section 7.1.3).
Although component frameworks superficially look like layers as described in section 7.2, the situation is more complex because component frameworks (unlike traditional layers) actively call components layered above them. Component frameworks are a recursive generalization of the idea of separating applications from infrastructure.
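This inversion, the framework calling "up" into the components plugged into it rather than waiting to be called like a traditional layer, can be sketched minimally as follows (all names are hypothetical, for illustration only).

```python
# A minimal component framework: components register with the framework,
# and the framework drives execution by calling the components.

class Framework:
    def __init__(self):
        self.components = []

    def register(self, component):
        self.components.append(component)

    def run(self, event):
        # The framework, not the component, initiates each interaction.
        return [c.handle(event) for c in self.components]

class EchoComponent:
    def handle(self, event):
        return f"echo: {event}"

class CountComponent:
    def __init__(self):
        self.count = 0

    def handle(self, event):
        self.count += 1
        return self.count

fw = Framework()
fw.register(EchoComponent())
fw.register(CountComponent())

assert fw.run("ping") == ["echo: ping", 1]
```

A traditional layer would instead expose services for the components above it to call; here the control flow is reversed, which is why the framework, not the components, defines the overall pattern of interaction.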
This inherently high cost plus the network communication overhead is one reason for the continued factoring of Web services into internal components, since components can be efficient at a much finer granularity.
These environments were initially called E-Speak (Hewlett-Packard), Web Services (IBM and Microsoft), and Dynamic Services (Oracle). The generic term Web services has now been adopted by all suppliers.