Chapter 4


  1. There are published statistics on the failure rate, but they are not cited here because they mask major gradations of failure (ranging from time and budget overruns to outright cancellation to deployment followed by abandonment) and wide variation in the causes of failure. "Failed" projects can also yield useful outcomes like experience or fragments of code later used in other projects.

  2. Extreme programming (Beck 1999) was probably the first widely acknowledged agile process. At its heart, extreme programming calls for a close feedback loop with the customer of the software being developed, development in rapid cycles that maintain a working system throughout, the hand-in-hand development of increments and corresponding tests, and work in pairs to minimize mistakes caused by oversight.

  3. This is essentially the same observation that underlies Metcalfe's law of network effects (see chapter 9).

  4. Where subsystem composition is guided by architecture, those system properties that were successfully considered by the architect are achieved by construction rather than observed as whatever happens to emerge from composition. For example, a security architecture may put reliable trust classifications in place that prevent critical subsystems from relying on arbitrary other subsystems. Otherwise, to continue the example, the security of the overall system is often only as strong as its weakest link.

  5. Fully effective encapsulation mandates that implementation details not be observable from the outside. While this is a desirable goal, it is unattainable in the extreme. Simply by running tests against a module, or in the course of debugging during a module integration phase, a programmer can observe results that allow inferring properties of the module's implementation not stated in the module's abstraction.
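
  To make this concrete, here is a small Python sketch (our own illustration, not from the chapter): the stated abstraction of a hypothetical unique function promises only the distinct elements of a sequence, yet a client running tests can observe an ordering property the abstraction never guaranteed.

      def unique(items):
          """Return the distinct elements of items (order left unspecified)."""
          seen = set()
          result = []
          for item in items:          # hidden detail: preserves first-seen order
              if item not in seen:
                  seen.add(item)
                  result.append(item)
          return result

      # Merely by running tests, a client observes that the current implementation
      # happens to preserve insertion order, a property the abstraction never stated,
      # and may come to depend on it; a later rewrite could silently change it.
      print(unique([3, 1, 3, 2, 1]))  # [3, 1, 2] today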

  6. Interfaces are the flip side (technically the dual) of an architect's global view of system properties. An interface determines the range of possible interactions between two modules interacting through that interface and thus narrows the viewpoint to strictly local properties. Architecture balances the views of local interaction and global properties by establishing module boundaries and regulating interaction across those boundaries through specified interfaces.

  7. Sometimes emergence is used to denote unexpected or unwelcome properties that arise from composition, especially in large-scale systems where very large numbers of modules are composed. Here we use the term to emphasize desired behaviors rather than unexpected or unwanted ones.

  8. Bits cannot be moved on their own. What is actually moved are photons or electrons that encode the values of bits.

  9. This is a simplification of a real facsimile machine, which will attempt to negotiate with the far-end facsimile machine and, failing that, will give up.

  10. There are infrastructure extensions to the Mac OS that allow it to run Windows programs, such as Connectix's Virtual PC for Mac or Lismore's Blue Label PowerEmulator. Assume this is not present, so that the portability issue is more interesting. Alternatively, assume it is not sufficient because it provides suboptimal integration of user interface behavior, look, and feel.

  11. People can and do write object code. This used to be much more common before Moore's law reduced the critical need for performance. However, object code (or a closely related representation called assembly language) is still written in performance-critical situations, such as in the kernel of an operating system or in signal processing.

  12. C has long held primacy for system programming tasks (such as operating systems). C++ is an extension of C that adds object-oriented programming, a methodology that supports modularity by decomposing a program into interacting modules called objects. Java was developed more recently, primarily to support mobile code. New languages arise all the time; for example, C# is a new language (based on C++) designed for the Microsoft .NET initiative.
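
  As a rough illustration of the object-oriented decomposition just mentioned (shown in Python for brevity rather than C++, Java, or C#; the class names are hypothetical), each object hides its own data and is used by other modules only through its interface.

      class Account:
          """One module: owns its balance and exposes only deposit/withdraw."""
          def __init__(self, balance=0):
              self._balance = balance          # internal state, not accessed directly

          def deposit(self, amount):
              self._balance += amount

          def withdraw(self, amount):
              if amount > self._balance:
                  raise ValueError("insufficient funds")
              self._balance -= amount

      class Bank:
          """Another module: interacts with Account only through its interface."""
          def __init__(self):
              self._accounts = {}

          def open(self, name):
              self._accounts[name] = Account()

          def transfer(self, src, dst, amount):
              self._accounts[src].withdraw(amount)
              self._accounts[dst].deposit(amount)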

  13. In some cases, source code as well as object code may be distributed (see section 4.2.4).

  14. Interpretation introduces runtime overhead that reduces performance, whereas the one-time cost of compilation before distribution is not a concern. Languages that are normally interpreted include built-in operations that perform complex tasks in a single step, allowing an interpreter to map each such operation to an efficient implementation. Languages designed to be compiled avoid such complex built-in operations and instead assume that they can be programmed from primitive built-in operations and from operations already programmed.
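
  For example, in Python (a normally interpreted language) a single built-in step such as sorted maps directly to one efficient routine, whereas the same task programmed only from primitive operations forces the interpreter to execute many small steps. The sketch below is our own illustration.

      data = [5, 2, 9, 1]

      # One built-in step; the interpreter dispatches it to an optimized routine.
      print(sorted(data))

      # The same result programmed only from primitive operations, as a language
      # designed for compilation would typically expect; interpreting this loop
      # step by step is far slower.
      def insertion_sort(items):
          result = list(items)
          for i in range(1, len(result)):
              j = i
              while j > 0 and result[j - 1] > result[j]:
                  result[j - 1], result[j] = result[j], result[j - 1]
                  j -= 1
          return result

      print(insertion_sort(data))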

  15. By monitoring performance, an online optimizer can dynamically optimize critical parts of the program. Based on usage profiling, it can recompile those parts using optimization techniques that would be prohibitively expensive in time and memory if applied to all the software. Because such a process can draw on system behavior actually observed at use time, interpreters combined with online optimizing compilation can exceed the performance achieved by traditional (ahead-of-time) compilation.
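
  The following is a heavily simplified sketch of that idea, not how a production just-in-time optimizer works: a hypothetical decorator collects a usage profile and, once a function proves hot, swaps in an optimized (here, merely memoized) variant, an optimization that would be wasteful if applied to every function up front.

      import functools

      def optimize_when_hot(threshold=1000):
          def wrap(fn):
              calls = 0
              optimized = None
              @functools.wraps(fn)
              def dispatch(*args):
                  nonlocal calls, optimized
                  if optimized is not None:
                      return optimized(*args)       # hot path: run the optimized variant
                  calls += 1                        # profile: count unoptimized calls
                  if calls >= threshold:
                      # "Recompile" the hot function: here we merely memoize it.
                      optimized = functools.lru_cache(maxsize=None)(fn)
                  return fn(*args)
              return dispatch
          return wrap

      @optimize_when_hot(threshold=1000)
      def distance_squared(x, y):
          return x * x + y * y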

  16. There is nothing special about intermediate object code: one machine's native code can be another machine's intermediate object code. For instance, Digital developed a Pentium virtual machine called FX!32 (White Book 2002) that ran on Alpha processors. FX!32 used a combination of interpretation, just-in-time compilation, and profile-based online optimization to achieve impressive performance. At the time, several Windows applications compiled to Pentium object code ran faster under FX!32 on Alpha than on native Pentium platforms.

  17. A generalization of this checking approach is now attracting attention: proof-carrying code. The idea is to add enough auxiliary information to object code that a receiving platform can check that the code meets certain requirements. Such checking is, by construction, much cheaper than constructing the original proof: the auxiliary information guides the checker in finding a proof. If the checker finds a proof, then the validity of the proof rests only on the correctness of the checker itself, not on the trustworthiness of either the supplied code or the supplied auxiliary information.
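
  A toy sketch of the idea, not any real proof-carrying-code system: the producer ships a tiny stack-machine program together with auxiliary annotations (the claimed stack depth before each instruction), and the receiver runs a cheap linear checker for the policy "never pop an empty stack." With branching control flow such annotations genuinely spare the checker a search; here they simply illustrate that acceptance depends only on the checker's own correctness.

      def check(code, depths):
          """code: list of ('push', v) or ('add',); depths: claimed depth before each."""
          if len(depths) != len(code) or (code and depths[0] != 0):
              return False
          depth = 0
          for (instr, *_), claimed in zip(code, depths):
              if claimed != depth:               # annotation must match the running depth
                  return False
              if instr == "push":
                  depth += 1
              elif instr == "add":
                  if depth < 2:                  # would pop an empty or too-short stack
                      return False
                  depth -= 1
              else:
                  return False                   # unknown instruction: reject
          return True

      program = [("push", 2), ("push", 3), ("add",)]
      print(check(program, [0, 1, 2]))   # True: policy holds, annotations consistent
      print(check(program, [0, 1, 1]))   # False: a bogus annotation is caught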

  18. Of course, what we would really like is a trustworthy system. However, within the realm of cost-effective commercial systems, security systems are never foolproof. Thus, it is better to admit that a system may be trusted out of necessity but is never completely trustworthy. This distinction becomes especially important in rights management (see chapter 8).

  19. A build system takes care of maintaining a graph of configurations (of varying release status), including all information required to build the actual deliverables as needed. Industrial-strength build systems tend to apply extensive consistency checks, including automated runs of test suites, on every check-in of new code.
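
  Below is a minimal sketch of the kind of check-in gate the note describes, written as a hypothetical Python script; the build and test commands are placeholders for whatever the project actually uses.

      import subprocess
      import sys

      def run(step, command):
          """Run one consistency check and report whether it succeeded."""
          print(f"-- {step}: {' '.join(command)}")
          return subprocess.run(command).returncode == 0

      def accept_checkin():
          # Both commands are placeholders; a real build system would rebuild only
          # the affected configurations and run the relevant test suites.
          if not run("build", ["make", "all"]):
              return False
          if not run("tests", ["make", "test"]):
              return False
          return True

      if __name__ == "__main__":
          sys.exit(0 if accept_checkin() else 1)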

  20. Some competing networking technologies like Frame Relay and Asynchronous Transfer Mode required that a new LAN infrastructure be deployed. By the time they rolled out, the Internet had become strongly established and was benefiting from the positive feedback of direct network effects.

  21. In fairness to those designers, providing more addresses requires more address bits in the header of each packet, reducing network efficiency; the 32-bit addresses chosen allow 2^32, or roughly four billion, distinct addresses. Four billion sounds sufficient in a world with a population of about six billion, but in fact the administrative realities of allocating blocks of addresses to organizations leave many addresses unused, and many Internet addresses are assigned to unattended devices rather than people. The telephone network experiences a similar problem with block allocation of area codes to areas of widely varying populations.

  22. Not too much significance should be attached to this razor blade analogy because of two major differences. Razor use does not experience significant network effects, and both razors and blades are sold to the same consumers. In fact, we have not been able to identify a close analogy to client-server complementarity in the material world.

  23. There are difficult technical issues to overcome in peer-to-peer as well. For example, many hosts have dynamically assigned network addresses to aid in the administration of address resources. This is not a problem with clients, which normally initiate requests, but it is an obstacle for peers and servers, which must be contacted. Static address assignment is feasible for servers but likely not for (a large number of) peers.

  24. The data generated by a program that summarizes its past execution and is necessary for its future execution is called its state. A mobile agent thus embodies both code and state.
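
  A small sketch of this definition in Python: the hypothetical agent below carries both its code and its state (a work list and a log of past work), and pickle stands in for shipping that state to another host; a real mobile agent platform would also move or re-resolve the code itself.

      import pickle

      class CrawlerAgent:
          def __init__(self, pending):
              self.pending = list(pending)   # state: work not yet done
              self.visited = []              # state: summary of past execution

          def step(self):
              if self.pending:
                  self.visited.append(self.pending.pop(0))

      agent = CrawlerAgent(["host-a", "host-b", "host-c"])
      agent.step()                           # partial execution on the first host

      snapshot = pickle.dumps(agent)         # code reference plus state leave this host
      resumed = pickle.loads(snapshot)       # ... and execution continues elsewhere
      resumed.step()
      print(resumed.visited)                 # ['host-a', 'host-b']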

  25. The network cloud already has considerable processing and storage resources. However, these resources support embedded software that operates the network, and they are not made available to applications directly.

  26. Communication links require right-of-way, so they are a natural target for the utility model. It is not practical for individual organizations, even large corporations, to obtain their own right-of-way for communication facilities. Extending the sharing of these links to the provisioning of shared switches is a natural step. Thus, it is to be expected that networking would adopt the utility model first.



