1.3 Trade-Off Principle

The comparative limits of programmable and nonprogrammable architectures can be stated in terms of a trade-off principle: Programmability, efficiency, and evolutionary adaptability are incompatible. A system, to achieve high programmability, must trade off efficiency and evolvability.

A computing system is programmable if the initial state and a chosen set of formally defined state transition rules can be explicitly invoked. The programmer communicates the intended relations among the system states to the system, which in turn interprets the rules in rigid adherence to a finite user manual. If the programmability is bound into the material structure of the system, we will refer to it as structural. Material physical systems generally have self-organizing dynamics, hence a will of their own that is incompatible with prescriptive programmability. The computer designer must quench these self-organizing aspects in order to achieve a physical realization of a formal system. Information processing systems, however, do not need to be programmable; functionality can be molded through adaptive procedures.

We can phrase the programmability-efficiency trade-off in terms of interactions. To be as generous as possible, let us make the assumption that elementary particles can serve as active components in a computing system and the system contains n such particles. The potential function of the system can call on as many as n² interactions. If the system is structurally programmable, the input-output behavior of components should remain the same as more components are added. This is only possible if the components have a fixed number of possible inputs. Thus the number of allowable interactions scales as Cn, where C is a constant. The fraction of interactions available for problem solving falls off as C/n as the number of components increases. If the system is run in a serial mode, therefore in an effectively programmable mode, the falloff is even faster (i.e., as K/n², where K is the number of components that can be active at any given time). If quantum features are pertinent to the system's problem solving, interference effects among the possible states of the particles must also be considered, further increasing the disparity between the potential complexity of natural systems and systems configured to be structurally programmable. The assumption that single particles could act in accordance with a finite user manual is of course quite unreasonable. As the number of particles per component decreases, it becomes increasingly likely that the system will self-organize in a way that escapes a simple user manual description (Conrad 1995).
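The scaling argument above can be made concrete with a minimal sketch. The constants C and K here are arbitrary illustrative choices standing in for the fixed fan-in of a programmable component and the number of components active at one time; they are not values from the source.

```python
def interaction_fractions(n, C=4, K=2):
    """Fraction of the n**2 potential pairwise interactions that a
    structurally programmable system can exploit, in parallel and in
    serial mode. C and K are illustrative constants (assumptions)."""
    potential = n ** 2        # interactions the physics makes available
    parallel_used = C * n     # fixed fan-in: usable interactions scale as C*n
    serial_used = C * K       # serial mode: only K components active at once
    return parallel_used / potential, serial_used / potential

for n in (10, 100, 1000):
    p, s = interaction_fractions(n)
    print(f"n={n}: parallel fraction {p:.4g}, serial fraction {s:.4g}")
```

As n grows, the parallel fraction falls off as C/n and the serial fraction as CK/n², i.e., proportionally to K/n² for fixed C, matching the falloff rates stated in the text.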

The trade-off principle is intimately connected to the compression issues considered in section 1.2. The salient point is that all structurally programmable architectures must have a highly compressible description in order to conform to formal rules specified in a simple user manual. Constructing a formal component calls for a large number of particles, because this requires quenching of self-organizing characteristics that deviate from the user manual. A large number of such formal and hence low-complexity components is needed to build a system with complex behavior. Efficiency in terms of the necessary number of particles will therefore be low. In short, to make a heavyweight architecture out of lightweight components, the system must be large.

The conflict between structural programmability and evolutionary adaptability can also be understood in terms of compression. In a program that is a highly compressed description of the system's behavior, a change in any single bit will, in general, have radical effects on the behavior of the modified program. The program ordinarily describes an input-output table that is much larger than the program. Any bit modification in the program will, in general, alter many bits in the input-output table. Of course, the uncompressed input-output table can always be changed gradually (bit by bit). But it is only possible to act on this table through modifications of the program—hence the gradualism requirement for evolutionary adaptability cannot, in general, be satisfied. If biological systems were amenable to a highly compressed description, they would a fortiori be unsuitable for evolutionary adaptation.

The trade-off principle does not assert that structural programmability absolutely precludes evolutionary adaptability. Biological systems in nature are clearly highly evolvable. In principle, it should be possible to use a structurally programmable machine to simulate the structure-function plasticity that allows for this evolvability. As long as mutations are restricted to the virtual level, rather than to the program as encoded in the state of the base machine, it would be possible to duplicate the requisite evolvability. But this comes at a computational cost; the computational work required to simulate plastic structure-function relations puts a severe practical limit on the degree of evolvability that can be retained. In effect, the simulation program is a decompression of some highly compressed program that could do the same job as the simulated system. The decompression, if appropriately introduced, reduces the fragility of the program.

The decompression has an equivalent in the interaction picture. Redundancy in the number of components and interactions among them serves to buffer the effect of mutation on features of the system critical for function (Conrad 1979). This is not an entirely general fact; it is restricted to a subclass of systems with self-organizing dynamics. Protein folding, in particular, fits this picture. As the length of the amino acid chain increases or as more amino acids with similar properties are available for substitutions, the chance that a mutation will be acceptable increases. Without self-organization, the introduction of redundancy would only yield fault tolerance, not the topological distortability necessary for transformation of function (Conrad 1983).
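The buffering effect of chain length can be sketched with a hypothetical model, not taken from the source: assume function depends only on the chain's mean residue "property" staying within a tolerance of the wild type. The property values are arbitrary stand-ins for twenty residue types, not real amino acid data. A single substitution shifts the mean by an amount that shrinks as 1/length, so longer chains accept a larger fraction of random mutations.

```python
import random

VALUES = list(range(20))  # 20 residue types with made-up property values

def acceptance_rate(length, tolerance=0.2, trials=5000, seed=7):
    """Fraction of random single substitutions that keep the chain's
    mean property within tolerance of the wild type (toy model)."""
    rng = random.Random(seed)
    chain = [rng.choice(VALUES) for _ in range(length)]
    ok = 0
    for _ in range(trials):
        i = rng.randrange(length)
        new = rng.choice(VALUES)
        delta = (new - chain[i]) / length  # effect of one substitution
        if abs(delta) <= tolerance:        # mutation is "acceptable"
            ok += 1
    return ok / trials

for L in (10, 50, 200):
    print(L, acceptance_rate(L))
```

The acceptance rate rises monotonically with chain length, mirroring the claim that redundancy buffers mutation; the model omits the self-organizing dynamics (folding) that the text identifies as essential for distortability rather than mere fault tolerance.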

The structure-function relations that enable high efficiency and high evolvability require context-sensitive components. This sensitivity of the components' behavior to their environment is in sharp contrast to the precisely defined and therefore context-free components of structurally programmable systems. Nevertheless, networks of context-free components run in a parallel mode can also exhibit self-organization, as in the case of artificial neural networks. The self-organization, however, causes a loss of effective programmability. With the main advantage of rigidly defined components lost, there is no reason to restrict the architecture of the network to context-free components. Instead, context-sensitive components that open the path to high efficiency and high evolvability can be employed.

The trade-off principle suggests that there are two sharply different modes of computing: the high programmability mode versus the high efficiency, high adaptability mode. Biological systems, because they are the products of evolution, must operate in the latter. The remainder of this chapter will focus on initial concrete steps in the direction of artificial systems that operate in the biological mode.




Molecular Computing
ISBN: 0262693313
Year: 2003
Pages: 94
