Chapter 4: Chemical-Based Computing and Problems of High Computational Complexity - The Reaction-Diffusion Paradigm


Nicholas G. Rambidi

4.1 Several Initial Remarks on von Neumann versus non–von Neumann Computing

A vast variety of engineering projects—keystones for industry—were developed in the 1940s and 1950s and initiated the development of digital von Neumann computing devices. The mathematical and computational basis of these projects could be reduced to problems of rather low (polynomial) computational complexity. The computational complexity of the problems arising in these practical projects was of decisive importance in choosing the paradigm used to elaborate the computing techniques then under development.

In the early 1940s, nearly simultaneously with the advent of the von Neumann paradigm, McCulloch and Pitts (1943) offered a radically different approach to designing information processing devices. According to them, a computational system can be designed to be, in a sense, analogous to the human brain. Simple processors (neurons) are the constituent parts of the system, each connected to the other processors in some definite manner. The computing capabilities of the system are defined by its predetermined complex structure (that is, by the pattern of neuron connections), not by a stored program. Problems are solved by the system with a very high degree of parallelism. At the same time, the dynamics inherent in the system define both its storage of information and its information processing capabilities.

McCulloch and Pitts (1943) used two fundamental principles of information processing by biological entities in deriving the basis of this neural net approach. They are:

  • An "all or none" mode of a single neuron activity—that is, a nonlinear dynamic mechanism

  • A very high degree of parallelism of neural connections in a neural net
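
The threshold rule behind the "all or none" mode is easy to state concretely. The following minimal Python sketch is illustrative only (the function name, weights, and threshold values are assumptions, not anything given in the chapter): the unit fires exactly when its weighted input sum reaches a threshold, which is the nonlinear mechanism McCulloch and Pitts built on.

    # A minimal sketch of a McCulloch-Pitts threshold neuron; names and
    # parameter values here are illustrative assumptions.
    def mp_neuron(inputs, weights, threshold):
        """Fire ("all") if the weighted input sum reaches the threshold;
        otherwise stay silent ("none")."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Example: a two-input unit wired to compute logical AND.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", mp_neuron([a, b], weights=[1, 1], threshold=2))

Connecting many such units, with the pattern of connections rather than a stored program determining what is computed, captures the second principle: the massive parallelism of the net.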

Computer designers have repeatedly rediscovered this paradigm over the last several decades. Nonetheless, it is only recently that the neural net approach has become a practical tool for designing information processing methods for problems of high computational complexity.

In the late 1980s, Michael Arbib (1989, 1994) suggested an expansion of contemporary concepts of computation to further mimic the style of the brain. In his article, "The brain as a metaphor for sixth generation computing" (Arbib 1994), he argued: "This style depends on constant interaction of concurrently active systems, many of which express their activity in the interplay of spatio-temporal patterns in manifold layers of neurons" (107).

Arbib's approach is remarkable for several important points:

  1. The brain is an action-oriented computer. Central to the action-oriented view is that the system (human, animal, or computer-robot) must be able to correlate action and the results of its interaction in such a way as to build up an internal "model" of the world.

  2. The brain has a hierarchical, multilevel organization. The extremely important point is that no single-level model can describe brain function.

  3. The brain is not a serial information processing system.

Arbib's ideas were the basis for the renewed interest in neural net devices—more precisely, for the interest in complex versions of semiconductor computer architecture that have enabled researchers to greatly increase the level of computer parallelism. Arbib argued: "It is only in the last few years that there has been a dramatic reawakening of interest in the technological implications of neural computing—in no small part because the developments of VLSI and computer networking had led computer scientists to explore how problem solving can be distributed across a network of interacting, concurrently acting processors" (Arbib 1989, 186).

The basic starting point for the following discussion is the suggestion that a consistent and straightforward implementation of basic biological information processing principles would lead to information processing devices fundamentally different from von Neumann machines in their architecture and dynamics, and based on new technological principles. These devices would not compete with future digital semiconductor techniques but rather supplement them, greatly increasing the capabilities of the "information industry".

The problems discussed in this chapter have a biological background. There are, however, many systems in chemistry, physics, and biology with similar behavioral characteristics. These include tissues of living organisms, assemblies of primitive microorganisms, biological membranes and other biological assemblies, sets of coupled biochemical and chemical reactions with nonlinear kinetics, and so on. The terms biomolecular system and biomolecular computing will be used below for all these entities when general problems are discussed.

This chapter discusses in detail one of the attempts to elaborate non–von Neumann means of information processing: the set of pseudobiological paradigms and, more precisely, the important subset that can be called the reaction-diffusion paradigm.

Two basic points are of great importance for the following discussion. They are:

  • A detailed understanding of the notion of complexity and its significance in information processing

  • The nature of information processing in biological entities—that is, the general principles of their data storage and transformation



