Introduction - What Is Molecular Computing?


The history of technology is also the history of ideas. Sometimes ideas that have become stale and rigid continue to hold sway and hamper scientific and technological innovation. The difficulty is that once a certain set of concepts and images is in place, those concepts may take on a life and power of their own—far beyond anything their creators intended—guiding and shaping further development downstream until it becomes inconceivable to think of a certain technology as having taken any other track.

The standard paradigm of digital computing with its division into "hardware" and "software" is a case in point. The division has been recruited to explain (among other things) the relationship between brain and mind, where the human mind is nothing more than the software that happens to be running on the "wetware" that is the human brain. In fact, the strong version of the Church-Turing thesis would state that all processes in nature are digitally duplicable, within reason, using a set of rules of the Turing form. Turing restated this with his famous Turing test, wherein he assumed it would be possible to program a digital computer in such a way that its responses would be indistinguishable from those of a human, claiming that such a machine had just as much right to be called intelligent as the human did. Here, the specifically biological properties of the brain would not be essential in any way to its information processing capabilities.

The authors of this book take a different view. We believe that the great power of the human brain and of other similar biological organisms—feats of recognition, immense parallel processing abilities—is due to the fact that we do not operate according to the digital information/hardware split, nor are we Turing machines. Although systems in nature may be describable by mathematical maps that are Turing computable, there is no claim that the amount of time and space required satisfies real-time constraints imposed by the environment, or even that they are computable given all the time, space, and energy available in the universe. Nor is there evidence that actual biological intelligences work this way. Humans and other biological organisms do not have little zeros and ones encoded in our systems, nor do we operate according to digital algorithmic principles. Enzymes in our bodies perform marvelous feats of recognition—being able to pick out one particular protein from the myriad we have running around and break it in half—yet none of this is done according to any digital program encoded somewhere or to stored tables of discrete rules. We (and other biological organisms) work according to what we call in this book molecular computing. As Michael and Deborah Conrad have argued (Conrad and Conrad 1997), biological organisms process information in a manner that exploits physical-dynamical features (including quantum effects) that are computationally costly in terms of the number of digital switching operations required—if they can be captured digitally at all. It is the physical characteristics of material systems—whether they be relatively simple chemical systems or the material in biological cells—that allow highly complex information processing to occur. It is the feats of this paradigm that we wish to explore.

Molecular computers are information processing systems in which individual molecules play a crucial functional role. Artificial information processing systems fabricated from molecular materials might emulate biology or follow de novo architectural principles. Much work has already been done on shrinking the circuits and switches of conventional silicon architectures down to the atomic scale, using molecular wires and switches and exploiting the Coulomb effect in "nano-islands" to encode a bit in the presence or absence of a single electron. With conductive polymers and the discovery of molecules such as the rotaxanes—which have bistable states and can theoretically encode information—much progress has already been made and will undoubtedly lead to new "conventional computing" devices. But we should remind ourselves again that this is not how biological organisms process information.

The two biggest scientific developments in the second half of the twentieth century have been computer science and molecular biology. The parallels between these fields have led many investigators to wonder whether a deeper link should in fact be made, using analogies from one in an attempt to provoke advances in the other. One example would be the replication and transcription of DNA, which forcibly remind one of the tape-writing and tape-reading operations in a digital computer. This analogy can be misleading: The transcription and translation of DNA produce a protein, which in its unfolded form is a long chain of amino acids bound to a polypeptide backbone. Under the influence of the surrounding cell medium and a nonzero temperature, this chain then somehow folds up to produce a three-dimensional protein held together by various weak interactions (van der Waals forces, proton hopping, etc.). The "information" can be argued to reside in several different places: partly in the original chain, partly in the folding-up process, and partly in the final structure of the protein. In any case, the "encoding" of the information relies heavily on actual physical molecules and on how they interact with each other and with their environment, which includes a heat bath and the molecules of the surrounding medium. By comparison, the tape-reading/tape-writing aspect of computer programs is more or less irrelevant—a program is nothing more than pure information and could conceivably be encoded into the machine in various different ways. In fact, it is because conventional computers have so far been able to ignore the effects of thermal noise and the atomic nature of their materials that software can exist in the first place.

The present interest in biocomputing is due to many factors. The first impetus arises from electronics. Limits are extended year after year, yet at some point the size, speed, and power dissipation of switches based on silicon or other conventional materials will run up against barriers set by the basic laws of physics. Already the quantum-mechanical "leakiness" between wires only nanometers apart has sparked interest in somehow harnessing the power of quantum mechanical devices (quantum cellular automata, etc.). The second impetus is that although conventional computer science has been extremely successful, a number of critical problems in information processing have remained stubbornly out of reach: pattern recognition, learning, and parallelism are three examples where biological systems remain far ahead of their silicon mimics.

Most frustrating is the fact that although integrated circuit technology has managed to squeeze ever more lines onto a chip, only a small portion of the silicon is actually active at any given time. Paradoxically, packing twice as many silicon switches into a given volume—increasing the number of components—decreases the fraction of material that is active. Because of this, it is doubtful that future computers will be able to do any better than present-day computers when it comes to problems of high complexity.

To harness the dormant computing power in a system it is, in general, necessary to give up conventional programmability. Conrad (1990) has expanded this notion into the trade-off principle:

A computing system cannot have all of the following three properties: structural programmability, high computational efficiency, and high evolutionary adaptability. Structural programmability and high computational efficiency are always mutually exclusive. Structural programmability and evolutionary adaptability are mutually exclusive in the region of maximum effective computational efficiency (which is always less than or equal to the computational efficiency).

Programmability versus evolvability. In order for a system to evolve toward higher efficiency, it must be able to change and adapt to circumstances. It is everyone's experience that a single change in a computer program usually leads to major changes in the execution sequence, and rarely results in a program that still works! Redundancies can be introduced to confer fault tolerance, but in that case any changes in function are prevented. A computer program is a fragile system—small changes "break" it—and thus it cannot serve as a usable substrate for evolution. Variation and selection are efficient as a method of adaptation only if a potentially useful alternative can be produced through a single structural alteration. As any programmer knows, to his chagrin, something as simple as a flipped bit or a misplaced character regularly renders a program completely unusable—often unable even to compile. By comparison, in biological systems, enzymes usually have their shapes and functions only slightly altered by a point mutation. "Programming" occurs at the level of the amino acid sequence, but it is at least one step removed from the "running program" aspect of the enzyme, because shape and function emerge from that sequence through a continuous dynamical folding process. It is this intermediate step, the continuous dynamics of protein folding, that makes it likely that single mutations will still lead to functionally acceptable forms of the enzyme, and this is critically important for maintaining a nonnegligible rate of evolution.
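
As a toy illustration of this contrast (our sketch, not part of the original argument), one can compare how a discrete program and a continuous "shape" respond to a single small mutation. The snippet below is a hedged caricature: it flips one character in a tiny piece of Python source and checks whether the result still compiles, then nudges one parameter of a smooth function and observes that the output changes only slightly.

```python
import random

# A tiny "digital program": one flipped character usually breaks it outright.
SOURCE = "def f(x):\n    return 3 * x + 1\n"

def mutate_source(src: str) -> str:
    """Replace one randomly chosen character with another printable character."""
    i = random.randrange(len(src))
    return src[:i] + random.choice("abcdefghijklmnopqrstuvwxyz(){}:=+* ") + src[i + 1:]

random.seed(1)
broken, trials = 0, 1000
for _ in range(trials):
    try:
        compile(mutate_source(SOURCE), "<mutant>", "exec")
    except SyntaxError:
        broken += 1
print(f"{broken}/{trials} single-character mutants fail even to compile")

# A caricature of the protein case: behavior emerges from a continuous mapping,
# so a small "point mutation" in one parameter changes the output only slightly.
def shape(x: float, a: float = 3.0, b: float = 1.0) -> float:
    return a * x + b

print(f"original: {shape(2.0):.3f}   mutant: {shape(2.0, a=3.01):.3f}")
```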

Programmability versus efficiency. Individual components usually are not optimized for the task they are doing. If components could always evolve to suit the task at hand, networks could learn to use their resources in parallel. In a computer composed of N particles, there are on the order of N² interactions that could in principle be carried out simultaneously, yielding enormous parallelism. Unfortunately, for a system to exhibit formal computational behavior, constraints must be introduced that suppress a large fraction of these interactions.
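
A rough back-of-the-envelope sketch of this point (ours, with arbitrary illustrative numbers): the count of possible pairwise interactions among N components grows on the order of N², whereas a formally constrained architecture in which each component interacts with only a fixed handful of neighbors exploits a vanishing fraction of them.

```python
def possible_pairs(n: int) -> int:
    """All pairwise interactions among n components: n*(n-1)/2, i.e., O(n^2)."""
    return n * (n - 1) // 2

def constrained_links(n: int, neighbors: int = 4) -> int:
    """A wiring-constrained architecture: each component talks to a fixed,
    small number of neighbors, so the count grows only linearly in n."""
    return n * neighbors // 2

for n in (10, 1_000, 1_000_000):
    total, used = possible_pairs(n), constrained_links(n)
    print(f"N={n:>9,}: possible {total:.2e}, wired {used:.2e}, "
          f"fraction exploited {used / total:.2e}")
```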

Much has been made of the supposed thermodynamic limits of computing. Bennett and Landauer (Bennett 1973; Landauer 1982) have shown that physical realizations of formal computation processes can, in principle, proceed with arbitrarily low dissipation when speed, reliability, and required memory space are not important considerations. Thermodynamic costs can be traded for other costs that can be restated in terms of components, reliability, and speed. If a machine is structurally programmable, these other costs are high, and hence the costs of adding more components or accepting less speed and reliability in order to reduce the dissipation would soon outweigh the advantages. If the system foregoes structural programmability, the balance changes. This is a probable explanation for the relatively low energy dissipation found in biological computing.

Efficiency versus complexity. Here, the question is: How much bang does one get for one's buck with different types of problems and different types of computers? For a problem in a polynomial-time class, the resources required grow as some power of the problem size. For a problem in an exponential-time class, the resources required grow at least exponentially with problem size, say as 2ⁿ. In the explosive 2ⁿ case, a 10¹⁰-fold increase in resources allows an additive increase in problem size of only about 33. The dramatically different capabilities of biological organisms and von Neumann machines are largely due to the fact that the former are capable of solving much larger problems in the polynomial-time class. This may be because of the different mechanisms being used (see chapter 4 of this volume). Also, systems that opt for efficiency and evolutionary adaptability would be better suited to coupling increases in computational resources to increases in problem size.
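
To make the arithmetic concrete (our worked example, using the numbers quoted above): with a 2ⁿ cost curve, multiplying the available resources by 10¹⁰ increases the reachable problem size by only log₂(10¹⁰) ≈ 33, whereas with a polynomial cost such as n³ the same factor multiplies the reachable size by a factor of more than two thousand.

```python
import math

FACTOR = 1e10  # a 10^10-fold increase in available resources

# Exponential cost 2^n: the gain in problem size is only additive.
print(f"2^n cost : problem size grows by ~{math.log2(FACTOR):.1f}")

# Polynomial cost n^3: the reachable problem size is multiplied, not incremented.
print(f"n^3 cost : problem size is multiplied by ~{FACTOR ** (1 / 3):.0f}")
```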

The biologically motivated molecular computer engineer is not trying to solve the origin-of-life problem or to create "living computers". The more-than-sufficient objective is to exploit the characteristic properties of biological macromolecules to produce devices that perform useful information processing functions.

Looking at how biological organisms process information, one is struck by certain aspects:

  • The ubiquitousness of proteins and a "two-step process" of transcription

  • A very high degree of parallelism

  • A high degree of complexity

Ignoring for the present the question of whether proteins are the optimal mechanism or whether nature (and evolution) simply used what was available, it should be pointed out that a very important aspect is that biological systems depend for their "information processing" capabilities on what is known as molecular recognition. Molecules bind weakly to other molecules—not as tightly as in normal covalent bonding, but not so weakly that different molecules cannot be discriminated. This recognition is, at base, a quantum effect and is one of the mechanisms by which parallelism is introduced into the system.

Assume a potential surface with many valleys, one of which corresponds to the desired solution of a problem (i.e., the lowest energy state). In the absence of external perturbation (e.g., thermal agitation), a classical system would never make the transition from an incorrect potential well to the correct one. A microscopic (quantum) system, with particles such as electrons, would inevitably find the proper well by virtue of barrier penetration. But to actually exploit this for problem solving, it is necessary to "put a handle on the electron", or in more technical terms, to embed the microsystem in a macroscopic architecture in which the output state is obvious. Macromolecules such as proteins are intermediate-size architectures: too large to undergo barrier penetration per se, but with pattern-matching motions that may in large part be controlled by the electronic wave function. When a protein is "recognized" by an enzyme, we are seeing the results of a many-valued parallel exploration of phase space made possible by quantum mechanics and carried up to the macroscale. Equivalently, the difficult problem of pattern recognition has been turned into the physical process of minimizing the free energy of the protein-enzyme complex, a natural occurrence in the physical universe.
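
A purely classical caricature of this picture may help (our sketch; it does not model barrier penetration itself, only the idea of turning a search problem into energy minimization): on a tilted double-well potential, plain gradient descent started on the wrong side stays trapped in the shallow valley, while a search that is allowed occasional uphill moves, standing in for the thermally and quantum-assisted exploration described above, settles into the deeper valley.

```python
import math
import random

def potential(x: float) -> float:
    """A tilted double well: a shallow local minimum near x ~ -0.7 and a
    deeper global minimum near x ~ +1.0 (the 0.3*(x - 1.5)**2 term tilts it)."""
    return (x ** 2 - 1) ** 2 + 0.3 * (x - 1.5) ** 2

def gradient_descent(x: float, step: float = 0.01, iters: int = 5000) -> float:
    """Pure downhill motion: it cannot cross the barrier between the two wells."""
    for _ in range(iters):
        grad = (potential(x + 1e-6) - potential(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

def annealed_search(x: float, temp: float = 1.0, iters: int = 20000) -> float:
    """Metropolis-style search: occasional uphill moves let it escape the trap."""
    for i in range(iters):
        t = temp * (1 - i / iters) + 1e-3
        candidate = x + random.gauss(0, 0.2)
        delta = potential(candidate) - potential(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
    return x

random.seed(0)
start = -1.0  # begin on the wrong (shallow) side of the barrier
print(f"gradient descent ends near x = {gradient_descent(start):+.2f}")
print(f"annealed search  ends near x = {annealed_search(start):+.2f}")
```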

This book has been written in an attempt to elucidate many of the issues surrounding the above ideas and their application to molecular computing systems. We have tried to organize the chapters in such a way that the flow is from most abstract and most basic to more and more complex or applied systems.

The following chapter (chapter 1) is by Michael Conrad and Klaus-Peter Zauner. Their chapter covers and expands on several of the issues mentioned above, analyzing the uses of proteins and other such molecules from an information processing point of view. Included are analyses of relevant molecular properties and the necessity of the macro-micro-interface. The authors then move into a description of a prototype system, complete with a recipe for how to build it. Finally, chapter 1 ends with descriptions of experimental systems and what has been accomplished so far.

Chapter 2 is by Jean-Marie Lehn and Tanya Sienko. As has already been mentioned, molecular recognition is what underlies many of these biological systems, running from the "information processing" aspects of enzymes to self-recognition, self-assembly, and self-organizing systems. After a short explanation of what molecular recognition is and how to design systems that do it well, the authors work from the simplest types of self-recognition up to the most complex, along with a few comments on information transfer and how concepts of molecular recognition may be used.

The next two chapters, chapters 3 and 4, can be said to constitute the reaction-diffusion part of the book. The chemical components of the media themselves are not all that complex, yet the resulting activity can be extremely complex and lends itself to parallel information processing of certain tasks. Andrew Adamatzky (chapter 3) has contributed a chapter on the theory of computation in nonlinear media. The first part covers a theory of excitable and diffusive processors. Adamatzky then moves on to an explanation and examples of such systems with specialized processors. The chapter ends with a long section on universal processors, with a detailed explanation of the example of collision gates in DNA and monomolecular arrays. Chapter 4, by Nicholas Rambidi, goes further, with descriptions of physical reaction-diffusion systems found in chemical media and the theory of how to build a computer based on them, along with a large number of experimental examples.

At this point, we move back to more biologically oriented systems, and specifically to DNA computing. Carlo Maley (chapter 5) has kindly contributed a review chapter covering the work done to date, as well as comments on where the field will probably advance in the future, including the pros and cons of the field.

Chapter 6 has been contributed by Duane Marcy, Bryan Vought, and Robert Birge, and covers bioelectronics and protein-based optical computing. The main focus of this chapter is on the possibilities inherent in bacteriorhodopsin, as well as on how optical/biological computing would differ from semiconductor-based systems.

Finally, continuing our trend, we end up with a chapter on the present status of biosensors, contributed by Satoshi Sasaki and Isao Karube (chapter 7). Karube almost single-handedly invented the field many years ago, fusing electronics and micromachines together with biological films or organisms whose reaction (at the molecular level) could be read out (i.e., used) to provide a trigger signal to the micromachine/electronics. Japan remains the country most advanced in this field. We hope our readers will enjoy this review article, which covers a wide range of biosensors and what they can do.

Some final comments should be made about what we have not included in this book: We have not included articles on quantum computing because we feel this discipline lies outside the scope of what we wish to address. Nor, except in a tangential way, have we touched on the field of molecular electronics, where the attempt has been to shrink circuits down even further, to the level of, say, a nanotube or a molecular wire. We have also stayed away from neural networks, feeling that there are sufficient books extant to satisfy any seeker of knowledge in that area.

What we have tried to do is sketch out, even if only lightly, certain areas and topics that have remained obscure to most computer scientists and that we feel have great potential for the future. Molecular computing and the non–von Neumann paradigms for information processing remain a vast area in which only a few explorers have left their footprints. We hope that this book is only the first of many guides into this new field.

References

Bennett, C. H. 1973. Logical reversibility of computation. IBM J. Res. Dev. 17: 525–532.

Conrad, M. 1990. Molecular Computing, issue of Advances in Computers, Vol. 31. New York: Academic Press.

Conrad, M., and D. Conrad. 1997. Of maps and territories: A three point landing on the mind-body problem. In Matter Matters, ed. P. Arhem, H. Liljenstrom, and U. Svedin, 107–137. New York: Springer-Verlag.

Landauer, R. 1982. Uncertainty principle and minimal energy dissipation in the computer. Int. J. Theor. Phys. 21: 283–297.



