
C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



Noise

Noise is a perturbation, or deviation, from the actual. A data set used to train a neural network may have inherent noise in it, or an image may have random speckles in it, for example. The response of a neural network to noise is an important factor in determining its suitability for a given application. In the process of training, you may apply a metric to your neural network to see how well the network has learned your training data. In cases where the metric stabilizes to some meaningful value, whether or not the value is acceptable to you, you say that the network converges. You may wish to introduce noise intentionally during training to find out whether the network can learn in the presence of noise, and whether it can converge on noisy data.
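
As an illustration, here is a minimal sketch, not one of this book's listings, of how you might inject uniform random noise into a training input vector before presenting it to a network. The noise amplitude of 0.1 is a made-up level that you would tune for your own data.

// noise.cpp: add uniform random noise to a vector of training inputs.
// A sketch; the amplitude parameter is an assumed value you would tune.
#include <cstdlib>
#include <ctime>
#include <iostream>

// Perturb each input by a random amount in [-amplitude, +amplitude].
void add_noise(float* inputs, int n, float amplitude)
{
    for (int i = 0; i < n; ++i) {
        float r = (float)rand() / RAND_MAX;          // r in [0, 1]
        inputs[i] += amplitude * (2.0f * r - 1.0f);  // shift to [-1, 1]
    }
}

int main()
{
    srand((unsigned)time(0));
    float pattern[4] = {0.0f, 1.0f, 1.0f, 0.0f};  // a made-up input vector
    add_noise(pattern, 4, 0.1f);                  // hypothetical noise level
    for (int i = 0; i < 4; ++i)
        std::cout << pattern[i] << " ";
    std::cout << std::endl;
    return 0;
}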

Memory

Once you train a network on a set of data, suppose you continue training the network with new data. Will the network forget the intended training on the original set, or will it remember? This question is pursued by researchers interested in preserving a network’s long-term memory (LTM) as well as its short-term memory (STM). Long-term memory is memory associated with learning that persists over the long term. Short-term memory is memory associated with a neural network that decays after some time interval.

Capsule of History

You marvel at the capabilities of the human brain, whose ways of processing information remain largely unknown. It is awesome that the brain discerns very complex situations at far greater speed than a computer can manage.

In 1943, Warren McCulloch and Walter Pitts formulated a model of a nerve cell, a neuron, during their attempt to build a theory of self-organizing systems. Later, Frank Rosenblatt constructed the Perceptron, an arrangement of processing elements, representing nerve cells, organized into a network. His network could recognize simple shapes. This was the advent of different models for different applications.

Those working in the field of artificial intelligence (AI) hypothesized that you can model thought processes using symbols and rules with which to transform the symbols.

One limitation of the symbolic approach relates to how knowledge is represented. A piece of information is localized, that is, available at one location only; it is not distributed over many locations. You can easily see that distributed knowledge leads to a faster and greater inferential process. Information is also less prone to be damaged or lost when it is distributed than when it is localized. Distributed information processing can be fault tolerant to some degree, because there are multiple sources of knowledge to apply to a given problem. Even if one source is cut off or destroyed, other sources may still permit a solution to the problem. Further, with subsequent learning, a solution may be remapped onto a new organization of distributed processing elements that excludes a faulty processing element.

In neural networks, information may impact the activity of more than one neuron. Knowledge is distributed and lends itself easily to parallel computation. Indeed there are many research activities in the field of hardware design of neural network processing engines that exploit the parallelism of the neural network paradigm. Carver Mead, a pioneer in the field, has suggested analog VLSI (very large scale integration) circuit implementations of neural networks.

Neural Network Construction

There are three aspects to the construction of a neural network:

1.  Structure—the architecture and topology of the neural network
2.  Encoding—the method of changing weights
3.  Recall—the method and capacity to retrieve information

Let’s cover the first one—structure. This relates to how many layers the network should contain, and what their functions are, such as for input, for output, or for feature extraction. Structure also encompasses how interconnections are made between neurons in the network, and what their functions are.
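
As a structural sketch only, one might represent such a layered, feed-forward topology with a simple aggregate like the following. This is a made-up declaration with hypothetical layer sizes, not the class design developed later in this book.

// A made-up structural sketch of a three-layer feed-forward topology;
// layer sizes are arbitrary values chosen only for illustration.
const int IN_SIZE  = 4;
const int MID_SIZE = 3;
const int OUT_SIZE = 2;

struct Network {
    float input[IN_SIZE];          // input layer activations
    float hidden[MID_SIZE];        // middle layer, e.g., feature extraction
    float output[OUT_SIZE];        // output layer activations
    float w1[IN_SIZE][MID_SIZE];   // input-to-middle connections
    float w2[MID_SIZE][OUT_SIZE];  // middle-to-output connections
};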

The second aspect is encoding. Encoding refers to the paradigm used for determining and changing the weights on the connections between neurons. In the case of the multilayer feed-forward neural network, you can initially define the weights by randomization. Subsequently, in the process of training, you can use the backpropagation algorithm, a means of updating the weights starting from the output layer and working backward. When you have finished training the multilayer feed-forward neural network, you are finished with encoding, since the weights do not change after training is completed.
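
As a small illustration of the initialization step, here is a sketch, not taken from the book's later listings, of randomizing the weights of one layer of connections. The weight range of [-0.5, 0.5] is an assumed choice made only for this example.

// weights.cpp: randomize the weights of one layer of connections.
// A sketch; the range [-0.5, 0.5] is an assumed choice.
#include <cstdlib>
#include <ctime>
#include <iostream>

const int INPUTS  = 3;  // hypothetical layer sizes for illustration
const int OUTPUTS = 2;

int main()
{
    float weight[INPUTS][OUTPUTS];
    srand((unsigned)time(0));
    for (int i = 0; i < INPUTS; ++i)
        for (int j = 0; j < OUTPUTS; ++j) {
            float r = (float)rand() / RAND_MAX;  // r in [0, 1]
            weight[i][j] = r - 0.5f;             // shift to [-0.5, 0.5]
        }
    // Print the initialized weight matrix.
    for (int i = 0; i < INPUTS; ++i) {
        for (int j = 0; j < OUTPUTS; ++j)
            std::cout << weight[i][j] << " ";
        std::cout << std::endl;
    }
    return 0;
}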

Finally, recall is also an important aspect of a neural network. Recall refers to getting an expected output for a given input: if the same input as before is presented to the network, the same corresponding output as before should result. The type of recall characterizes the network as either autoassociative or heteroassociative. Autoassociation is the phenomenon of associating an input vector with itself as the output, whereas heteroassociation is that of recalling a related vector given an input vector. Suppose you have a fuzzy remembrance of a phone number. Luckily, you stored it in an autoassociative neural network. When you apply the fuzzy remembrance, you retrieve the actual phone number; this is a use of autoassociation. If instead you want the individual’s name associated with a given phone number, that would require heteroassociation. Recall is closely related to the concepts of STM and LTM introduced earlier.
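
To make the autoassociation idea concrete, here is a minimal sketch, not one of the book's networks, using a Hebbian outer-product weight matrix over bipolar (+1/-1) vectors: a noisy version of the stored pattern recalls the clean pattern itself. The stored pattern and the single flipped bit are made up for the example.

// autoassoc.cpp: a minimal autoassociative recall sketch using a
// Hebbian outer-product weight matrix and bipolar (+1/-1) vectors.
// The stored pattern and the noisy probe are made-up examples.
#include <iostream>

const int N = 6;

int main()
{
    int pattern[N] = {1, -1, 1, 1, -1, -1};  // pattern to store
    int w[N][N];

    // Encoding: Hebbian outer product, with a zeroed diagonal.
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            w[i][j] = (i == j) ? 0 : pattern[i] * pattern[j];

    // A "fuzzy remembrance": the stored pattern with one bit flipped.
    int probe[N] = {1, -1, 1, 1, -1, -1};
    probe[2] = -probe[2];

    // Recall: threshold the matrix-vector product back to +1/-1.
    int recalled[N];
    for (int i = 0; i < N; ++i) {
        int sum = 0;
        for (int j = 0; j < N; ++j)
            sum += w[i][j] * probe[j];
        recalled[i] = (sum >= 0) ? 1 : -1;
    }

    for (int i = 0; i < N; ++i)
        std::cout << recalled[i] << " ";  // matches the stored pattern
    std::cout << std::endl;
    return 0;
}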

The three aspects of construction mentioned above are what essentially distinguish one neural network from another, and they are part of the design process.




