
C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



Comments on Your C++ Program

Notice the use of the input stream operator cin >> in several places in the C++ program, instead of the C function scanf. The iostream class in C++ was discussed earlier in this chapter.
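For instance, reading a threshold value in the two styles can be compared as follows (a minimal illustration only, not code taken from percept.cpp):

   #include <cstdio>     // for scanf, shown here only for comparison
   #include <iostream>   // for std::cin and std::cout

   int main() {
       float threshold = 0.0f;

       // C style: the format string must match the variable's type
       // std::scanf("%f", &threshold);

       // C++ style: the extraction operator deduces the type from the variable
       std::cin >> threshold;

       std::cout << "threshold entered: " << threshold << "\n";
       return 0;
   }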

The program works like this: first, the network input neurons are given their connection weights, and then an input vector is presented to the input layer. A threshold value is specified, and the output neuron computes the weighted sum of its inputs, which are the outputs of the input layer neurons. This weighted sum is the activation of the output neuron, and it is compared with the threshold value: the output neuron fires (output is 1) if the threshold value is not greater than its activation, and it does not fire (output is 0) if its activation is smaller than the threshold value. In this implementation, neither supervised nor unsupervised training is incorporated.
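The following minimal sketch illustrates this computation for a single input/weight vector pair; it is only an illustration of the rule just described, not the percept.cpp program itself:

   #include <iostream>

   int main() {
       // example weights and inputs for the four input layer neurons
       double weights[4] = {2.0, 3.0, 3.0, 2.0};
       double inputs[4]  = {1.95, 0.27, 0.69, 1.25};
       double threshold  = 7.0;

       // the output neuron's activation is the weighted sum of its inputs
       double activation = 0.0;
       for (int i = 0; i < 4; ++i)
           activation += weights[i] * inputs[i];

       // the neuron fires (output 1) when the threshold does not exceed the activation
       int output = (activation >= threshold) ? 1 : 0;

       std::cout << "activation is " << activation
                 << ", output value is " << output << "\n";
       return 0;
   }

With these values the activation is 9.28, which exceeds the threshold of 7.0, so the output is 1.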

Input/Output for percept.cpp

There are two data files used in this program. One is for setting up the weights, and the other is for setting up the input vectors. On the command line, you enter the program name followed by the weight file name and the input file name. For this discussion, create a file called weight.dat (also on the accompanying disk for this book), which contains the following data:

   2.0 3.0 3.0 2.0
   3.0 0.0 6.0 2.0

These are two weight vectors. Create also an input file called input.dat with the two data vectors below:

   1.95 0.27 0.69 1.25
   0.30 1.05 0.75 0.19
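As a rough sketch of how a program like this might pick up the two file names from the command line and read the whitespace-separated values (an assumption for illustration; the actual percept.cpp in the book may be organized differently):

   #include <fstream>
   #include <iostream>
   #include <vector>

   int main(int argc, char* argv[]) {
       if (argc < 3) {
           std::cerr << "usage: percept weightfile inputfile\n";
           return 1;
       }

       std::ifstream weightFile(argv[1]);   // e.g. weight.dat
       std::ifstream inputFile(argv[2]);    // e.g. input.dat

       // read four weights and four inputs per vector; stream extraction
       // skips whitespace, so line breaks in the data files do not matter
       std::vector<double> w(4), x(4);
       while (weightFile >> w[0] >> w[1] >> w[2] >> w[3] &&
              inputFile  >> x[0] >> x[1] >> x[2] >> x[3]) {
           double activation = 0.0;
           for (int i = 0; i < 4; ++i)
               activation += w[i] * x[i];
           std::cout << "activation is " << activation << "\n";
       }
       return 0;
   }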

During the execution of the program, you are first prompted for the number of vectors that are used (in this case, 2), then for a threshold value for each input/weight vector pair (use 7.0 in both cases). You will then see the following output; the values typed by the user (2 and 7.0) appear after the corresponding prompts.

   percept weight.dat input.dat
   THIS PROGRAM IS FOR A PERCEPTRON NETWORK WITH AN INPUT LAYER OF
   4 NEURONS, EACH CONNECTED TO THE OUTPUT NEURON.
   THIS EXAMPLE TAKES REAL NUMBERS AS INPUT SIGNALS
   please enter the number of weights/vectors
   2
   this is vector # 1
   please enter a threshold value, eg 7.0
   7.0
   weight for neuron 1 is  2           activation is 3.9
   weight for neuron 2 is  3           activation is 0.81
   weight for neuron 3 is  3           activation is 2.07
   weight for neuron 4 is  2           activation is 2.5
   activation is  9.28
   the output neuron activation exceeds the threshold value of 7
   output value is 1
   this is vector # 2
   please enter a threshold value, eg 7.0
   7.0
   weight for neuron 1 is  3           activation is 0.9
   weight for neuron 2 is  0           activation is 0
   weight for neuron 3 is  6           activation is 4.5
   weight for neuron 4 is  2           activation is 0.38
   activation is  5.78
   the output neuron activation is smaller than the threshold value of 7
   output value is 0

Finally, try adding a data vector of (1.4, 0.6, 0.35, 0.99) to the data file. Add a weight vector of (2, 6, 8, 3) to the weight file and use a threshold value of 8.25 to see the result. You can also experiment with other values.
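As a quick check of what to expect before running the program (pairing the new data vector with the new weight vector): the weighted sum is 1.4 × 2 + 0.6 × 6 + 0.35 × 8 + 0.99 × 3 = 2.8 + 3.6 + 2.8 + 2.97 = 12.17, which exceeds the threshold of 8.25, so the output value should be 1.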

Network Modeling

So far, we have considered the construction of two networks, the Hopfield memory and the Perceptron. What other considerations (to be discussed in more depth in the chapters to follow) should you keep in mind?

Some of the considerations that go into the modeling of a neural network for an application are:

   nature of inputs
       fuzzy
           binary
           analog
       crisp
           binary
           analog
   number of inputs
   nature of outputs
       fuzzy
           binary
           analog
       crisp
           binary
           analog
   number of outputs
   nature of the application
       to complete patterns (recognize corrupted patterns)
       to classify patterns
       to do an optimization
       to do approximation
       to perform data clustering
       to compute functions
   dynamics
       adaptive
           learning
               training
                   with exemplars
                   without exemplars
           self-organizing
       nonadaptive
           learning
               training
                   with exemplars
                   without exemplars
           self-organizing
   hidden layers
       number
           fixed
           variable
       sizes
           fixed
           variable
   processing
       additive
       multiplicative
       hybrid
           additive and multiplicative
           combining other approaches
               expert systems
               genetic algorithms

Hybrid models, as indicated above, may combine the neural network approach with expert system methods, or may combine additive and multiplicative processing paradigms.

Decision support systems are amenable to approaches that combine neural networks with expert systems. An example of a hybrid model that combines different modes of processing by neurons is the Sigma Pi neural network, wherein one layer of neurons uses summation in aggregation and the next layer of neurons uses multiplicative processing.
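The following minimal sketch contrasts the two modes of aggregation; it is an illustration of the idea only, not an implementation of any particular Sigma Pi architecture:

   #include <iostream>
   #include <vector>

   // additive (sigma) aggregation: weighted sum of the inputs
   double sigma_unit(const std::vector<double>& x, const std::vector<double>& w) {
       double sum = 0.0;
       for (size_t i = 0; i < x.size(); ++i)
           sum += w[i] * x[i];
       return sum;
   }

   // multiplicative (pi) aggregation: product of the inputs
   double pi_unit(const std::vector<double>& x) {
       double prod = 1.0;
       for (double xi : x)
           prod *= xi;
       return prod;
   }

   int main() {
       std::vector<double> inputs  = {0.5, 2.0, 1.5};
       std::vector<double> weights = {1.0, 0.5, 2.0};

       // one layer sums; the next layer multiplies the outputs it receives
       double s = sigma_unit(inputs, weights);
       std::cout << "sigma unit output: " << s << "\n";
       std::cout << "pi unit output:    " << pi_unit({s, 0.8}) << "\n";
       return 0;
   }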

A hidden layer in a neural network, if there is only one, is a layer of neurons that operates between the input layer and the output layer of the network. Neurons in this layer receive their inputs from the input layer neurons and supply their outputs as the inputs to the output layer neurons. When a hidden layer lies between other hidden layers, it receives its inputs from the preceding hidden layer and supplies its outputs to the following hidden layer.

In modeling a network, it is often not easy to determine how many hidden layers, if any, are needed in the model, and of what sizes. Some approaches, such as genetic algorithms (paradigms that compete with neural network approaches in many situations but that can nevertheless cooperate with them, as here), are at times used to determine the needed or optimal number of hidden layers and/or the number of neurons in those hidden layers. In what follows, we outline one such application.



Copyright © IDG Books Worldwide, Inc.


