
C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



The backprop.cpp file implements the simulator controls. First, network parameters are accepted from the user. Assuming Training mode is used, the training file is opened and data is read from it to fill the IO buffer. Then the main loop is executed, in which the network processes the data pattern by pattern to complete a cycle, which is one pass through the entire training data set. (The IO buffer is refilled as required during this process.) After each cycle, the file pointer is reset to the beginning of the file and another cycle begins. The simulator continues running cycles until one of two stopping criteria is met:

1.  The maximum cycle count specified by the user is exceeded.
2.  The average error per pattern for the latest cycle is less than the error tolerance specified by the user.

When either of these occurs, the simulator stops, reports the error achieved, saves the weights in the weights.dat file, and writes one output vector to the output.dat file. A minimal sketch of this control loop follows.
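The sketch below is a hedged skeleton of that loop, not the book's actual code: it assumes a training file with two inputs and one expected output per pattern (as in the example later in this section), and the network's forward pass, backpropagation, and weight update are marked by comments rather than implemented.

 #include <cstdio>

 int main() {
     const int max_cycles = 300;     // maximum cycles given by the user
     const double tolerance = 0.2;   // error tolerance given by the user
     double avg_error = 1.0e30;      // average error per pattern
     int cycle = 0;
     FILE *fp = fopen("training.dat", "r");
     if (fp == NULL) return 1;
     while (cycle < max_cycles && avg_error > tolerance) {
         double total_error = 0.0;
         int patterns = 0;
         double in1, in2, expected;
         // one cycle: one pass through the entire training set
         while (fscanf(fp, "%lf %lf %lf", &in1, &in2, &expected) == 3) {
             // forward propagate in1, in2; compare with expected;
             // backpropagate and update weights; add the pattern's
             // error to total_error
             patterns++;
         }
         if (patterns > 0) avg_error = total_error / patterns;
         printf("%d\t%f\n", cycle + 1, avg_error);
         rewind(fp);                 // reset file pointer for the next cycle
         cycle++;
     }
     fclose(fp);
     // report the error achieved; save weights.dat and output.dat here
     return 0;
 }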

In Test mode, the network processes exactly one cycle and writes its outputs to the output.dat file. At the beginning of a Test-mode simulation, the network is set up with the weights from the weights.dat file. To simplify the program, the user is asked to enter the number of layers and the layer sizes, although the program could be made to figure these out from the weights file.
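Restoring the weights in Test mode amounts to parsing weights.dat. Here is a minimal, hedged sketch of such a parser (not the book's actual routine), assuming the file layout shown later in this section, where each line carries a layer number followed by one neuron's weights:

 // Hypothetical parser for weights.dat; assumes each line holds a
 // layer number followed by that neuron's weights.
 #include <cstdio>
 #include <cstdlib>
 #include <cstring>

 int main() {
     FILE *fp = fopen("weights.dat", "r");
     if (fp == NULL) return 1;
     char line[256];
     while (fgets(line, sizeof(line), fp) != NULL) {
         char *tok = strtok(line, " \t\n");
         if (tok == NULL) continue;        // skip blank lines
         int layer = atoi(tok);            // first field: layer number
         printf("layer %d weights:", layer);
         while ((tok = strtok(NULL, " \t\n")) != NULL)
             printf(" %f", atof(tok));     // remaining fields: weights
         printf("\n");
     }
     fclose(fp);
     return 0;
 }

A real loader would store the values into the network's weight arrays instead of printing them.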

Compiling and Running the Backpropagation Simulator

Compiling the backprop.cpp file compiles the entire simulator, since layer.cpp is included by backprop.cpp.
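The exact build command depends on your compiler (use an option that targets 80x87 floating-point hardware if one is available); as one hedged example, a modern GNU toolchain would build and run the simulator like this:

 g++ backprop.cpp -o backprop
 backprop

Once you run backprop, you see the following screen (user input appears on its own line):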

 C++ Neural Networks and Fuzzy Logic
         Backpropagation simulator
                 version 1
 Please enter 1 for TRAINING on, or 0 for off:
 Use training to change weights according to your expected outputs.
 Your training.dat file should contain a set of inputs and expected
 outputs. The number of inputs determines the size of the first
 (input) layer while the number of outputs determines the size of
 the last (output) layer :
 1
 -> Training mode is *ON*. weights will be saved
 in the file weights.dat at the end of the
 current set of input (training) data
 Please enter in the error_tolerance
 -- between 0.001 to 100.0, try 0.1 to start --
 and the learning_parameter, beta
 -- between 0.01 to 1.0, try 0.5 to start --
 separate entries by a space
 example: 0.1 0.5 sets defaults mentioned :
 0.2 0.25
 Please enter the maximum cycles for the simulation
 A cycle is one pass through the data set.
 Try a value of 10 to start with
 300
 Please enter in the number of layers for your network.
 You can have a minimum of three to a maximum of five.
 three implies one hidden layer; five implies three hidden layers:
 3
 Enter in the layer sizes separated by spaces.
 For a network with three neurons in the input layer, two neurons
 in a hidden layer, and four neurons in the output layer, you would
 enter: 3 2 4.
 You can have up to three hidden layers for five maximum entries :
 2 2 1
 1        0.353248
 2        0.352684
 3        0.352113
 4        0.351536
 5        0.350954
 ...
 299      0.0582381
 300      0.0577085
 ------------------------
         done:   results in file output.dat
                 training: last vector only
                 not training: full cycle
                 weights saved in file weights.dat
 -->average error per cycle = 0.20268 <--
 -->error last cycle = 0.0577085 <--
 ->error last cycle per pattern= 0.0577085 <--
 ------>total cycles = 300 <--
 ------>total patterns = 300 <--


The cycle number and the average error per pattern are displayed as the simulation progresses (not all values are shown above). You can monitor these to make sure the simulator is converging on a solution. If the error does not decrease beyond a certain point, but instead drifts or blows up, you should restart the simulator with a new starting point defined by the random weights initializer. You could also try decreasing the learning rate parameter; learning may be slower, but this may allow a better minimum to be found.
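Picking a new starting point simply means reseeding and regenerating the initial random weights. A hedged sketch of one common scheme follows; the range and method used in layer.cpp may differ.

 // Illustrative random-weight initialization; the range [-1, 1] is an
 // assumption, not necessarily what layer.cpp uses.
 #include <cstdio>
 #include <cstdlib>
 #include <ctime>

 double random_weight() {
     return 2.0 * rand() / RAND_MAX - 1.0; // uniform in [-1, 1]
 }

 int main() {
     srand((unsigned) time(NULL)); // a new seed gives a new starting point
     printf("sample weight: %f\n", random_weight());
     return 0;
 }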

This example uses a training set containing just one pattern, with two inputs and one output. The result for that (one and only) last pattern, taken from the file output.dat, is as follows:

 for input vector: 0.400000  -0.400000
 output vector is: 0.842291
 expected output vector is: 0.900000

The match is pretty good, as can be expected, since the optimization is easy for the network; there is only one pattern to worry about. Let’s look at the final set of weights for this simulation in weights.dat. These weights were obtained by updating the weights for 300 cycles with the learning law:

 1 0.175039 0.435039
 1 -1.319244 -0.559244
 2 0.358281
 2 2.421172
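As a hedged illustration of that learning law, each weight is moved by beta times the neuron's error term times its input. The sketch below uses illustrative names and values (beta = 0.25 as in the run above; the error term delta is made up), not the actual layer.cpp interface:

 #include <cstdio>

 // One gradient-descent step of the learning law:
 // w[i] = w[i] + beta * delta * x[i]
 void update_weights(double *w, const double *x, double delta,
                     double beta, int n) {
     for (int i = 0; i < n; i++)
         w[i] += beta * delta * x[i];
 }

 int main() {
     double w[2] = { 0.1, 0.2 };           // illustrative starting weights
     double x[2] = { 0.4, -0.4 };          // the example's input pattern
     update_weights(w, x, 0.05, 0.25, 2);  // delta = 0.05 is illustrative
     printf("%f %f\n", w[0], w[1]);
     return 0;
 }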

We’ll leave the backpropagation simulator for now and return to it in a later chapter for further exploration. You can experiment with the simulator in a number of different ways:

  Try a different number of layers and layer sizes for a given problem.
  Try different learning rate parameters and see their effect on convergence and training time.
  Try a very large learning rate parameter (it should normally be between 0 and 1); then try a value over 1 and note the result.

