
C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



Chapter 13
Backpropagation II

Enhancing the Simulator

In Chapter 7, you developed a backpropagation simulator. In this chapter, you will put it to use with examples and also add some new features to the simulator: a term called momentum, and the capability of adding noise to the inputs during simulation. There are many variations of the algorithm that try to alleviate two problems with backpropagation. First, like other neural networks, there is a strong possibility that the solution found with backpropagation is not a global error minimum, but a local one. You may need to shake the weights a little by some means to get out of the local minimum, and possibly arrive at a lower minimum. The second problem with backpropagation is speed. The algorithm is very slow at learning. There are many proposals for speeding up the search process. Neural networks are inherently parallel processing architectures and are suited for simulation on parallel processing hardware. While there are a few plug-in neural net or digital signal processing boards available on the market, the low-cost simulation platform of choice remains the personal computer. Speed enhancements to the training algorithm are therefore very necessary.
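Momentum, in the form added later in this chapter, carries a fraction of the previous cycle's weight change into the current one. As a preview, here is a minimal sketch of such an update rule; the names update_weights, alpha, and delta_prev are illustrative, not the simulator's own variables:

#include <cstddef>

// Sketch of a gradient-descent weight update with a momentum term.
// beta = learning rate, alpha = momentum coefficient.
// grad[i] holds the computed weight-change direction for weight i
// (error term times input); delta_prev[i] remembers the last change,
// so a consistent direction builds speed and can help the search roll
// past a shallow local minimum.
void update_weights(float *w, const float *grad, float *delta_prev,
                    std::size_t n, float beta, float alpha)
{
    for (std::size_t i = 0; i < n; ++i) {
        float delta = beta * grad[i] + alpha * delta_prev[i];
        w[i] += delta;
        delta_prev[i] = delta;  // saved for the next cycle
    }
}

With alpha = 0, this reduces to the plain update rule of Chapter 7.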

Another Example of Using Backpropagation

Before modifying the simulator to add features, let’s look at the same problem we used the Kohonen map to analyze in Chapter 12. As you recall, we would like to be able to distinguish alphabetic characters by assigning them to different bins. For backpropagation, we would apply the inputs and train the network with anticipated responses. Here is the input file that we used for distinguishing five different characters, A, X, H, B, and I:

 0 0 1 0 0  0 1 0 1 0  1 0 0 0 1  1 0 0 0 1  1 1 1 1 1  1 0 0 0 1  1 0 0 0 1
 1 0 0 0 1  0 1 0 1 0  0 0 1 0 0  0 0 1 0 0  0 0 1 0 0  0 1 0 1 0  1 0 0 0 1
 1 0 0 0 1  1 0 0 0 1  1 0 0 0 1  1 1 1 1 1  1 0 0 0 1  1 0 0 0 1  1 0 0 0 1
 1 1 1 1 1  1 0 0 0 1  1 0 0 0 1  1 1 1 1 1  1 0 0 0 1  1 0 0 0 1  1 1 1 1 1
 0 0 1 0 0  0 0 1 0 0  0 0 1 0 0  0 0 1 0 0  0 0 1 0 0  0 0 1 0 0  0 0 1 0 0

Each line holds the 5x7 dot-matrix representation of one character, 35 values in all. Now we need to name each of the output categories. We can assign a simple 3-bit representation as follows:


A 000
X 010
H 100
B 101
I 111

Let’s train the network to recognize these characters. The training.dat file, which appends each character’s 3-bit output code to the end of its input line, looks like the following.

 0 0 1 0 0  0 1 0 1 0  1 0 0 0 1  1 0 0 0 1  1 1 1 1 1  1 0 0 0 1  1 0 0 0 1  0 0 0
 1 0 0 0 1  0 1 0 1 0  0 0 1 0 0  0 0 1 0 0  0 0 1 0 0  0 1 0 1 0  1 0 0 0 1  0 1 0
 1 0 0 0 1  1 0 0 0 1  1 0 0 0 1  1 1 1 1 1  1 0 0 0 1  1 0 0 0 1  1 0 0 0 1  1 0 0
 1 1 1 1 1  1 0 0 0 1  1 0 0 0 1  1 1 1 1 1  1 0 0 0 1  1 0 0 0 1  1 1 1 1 1  1 0 1
 0 0 1 0 0  0 0 1 0 0  0 0 1 0 0  0 0 1 0 0  0 0 1 0 0  0 0 1 0 0  0 0 1 0 0  1 1 1
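If you would rather generate such lines than type them, a small hypothetical helper (not part of the book's simulator) can flatten a 5x7 glyph and its 3-bit code into one training.dat line:

#include <fstream>

// Hypothetical helper: write one training.dat line, i.e. 35 input
// values from a 5x7 glyph followed by its 3-bit output code.
void write_training_line(std::ofstream &out, const int glyph[7][5],
                         const int code[3])
{
    for (int row = 0; row < 7; ++row)      // scan the dot matrix row by row
        for (int col = 0; col < 5; ++col)
            out << glyph[row][col] << ' ';
    for (int bit = 0; bit < 3; ++bit)      // append the category code
        out << code[bit] << ' ';
    out << '\n';
}

Calling it once per character, for example with the A glyph and the code {0, 0, 0}, reproduces the first line above.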

Now you can start the simulator. Using the parameters beta = 0.1, tolerance = 0.001, and max_cycles = 1000, with three layers of sizes 35 (input), 5 (middle), and 3 (output), you will get a typical result like the following.

 --------------------------
 -          done:   results in file output.dat
                    training: last vector only
                    not training: full cycle
                    weights saved in file weights.dat
 -->average error per cycle = 0.035713<--
 -->error last cycle = 0.008223 <--
 ->error last cycle per pattern= 0.00164455 <--
 ------>total cycles = 1000 <--
 ------>total patterns = 5000 <--
 ---------------------------
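Two of these figures follow directly from the others: the error last cycle per pattern is the last-cycle error divided by the five training patterns, 0.008223 / 5 = 0.00164455, and total patterns = 5000 is those five patterns presented over 1000 cycles.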

In this case, the simulator stopped at the specified maximum of 1000 cycles. Your results will be different since the weights start at a random point. Note that the tolerance specified was nearly met. Let us see how close the output came to what we wanted. Look at the output.dat file. You can see the match for the last pattern as follows:

 for input vector:
 0.000000  0.000000  1.000000  0.000000  0.000000
 0.000000  0.000000  1.000000  0.000000  0.000000
 0.000000  0.000000  1.000000  0.000000  0.000000
 0.000000  0.000000  1.000000  0.000000  0.000000
 0.000000  0.000000  1.000000  0.000000  0.000000
 0.000000  0.000000  1.000000  0.000000  0.000000
 0.000000  0.000000  1.000000  0.000000  0.000000
 output vector is:
 0.999637  0.998721  0.999330
 expected output vector is:
 1.000000  1.000000  1.000000
 -----------
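The outputs are all within about 0.0013 of their targets. A quick way to read such a vector back as a category code (a sketch using a 0.5 threshold of our own choosing; the simulator itself judges convergence against the tolerance parameter instead) is to round each output to the nearest bit:

#include <cstdio>

// Sketch: round each real-valued output to 0 or 1 (the 0.5 threshold is
// our assumption) and compare with the expected 3-bit code.
bool matches(const double output[3], const int expected[3])
{
    for (int i = 0; i < 3; ++i)
        if ((output[i] >= 0.5 ? 1 : 0) != expected[i])
            return false;
    return true;
}

int main()
{
    const double out[3] = { 0.999637, 0.998721, 0.999330 };  // from output.dat
    const int expect[3] = { 1, 1, 1 };                       // code for I
    std::printf("match: %s\n", matches(out, expect) ? "yes" : "no");
    return 0;
}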




