C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



Other Experiments to Try

You can try other experiments with the program. For example, you can repeat the input file but with the order of the entries changed. In other words, you can present the same inputs a number of times in different order. This actually helps the Kohonen network train faster. You can try applying garbled versions of the characters to see if the network distinguishes them. Just as in the backpropagation program, you can save the weights in a weight file to freeze the state of training, and then apply new inputs. You can enter all of the characters from A to Z and see the classification that results. Do you need to train on all of the characters or a subset? You can change the size of the Kohonen layer. How many neurons do you need to recognize the complete alphabet?
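Below is a minimal C++ sketch of one way to present the same set of inputs in a different order on each pass. The pattern container and the applyPattern callback are hypothetical stand-ins for whatever routine your simulator uses to feed one input vector to the Kohonen layer; this is not the program's own code.

#include <algorithm>
#include <functional>
#include <random>
#include <vector>

// Present the same input vectors repeatedly, shuffling the order each pass.
// applyPattern stands in for the routine that feeds one vector to the network.
void trainShuffled(std::vector<std::vector<float> > patterns,
                   const std::function<void(const std::vector<float> &)> &applyPattern,
                   int passes)
{
    std::mt19937 rng(42);                          // fixed seed for repeatable runs
    for (int pass = 0; pass < passes; ++pass)
    {
        std::shuffle(patterns.begin(), patterns.end(), rng);   // new presentation order
        for (const auto &p : patterns)
            applyPattern(p);                       // e.g., one forward pass plus weight update
    }
}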

You are not restricted to the binary inputs of 1 and 0 that we used; you can also apply grayscale analog values. The program displays the input pattern according to the quantization levels that were set. You can expand this set of levels and use a graphics interface to display more of them. You can then try pattern recognition of arbitrary images, but remember that processing time will increase rapidly with the number of neurons used. The number of input neurons you choose is dictated by the image resolution, unless you filter and/or subsample the image before presenting it to the network. Filtering is the process of applying a type of averaging function to groups of pixels. Subsampling is the process of producing a lower-resolution output image by selecting fewer pixels than the source image contains. If you start with an image that is 100 × 100 pixels, you can subsample this image 2:1 in each direction to obtain an image that is one-fourth the size, or 50 × 50 pixels. Whether you throw away every other pixel to get this output resolution or apply a filter is up to you. As an example of a very simple filter, you could average every two pixels to produce one output pixel.
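As a rough sketch of the idea, the following function combines 2:1 subsampling in each direction with a very simple averaging filter: each 2 × 2 block of source pixels is averaged into one output pixel, so a 100 × 100 image becomes 50 × 50. The flat, row-major vector layout of grayscale values is an assumption for illustration, not the book's data format.

#include <vector>

// Reduce a width x height grayscale image to (width/2) x (height/2) by
// averaging each 2 x 2 block of source pixels into one output pixel.
std::vector<float> subsample2to1(const std::vector<float> &src, int width, int height)
{
    int outW = width / 2, outH = height / 2;
    std::vector<float> dst(outW * outH);
    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x)
        {
            float sum = src[(2 * y) * width + 2 * x]
                      + src[(2 * y) * width + 2 * x + 1]
                      + src[(2 * y + 1) * width + 2 * x]
                      + src[(2 * y + 1) * width + 2 * x + 1];
            dst[y * outW + x] = sum / 4.0f;        // average of the 2 x 2 block
        }
    return dst;
}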

Summary

The following list highlights the important features of the Kohonen program that you learned about in this chapter.

  This chapter presented a simple character recognition program using a Kohonen feature map.
  The input vectors and the weight vectors were displayed to show convergence and to note the similarity between the two.
  As training progresses, the weight vector for the winner neuron increasingly resembles the input character map.



Copyright © IDG Books Worldwide, Inc.


