C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



A New Weight Matrix to Recall More Patterns

Let’s continue to discuss this example. Suppose we are interested in having the patterns E = (1, 0, 0, 1) and F = (0, 1, 1, 0) recalled correctly, in addition to the patterns A and B. In this case we would need to train the network with a learning algorithm, a topic we discuss in more detail later in the book. Such training yields the matrix W1, which follows.

              0    -5     4     4
     W1 =    -5     0     4     4
              4     4     0    -5
              4     4    -5     0

Try this modified weight matrix in the source program, then compile and run the program to see that the network successfully recalls all four patterns A, B, E, and F.
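For reference, the new weight matrix might be set up in the source program as a simple two-dimensional array, as in the following sketch (the array name and element type here are illustrative; adapt them to the declarations actually used in the program):

    // Illustrative declaration of the new weight matrix W1; match the
    // name and element type to those used in the book's source program.
    int wt1[4][4] = {
        {  0, -5,  4,  4 },
        { -5,  0,  4,  4 },
        {  4,  4,  0, -5 },
        {  4,  4, -5,  0 }
    };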


NOTE:  The C++ implementation shown does not include the asynchronous update feature mentioned in Chapter 1, which is not necessary for the patterns presented. The coding of this feature is left as an exercise for the reader.

Weight Determination

You may be wondering how the weight matrices in the previous example were developed, since so far we’ve only discussed how the network does its job and how to implement the model. You have learned that the choice of weight matrix is not necessarily unique. But you want to be assured that there is some established way, besides trial and error, to construct a weight matrix. You can go about this in the following way.

Binary to Bipolar Mapping

Let’s look at the previous example. You have seen that by replacing each 0 in a binary string with a –1, you get the corresponding bipolar string: every 1 stays the same, and every 0 becomes a –1. This rule can be expressed as a formula. You can apply the following function to each bit in the string:

      f(x) = 2x – 1 


NOTE:  When you give the binary bit x, you get the corresponding bipolar character f(x).

For inverse mapping, which turns a bipolar string into a binary string, you use the following function:

      f(x) =  (x + 1) / 2 


NOTE:  When you give the bipolar character x, you get the corresponding binary bit f(x).
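
Both mappings are one-line functions in C++. The following minimal sketch (the function names are ours, not taken from the book’s source program) implements f(x) = 2x – 1 and its inverse, and verifies on pattern A that the two functions round-trip:

    #include <iostream>

    // binary (0/1) to bipolar (-1/+1):  f(x) = 2x - 1
    int binary_to_bipolar(int x) { return 2 * x - 1; }

    // bipolar (-1/+1) to binary (0/1):  f(x) = (x + 1) / 2
    int bipolar_to_binary(int x) { return (x + 1) / 2; }

    int main() {
        int pattern_a[4] = {1, 0, 1, 0};   // binary pattern A
        for (int i = 0; i < 4; ++i) {
            int b = binary_to_bipolar(pattern_a[i]);
            std::cout << pattern_a[i] << " -> " << b << " -> "
                      << bipolar_to_binary(b) << "\n";
        }
        return 0;
    }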

Pattern’s Contribution to Weight

Next, we work with the bipolar versions of the input patterns. You take each pattern to be recalled, one at a time, and determine its contribution to the weight matrix of the network. The contribution of each pattern is itself a matrix, of the same size as the weight matrix of the network. You then add these contributions, in the way matrices are added, and you end up with the weight matrix for the network, which is also referred to as the correlation matrix. Let us find the contribution of the pattern A = (1, 0, 1, 0):

First, we notice that the binary to bipolar mapping of A = (1, 0, 1, 0) gives the vector (1, –1, 1, –1).

Then we multiply the transpose of this vector (a column) by the vector itself (a row), the way matrices are multiplied, and we see the following:

       1                             1    -1     1    -1
      -1   [1   -1    1   -1]  =    -1     1    -1     1
       1                             1    -1     1    -1
      -1                            -1     1    -1     1

Now subtract 1 from each element in the main diagonal (that runs from top left to bottom right). This operation gives the same result as subtracting the identity matrix from the given matrix, obtaining 0’s in the main diagonal. The resulting matrix, which is given next, is the contribution of the pattern (1, 0, 1, 0) to the weight matrix.

       0    -1     1    -1
      -1     0    -1     1
       1    -1     0    -1
      -1     1    -1     0

Similarly, we can calculate the contribution from the pattern B = (0, 1, 0, 1). Its bipolar version, (–1, 1, –1, 1), is the negative of A’s, and since the two minus signs cancel in the outer product, pattern B’s contribution is the same matrix as pattern A’s contribution. Adding the two contributions gives the matrix of weights for this exercise, the matrix W shown here.

              0    -2     2    -2
      W =    -2     0    -2     2
              2    -2     0    -2
             -2     2    -2     0

You can now optionally apply an arbitrary scalar multiplier to all the entries of the matrix if you wish; this is how we previously obtained the +/- 3 values in place of the +/- 2 values shown above.
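
The whole procedure is easy to automate. Here is a minimal sketch (not taken from the book’s source program; all names are illustrative) that maps each binary pattern to its bipolar version, accumulates the outer products, and keeps the main diagonal at zero, producing the correlation matrix W for patterns A and B:

    #include <iostream>

    const int SIZE = 4;

    // Build the correlation (weight) matrix from binary patterns:
    // the sum of the outer products of the bipolar versions of the
    // patterns, with the main diagonal held at zero.
    void correlation_matrix(const int patterns[][SIZE], int npatterns,
                            int wt[SIZE][SIZE]) {
        for (int i = 0; i < SIZE; ++i)
            for (int j = 0; j < SIZE; ++j)
                wt[i][j] = 0;
        for (int p = 0; p < npatterns; ++p)
            for (int i = 0; i < SIZE; ++i)
                for (int j = 0; j < SIZE; ++j)
                    if (i != j)   // skip i == j to keep the diagonal 0
                        wt[i][j] += (2 * patterns[p][i] - 1)
                                  * (2 * patterns[p][j] - 1);
    }

    int main() {
        int patterns[2][SIZE] = { {1, 0, 1, 0},     // pattern A
                                  {0, 1, 0, 1} };   // pattern B
        int wt[SIZE][SIZE];
        correlation_matrix(patterns, 2, wt);
        for (int i = 0; i < SIZE; ++i) {            // print W
            for (int j = 0; j < SIZE; ++j)
                std::cout << wt[i][j] << "\t";
            std::cout << "\n";
        }
        return 0;
    }

Running this sketch should print the same matrix W derived by hand above.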




