
C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



Recall of Vectors

When X1 is presented at the input layer, the activation at the output layer will give (–4, 4, 2), to which we apply the thresholding function, which replaces a positive value by 1 and a negative value by 0.

This gives us the vector (0, 1, 1) as the output, which is the same as our Y1. Now Y1 is passed back to the input layer through the feedback connections, and the activation of the input layer becomes the vector (2, –2, 2, 2), which after thresholding gives the output vector (1, 0, 1, 1), the same as X1.

When X2 is presented at the input layer, the activation at the output layer is (2, –2, 2), to which the same thresholding function is applied. This gives the vector (1, 0, 1) as the output, which is the same as Y2. Now Y2 is passed back to the input layer through the feedback connections, giving the input layer activation (–2, 2, 2, –2), which after thresholding gives the output vector (0, 1, 1, 0), which is X2.


The two vector pairs chosen here for encoding worked out fine, and the BAM network with four neurons in Field A and three neurons in Field B is all set for finding a vector under heteroassociation with a given input vector.

Continuation of Example

Let us now use the vector X3 = (1, 1, 1, 1). The vector Y3 = (0, 1, 1) is obtained at the output layer. But the next step, in which we present Y3 in the backward direction, does not produce X3; instead it gives X1 = (1, 0, 1, 1), which we already have associated with Y1. This means that X3 is not associated with any vector in the output space. If, on the other hand, we had obtained some different vector X4 instead of X1, and X4 in the feedforward operation produced yet another Y vector, we would repeat the operation of the network until no changes occur at either end. We would then possibly have a new pair of vectors under the heteroassociation established by this BAM network.

Special Case—Complements

If a pair of (distinct) patterns X and Y is found to be heteroassociated by BAM, and you input the complement of X (the complement being obtained by interchanging the 0’s and 1’s in X), BAM will show that the complement of Y is the pattern associated with the complement of X. An example will be seen in the illustrative run of the program for the C++ implementation of BAM, which follows.

C++ Implementation

In our C++ implementation of a discrete bidirectional associative memory network, we create classes for the neuron and the network. Other classes, called exemplar, assocpair, and potlpair, represent the exemplar pair of vectors, an associated pair of vectors, and a potential pair of vectors for finding a heteroassociation between them, respectively. We could have made one class, pairvect, for a pair of vectors and derived exemplar and the others from it. The network class is declared as a friend class in these other classes. We now present the header and source files, called bamntwrk.h and bamntwrk.cpp. Since we reused our previous code from the Hopfield network of Chapter 4, a few data members of the classes are not put to explicit use in the program. We call the neuron class bmneuron to remind us of BAM.

Program Details and Flow

A neuron in the first layer is referred to as anrn, and the number of neurons in this layer is referred to as anmbr. We give the name bnrn to the array of neurons in the second layer, and bnmbr denotes the size of that array. The sequence of operations in the program is as follows:

  We ask the user to input the exemplar vectors, and we transform them into their bipolar versions. The trnsfrm() function in the exemplar class is for this purpose.
  We give the network the X vector, in its bipolar version, from one exemplar pair. We find the activations of the elements of the bnrn array and get the corresponding output vector as a binary pattern. If this is the Y in the exemplar pair, the network has made the desired association in one direction, and we go on to the next step. Otherwise we have a potential associated pair, one of which is X and the other is what we just got as the output vector in the opposite layer. We say potential associated pair because the next step must confirm the association.
  We run the bnrn array through the transpose of the weight matrix and calculate the outputs of the anrn array elements. If, as a result, we get the vector X as the anrn array, we have found an associated pair, (X, Y). Otherwise, we repeat the two steps just described until we find an associated pair.
  We now work with the next pair of exemplar vectors in the same manner as above, to find an associated pair.
  We assign serial numbers, denoted by the variable idn, to the associated pairs so we can print them all together at the end of the program. The pair is called (X, Y) where X produces Y through the weight matrix W, and Y produces X through the weight matrix which is the transpose of W.
  A flag holds the value 0 until confirmation of the association is obtained, at which point its value changes to 1.
  Functions compr1 and compr2 in the network class verify if the potential pair is indeed an associated pair and set the proper value of the flag mentioned above.
  Functions comput1 and comput2 in the network class carry out the calculations to get the activations and then find the output vector, in the proper directions of the bidirectional associative memory network.



Copyright © IDG Books Worldwide, Inc.


