
C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



Additional Issues

If you desire to use vectors with real-number components instead of binary numbers, you can do so. Your model is then called a Continuous Bidirectional Associative Memory. A matrix W and its transpose are used for the connection weights. However, the matrix W is not formulated as we described so far; it is arbitrarily chosen and kept constant. The thresholding function is also chosen as a continuous function, unlike the step function used before. The changes in the activations of neurons during training are according to extensions of the Cohen-Grossberg paradigm.

Michael A. Cohen and Stephen Grossberg developed their model for autoassociation, with a symmetric matrix of weights. Stability is ensured by the Cohen-Grossberg theorem; there is no learning.
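To make the recall step concrete, here is a minimal C++ sketch of one recall sweep in a continuous BAM. The sigmoid chosen as the continuous thresholding function, the synchronous update, the layer sizes, and the particular constant matrix are all illustrative assumptions of this sketch; the full Cohen-Grossberg activation dynamics are not modeled, and no learning takes place.

#include <cmath>
#include <iostream>
#include <vector>

// One recall sweep of a continuous BAM: activations are real numbers and a
// continuous function (here, an assumed sigmoid) replaces the step threshold.
// The weight matrix W is arbitrarily chosen and kept constant; no learning.
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

int main() {
    // Illustrative constant 4x3 weight matrix (an assumption of this sketch).
    std::vector<std::vector<double>> W = {
        {-0.5,  0.5, 0.0}, { 0.5, -0.5, 0.0},
        { 0.5, -0.5, 0.0}, {-0.5,  0.5, 0.0}};
    std::vector<double> a = {0.9, 0.1, 0.2, 0.8};   // first-layer activations
    std::vector<double> b(3), aNext(4);

    // Forward pass: b = f(a W), using the continuous thresholding function f.
    for (size_t j = 0; j < b.size(); ++j) {
        double sum = 0.0;
        for (size_t i = 0; i < a.size(); ++i) sum += a[i] * W[i][j];
        b[j] = sigmoid(sum);
    }
    // Backward pass: a' = f(b W^T), using the transpose of the same matrix.
    for (size_t i = 0; i < aNext.size(); ++i) {
        double sum = 0.0;
        for (size_t j = 0; j < b.size(); ++j) sum += b[j] * W[i][j];
        aNext[i] = sigmoid(sum);
    }
    for (double v : aNext) std::cout << v << ' ';
    std::cout << '\n';
}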

If, however, you desire the network to learn new pattern pairs by modifying the weights so as to find associations between new pairs, you are designing what is called an Adaptive Bidirectional Associative Memory (ABAM).

The law that governs the weight changes determines the type of ABAM you have: the Competitive ABAM, the Differential Hebbian ABAM, or the Differential Competitive ABAM. Unlike the basic ABAM model, which is of the additive type, the other models use products of the outputs from the two layers, or the derivatives of the thresholding functions, in their weight-change laws.

Here we present a brief description of a model that is a variation of the BAM, called UBBAM (Unipolar Binary Bidirectional Associative Memory).

Unipolar Binary Bidirectional Associative Memory

T. C. B. Yu and R. J. Mears describe a design of a unipolar binary bidirectional associative memory and its implementation with a Smart Advanced Spatial Light Modulator (SASLM). The SASLM device is a ferroelectric liquid crystal spatial light modulator driven by a silicon CMOS backplane. We use their paper to present some features of a unipolar binary bidirectional associative memory, ignoring its hardware implementation.

Recall the procedure by which you determine the weight matrix W for a BAM network, as described in the previous pages. As a first step, you convert each vector in the exemplar pairs into its bipolar version. If X and Y are an exemplar pair (in bipolar versions), you take the product X^T Y and add it to similar products from the other exemplar pairs to get the weight matrix W. Some of the elements of the matrix W may be negative numbers. In the unipolar context you cannot have negative values and positive values at the same time; only one of them is allowed. Suppose you do not want any negative numbers; then one way of remedying the situation is to add a large enough constant to each element of the matrix. You cannot choose to add the constant to only the negative numbers that show up in the matrix. Let us look at an example.

Suppose you choose two pairs of vectors as possible exemplars. Let them be,

 X1 = (1, 0, 0, 1), Y1 = (0, 1, 1) and X2 = (0, 1, 1, 0), Y2 = (1, 0, 1)

These you change into bipolar components and get, respectively, (1, –1, –1, 1), (–1, 1, 1), (–1, 1, 1, –1), and (1, –1, 1). The calculation of W for the BAM was done as follows.

          1                 -1                -1  1  1     -1  1 -1     -2  2  0
W =      -1  [-1 1 1]   +    1  [1 -1 1]  =    1 -1 -1  +   1 -1  1  =   2 -2  0
         -1                  1                 1 -1 -1      1 -1  1      2 -2  0
          1                 -1                -1  1  1     -1  1 -1     -2  2  0

and

        -2    2    2   -2
W^T =    2   -2   -2    2
         0    0    0    0

You see some negative numbers in the matrix W. If you add the constant m = 2 to all the elements in the matrix, you get the following matrix, denoted by W~.

        0    4    2
W~ =    4    0    2
        4    0    2
        0    4    2
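To check the arithmetic, here is a minimal C++ sketch that builds W from the bipolar outer products of the two exemplar pairs above and prints W~, formed by adding the constant m = 2 to every element.

#include <iostream>
#include <vector>

// Build the BAM weight matrix W from the bipolar outer products of the two
// exemplar pairs of the example, then print W~ = W + m.
int main() {
    std::vector<std::vector<int>> X = {{1, 0, 0, 1}, {0, 1, 1, 0}};  // X1, X2
    std::vector<std::vector<int>> Y = {{0, 1, 1}, {1, 0, 1}};        // Y1, Y2
    const int m = 2;

    std::vector<std::vector<int>> W(4, std::vector<int>(3, 0));
    for (size_t p = 0; p < X.size(); ++p)
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 3; ++j)
                // Bipolar version: 0 -> -1, 1 -> +1; accumulate X^T Y.
                W[i][j] += (2 * X[p][i] - 1) * (2 * Y[p][j] - 1);

    for (const auto& row : W) {          // prints the rows of W~
        for (int w : row) std::cout << w + m << ' ';
        std::cout << '\n';               // output: 0 4 2 / 4 0 2 / 4 0 2 / 0 4 2
    }
}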

You need a modification to the thresholding function as well. Earlier you had the following function.

         | 1       if yj > 0                      | 1       if xi > 0
bj|t+1 = | bj|t    if yj = 0      and    ai|t+1 = | ai|t    if xi = 0
         | 0       if yj < 0                      | 0       if xi < 0

Now you need to change the right-hand sides of the conditions from 0 to m times the sum of the inputs of the neurons. For brevity, let us use Si for the sum of the inputs. Si is not a constant; its value depends on the inputs presented in any one direction. The reason this works is that every element of W~ exceeds the corresponding element of W by m, so each activation computed with W~ exceeds the activation computed with W by exactly mSi; comparing against mSi is therefore equivalent to comparing the original activation against 0. The thresholding function can then be given as follows:

         | 1       if yj > mSi                      | 1       if xi > mSi
bj|t+1 = | bj|t    if yj = mSi      and    ai|t+1 = | ai|t    if xi = mSi
         | 0       if yj < mSi                      | 0       if xi < mSi

For example, the vector X1 = (1, 0, 0, 1) has the activation vector (0, 8, 4), and the corresponding output vector is (0, 1, 1), which is Y1 = (0, 1, 1). The value of Si is 1 + 0 + 0 + 1 = 2, so mSi = 4. The first component of the activation vector is 0, which is smaller than 4, so the first component of the output vector is 0. The second component of the activation vector is 8, which is larger than 4, so the second component of the output vector is 1. The last component of the activation vector is 4, which is equal to mSi, so you use the third component of the vector Y1 = (0, 1, 1), namely 1, as the third component of the output.
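Here is a minimal C++ sketch of this recall step, using the W~ of the example. Recall begins from the exemplar pair (X1, Y1), so the previous output consulted when an activation equals mSi is Y1.

#include <iostream>
#include <vector>

// One forward recall step of the unipolar binary BAM: compute the activations
// with W~ and threshold against m * Si, where Si is the sum of the 0/1 inputs.
// When an activation equals m * Si, the previous output component is kept.
int main() {
    std::vector<std::vector<int>> Wtilde = {   // W~ from the example
        {0, 4, 2}, {4, 0, 2}, {4, 0, 2}, {0, 4, 2}};
    std::vector<int> x = {1, 0, 0, 1};         // X1
    std::vector<int> bPrev = {0, 1, 1};        // previous output, here Y1
    const int m = 2;

    int Si = 0;                                // sum of the inputs
    for (int xi : x) Si += xi;                 // Si = 2, so m * Si = 4

    std::vector<int> b(3);
    for (int j = 0; j < 3; ++j) {
        int yj = 0;                            // activation of output neuron j
        for (size_t i = 0; i < x.size(); ++i) yj += x[i] * Wtilde[i][j];
        if      (yj > m * Si) b[j] = 1;
        else if (yj < m * Si) b[j] = 0;
        else                  b[j] = bPrev[j]; // keep previous on equality
    }
    for (int v : b) std::cout << v << ' ';     // prints 0 1 1, which is Y1
    std::cout << '\n';
}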

