C++ Neural Networks and Fuzzy Logic by Valluru B. Rao M&T Books, IDG Books Worldwide, Inc. ISBN: 1558515526 Pub Date: 06/01/95


The ART1 equations are not easy to follow. We follow the description of the algorithm found in James A. Freeman and David M. Skapura. The following equations, taken in the order given, describe the steps in the algorithm. Note that **binary** input patterns are used in ART1.

w_{ij} should be positive and less than L / ( m - 1 + L )

v_{ji} should be greater than ( B - 1 ) / D

a_{i} = -B / ( 1 + C )
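As a concrete illustration, the initialization constraints above can be satisfied in code as follows. This is a minimal sketch of our own, not part of the book's program; the parameter names (L, m, B, C, D) follow the notation of this section, and the helper functions and the chosen margins are our assumptions:

```cpp
// Hypothetical helpers producing initial values that satisfy the
// ART1 constraints stated above.

// Bottom-up weights w_ij: positive and below L / (m - 1 + L);
// half the bound is a safe choice. m is the number of F1 neurons.
double initBottomUp(double L, int m) {
    return 0.5 * L / (m - 1 + L);
}

// Top-down weights v_ji: must exceed (B - 1) / D; add a small margin.
double initTopDown(double B, double D) {
    return (B - 1.0) / D + 0.1;
}

// F1 activations before any input arrives: a_i = -B / (1 + C).
double initActivation(double B, double C) {
    return -B / (1.0 + C);
}
```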

As you read the equations for the ART1 computations below, keep the following in mind. If a subscript *i* appears on the left-hand side of an equation, there are *m* such equations, as *i* varies from 1 to *m*. Similarly, if instead a subscript *j* occurs, there are *n* such equations, as *j* ranges from 1 to *n*. The equations are used in the order they are given; together they form a step-by-step description of the algorithm. All the variables, you recall, are defined in the earlier section on notation. For example, I is the input vector.

**F _{1} layer calculations:**

a_{i} = I_{i} / ( 1 + A ( I_{i} + B ) + C )

x_{i} = 1 if a_{i} > 0
      = 0 if a_{i} ≤ 0
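This first F_{1} pass might look as follows in code. This is a simplified sketch of our own (the book's program does this inside its `comput1()` member function); since the input is binary, the output x simply echoes the input on this pass:

```cpp
#include <vector>

// First F1-layer pass: a_i = I_i / (1 + A*(I_i + B) + C),
// followed by the threshold function on a_i.
std::vector<int> f1Output(const std::vector<int>& I,
                          double A, double B, double C) {
    std::vector<int> x(I.size());
    for (std::size_t i = 0; i < I.size(); ++i) {
        double a = I[i] / (1.0 + A * (I[i] + B) + C);
        x[i] = (a > 0.0) ? 1 : 0;
    }
    return x;
}
```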

**F _{2} layer calculations:**

b_{j} = Σ w_{ij} x_{i}, the summation being on i from 1 to m

y_{j} = 1 if the jth neuron has the largest activation value in the F_{2} layer
      = 0 if the jth neuron is not the winner in the F_{2} layer
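The F_{2} competition can be sketched as below. This is our own minimal version, not the book's `winner()` member function; `w[i][j]` holds the bottom-up weight from F_{1} neuron i to F_{2} neuron j:

```cpp
#include <vector>

// Compute b_j = sum_i w[i][j] * x_i for each F2 neuron and return the
// index of the neuron with the largest activation (the winner).
// The implied outputs are y_j = 1 for the winner, 0 for all others.
int f2Winner(const std::vector<std::vector<double>>& w,  // w[i][j]
             const std::vector<int>& x) {
    std::size_t n = w[0].size();
    int winr = 0;
    double best = -1.0;
    for (std::size_t j = 0; j < n; ++j) {
        double b = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i)
            b += w[i][j] * x[i];
        if (b > best) { best = b; winr = static_cast<int>(j); }
    }
    return winr;
}
```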

**Top-down inputs:**

z_{i} = Σ v_{ji} y_{j}, the summation being on j from 1 to n. (You will notice that exactly one term is nonzero.)
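Because only the winner r has y_{r} = 1, the summation collapses to reading off one row of the top-down weight matrix. A sketch (our own helper, with `v[j][i]` holding the top-down weight from F_{2} neuron j to F_{1} neuron i):

```cpp
#include <vector>

// Top-down input z_i = sum_j v[j][i] * y_j. With y_j nonzero only for
// the winner winr, z is simply row winr of the top-down matrix.
std::vector<double> topDown(const std::vector<std::vector<double>>& v,
                            int winr) {
    return v[winr];
}
```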

**F _{1} layer calculations:**

a_{i} = ( I_{i} + D z_{i} - B ) / ( 1 + A ( I_{i} + D z_{i} ) + C )

x_{i} = 1 if a_{i} > 0
      = 0 if a_{i} ≤ 0
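The second F_{1} pass, now with the top-down signal z, can be sketched as follows (again our own simplified version, not the book's code). With typical parameter values, a_{i} is positive only when both I_{i} = 1 and z_{i} is sizable, which is how the 2/3 rule shows up in the equations:

```cpp
#include <vector>

// Second F1 pass: a_i = (I_i + D*z_i - B) / (1 + A*(I_i + D*z_i) + C),
// then the threshold function. x_i ends up 1 only where the input and
// the top-down signal agree (the 2/3 rule in action).
std::vector<int> f1WithTopDown(const std::vector<int>& I,
                               const std::vector<double>& z,
                               double A, double B, double C, double D) {
    std::vector<int> x(I.size());
    for (std::size_t i = 0; i < I.size(); ++i) {
        double s = I[i] + D * z[i];
        double a = (s - B) / (1.0 + A * s + C);
        x[i] = (a > 0.0) ? 1 : 0;
    }
    return x;
}
```

For example, with A = 1, B = 1.5, C = 5, D = 0.9, an input I = (1, 1, 0) and top-down signal z = (1, 0, 1) give x = (1, 0, 0): only the first component has both I_{i} and z_{i} active.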

**Checking with vigilance parameter:**

If ( *S _{x}* / *S _{I}* ) < ρ, the winner is rejected: the output of the winning F_{2} neuron is set to 0, that neuron is excluded from further competition for the current input pattern, and the search resumes with the F_{2} layer calculations. (Recall that *S _{x}* and *S _{I}* denote the sums of the components of x and I, respectively.)

If ( *S _{x}* / *S _{I}* ) ≥ ρ, resonance has occurred, and we proceed to modify the connection weights.
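The vigilance test itself is a one-line comparison; here is a sketch (our own helper, not the book's `inqreset()`):

```cpp
#include <vector>
#include <numeric>

// Vigilance test: S_x / S_I >= rho means resonance (accept the winner);
// otherwise the winner is rejected and the search continues.
bool passesVigilance(const std::vector<int>& x,
                     const std::vector<int>& I, double rho) {
    double sx = std::accumulate(x.begin(), x.end(), 0);
    double si = std::accumulate(I.begin(), I.end(), 0);
    return si > 0.0 && (sx / si) >= rho;
}
```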

**Modifying top-down and bottom-up connection weights for winner r:**

v_{ri} = L / ( S_{x} + L - 1 ) if x_{i} = 1
       = 0 if x_{i} = 0

w_{ir} = 1 if x_{i} = 1
       = 0 if x_{i} = 0
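The weight update touches only the winner's row of the top-down matrix and column of the bottom-up matrix. A sketch following the equations above (our own helper, not the book's `adjwts1()`/`adjwts2()`; `v[j][i]` is top-down, `w[i][j]` is bottom-up):

```cpp
#include <vector>

// Update the winner r's weights: S_x is the number of 1s in x.
// Top-down v[r][i] gets L / (S_x + L - 1) where x_i = 1, else 0;
// bottom-up w[i][r] gets 1 where x_i = 1, else 0.
void updateWinnerWeights(std::vector<std::vector<double>>& v,
                         std::vector<std::vector<double>>& w,
                         const std::vector<int>& x, int r, double L) {
    int sx = 0;
    for (int xi : x) sx += xi;
    for (std::size_t i = 0; i < x.size(); ++i) {
        v[r][i] = (x[i] == 1) ? L / (sx + L - 1.0) : 0.0;
        w[i][r] = (x[i] == 1) ? 1.0 : 0.0;
    }
}
```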

Having finished with the current input pattern, we repeat these steps with a new input pattern. The index *r* given to the winning neuron is dropped, and all neurons in the F_{2} layer are again treated with their original indices (subscripts).

We hope the above presentation makes each step of the algorithm as clear as possible; the process is rather involved. To recapitulate: first an input vector is presented to the F_{1} layer neurons, their activations are determined, and the **threshold** function is applied. The outputs of the F_{1} layer neurons constitute the inputs to the F_{2} layer neurons, from which a winner is designated on the basis of the largest activation. Only the winner is allowed to be active, meaning that the output is 1 for the winner and 0 for all the rest. The equations implicitly incorporate the 2/3 rule that we mentioned earlier, and they also incorporate the way the gain control is used. The gain control is designed to have the value 1 during the phase of determining the activations of the neurons in the F_{2} layer, and 0 if either there is no input vector or the output from the F_{2} layer is being propagated to the F_{1} layer.

Extensions of the ART1 model, which is for binary patterns, are ART2 and ART3. Of these, ART2 categorizes and stores analog-valued patterns as well as binary patterns, while ART3 addresses computational problems of hierarchies.

Again, the algorithm for ART1 processing as given in Freeman and Skapura is followed for our C++ implementation. Our objective in programming ART1 is to provide a feel for the workings of this paradigm with a very simple program implementation. For more details on the inner workings of ART1, you are encouraged to consult Freeman and Skapura, or other references listed at the back of the book.

The header file for the C++ program for the ART1 model network is art1net.hpp. It contains the declarations for two classes: an **artneuron** class for neurons in the ART1 model, and a **network** class, which is declared as a **friend** class in the **artneuron** class. Functions declared in the **network** class include one to run the iterations of the network operation, one to find the winner in a given iteration, and one to inquire whether a reset is needed.

    //art1net.h  V. Rao, H. Rao
    //Header file for ART1 model network program

    #include <iostream.h>
    #define MXSIZ 10

    class artneuron
    {
    protected:
        int nnbr;
        int inn, outn;
        int output;
        double activation;
        double outwt[MXSIZ];
        char *name;
        friend class network;

    public:
        artneuron() { };
        void getnrn(int, int, int, char *);
    };

    class network
    {
    public:
        int anmbr, bnmbr, flag, ninpt, sj, so, winr;
        float ai, be, ci, di, el, rho;
        artneuron (anrn)[MXSIZ], (bnrn)[MXSIZ];
        int outs1[MXSIZ], outs2[MXSIZ];
        int lrndptrn[MXSIZ][MXSIZ];
        double acts1[MXSIZ], acts2[MXSIZ];
        double mtrx1[MXSIZ][MXSIZ], mtrx2[MXSIZ][MXSIZ];

        network() { };
        void getnwk(int, int, float, float, float, float, float);
        void prwts1();
        void prwts2();
        int winner(int k, double *v, int);
        void practs1();
        void practs2();
        void prouts1();
        void prouts2();
        void iterate(int *, float, int);
        void asgninpt(int *);
        void comput1(int);
        void comput2(int *);
        void prlrndp();
        void inqreset(int);
        void adjwts1();
        void adjwts2();
    };


Copyright © IDG Books Worldwide, Inc.

Authors: Valluru B. Rao, Hayagriva Rao