C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



Although both of these implementations are valid, the second is particularly useful, since it adds a term that is significant for all patterns and hence contributes to global error reduction. We implement the second choice by accumulating the current cycle's weight changes in a vector called cum_deltas. The previous cycle's weight changes are stored in a vector called past_deltas. Both appear in the following portion of the layer.h file.

class output_layer:   public layer
{
protected:
       float * weights;
       float * output_errors; // array of errors at output
       float * back_errors;   // array of errors back-propagated
       float * expected_values;      // to inputs
       float * cum_deltas;    // for momentum
       float * past_deltas;   // for momentum

friend network;
...
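Note that weights, cum_deltas, and past_deltas are each allocated as a flat array of num_inputs*num_outputs floats. As a minimal sketch (not from the book), the row-major index arithmetic you will see in every loop below, where the (i, j) entry lives at offset i*num_outputs + j, can be written as:

// Row-major offset into a flat num_inputs x num_outputs matrix;
// this mirrors the k=i*num_outputs ... k+j pattern in the class code.
inline int matrix_index(int i, int j, int num_outputs)
{
return i*num_outputs + j;
}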

Changes to the layer.cpp File

The implementation file for the layer class changes in the output_layer::update_weights() routine and in the constructor and destructor for output_layer. First, here is the constructor for output_layer; the changes are the lines that allocate, check, and zero the cum_deltas and past_deltas vectors.

output_layer::output_layer(int ins, int outs)
{
int i, j, k;
num_inputs=ins;
num_outputs=outs;

weights = new float[num_inputs*num_outputs];
output_errors = new float[num_outputs];
back_errors = new float[num_inputs];
outputs = new float[num_outputs];
expected_values = new float[num_outputs];
cum_deltas = new float[num_inputs*num_outputs];
past_deltas = new float[num_inputs*num_outputs];

if ((weights==0)||(output_errors==0)||(back_errors==0)
       ||(outputs==0)||(expected_values==0)
       ||(past_deltas==0)||(cum_deltas==0))
       {
       cout << "not enough memory\n";
       cout << "choose a smaller architecture\n";
       exit(1);
       }

// zero cum_deltas and past_deltas matrix
for (i=0; i< num_inputs; i++)
       {
       k=i*num_outputs;
       for (j=0; j< num_outputs; j++)
              {
              cum_deltas[k+j]=0;
              past_deltas[k+j]=0;
              }
       }
}

The destructor simply deletes the new vectors:

output_layer::~output_layer()
{
// some compilers may require the array
// size in the delete statement; those
// conforming to ANSI C++ will not
delete [num_outputs*num_inputs] weights;
delete [num_outputs] output_errors;
delete [num_inputs] back_errors;
delete [num_outputs] outputs;
delete [num_outputs*num_inputs] past_deltas;
delete [num_outputs*num_inputs] cum_deltas;
}
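For a compiler that does conform to ANSI C++, the array sizes in the delete expressions are unnecessary; a sketch of the equivalent destructor would be:

output_layer::~output_layer()
{
// ANSI C++ form: delete [] takes no size
delete [] weights;
delete [] output_errors;
delete [] back_errors;
delete [] outputs;
delete [] past_deltas;
delete [] cum_deltas;
}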

Now let’s look at the update_weights() routine changes:

void output_layer::update_weights(const float beta,
                                  const float alpha)
{
int i, j, k;
float delta;

// learning law: weight_change =
//             beta*output_error*input + alpha*past_delta

for (i=0; i< num_inputs; i++)
       {
       k=i*num_outputs;
       for (j=0; j< num_outputs; j++)
              {
              delta=beta*output_errors[j]*(*(inputs+i))
                     +alpha*past_deltas[k+j];
              weights[k+j] += delta;
              cum_deltas[k+j]+=delta; // current cycle
              }
       }
}
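To see what the momentum term contributes, here is a minimal, self-contained sketch (the numeric values are invented for illustration) that computes one weight change with and without momentum:

#include <iostream.h>

int main()
{
float beta=0.1;         // learning rate
float alpha=0.9;        // momentum coefficient
float output_error=0.5; // error at one output neuron
float input=0.8;        // activation of one input neuron
float past_delta=0.02;  // this weight's change, summed over the last cycle

float plain = beta*output_error*input;   // without momentum
float delta = plain + alpha*past_delta;  // with momentum

cout << "without momentum: " << plain << "\n";  // 0.04
cout << "with momentum:    " << delta << "\n";  // 0.058
return 0;
}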

The change to the training law amounts to calculating a delta and adding it to the cumulative total of weight changes in cum_deltas. At some point (at the start of a new cycle) you need to set the past_deltas vector to the cum_deltas vector. Where does this occur? Since the layer has no concept of a cycle, this must be done at the network level. A network-level function called update_momentum() is called at the beginning of each cycle; it in turn calls a layer-level function of the same name. The layer-level function swaps the past_deltas vector with the cum_deltas vector and reinitializes the cum_deltas vector to zero. We need to return to the layer.h file to see the changes needed to declare these two functions.

class output_layer:   public layer
{
protected:
       float * weights;
       float * output_errors; // array of errors at output
       float * back_errors;   // array of errors back-propagated
       float * expected_values;     // to inputs
       float * cum_deltas;    // for momentum
       float * past_deltas;   // for momentum

friend network;

public:
       output_layer(int, int);
       ~output_layer();
       virtual void calc_out();
       void calc_error(float &);
       void randomize_weights();
       void update_weights(const float, const float);
       void update_momentum();
       void list_weights();
       void write_weights(int, FILE *);
       void read_weights(int, FILE *);
       void list_errors();
       void list_outputs();
};

class network
{
private:
    layer *layer_ptr[MAX_LAYERS];
    int number_of_layers;
    int layer_size[MAX_LAYERS];
    float *buffer;
    fpos_t position;
    unsigned training;

public:
    network();
    ~network();
    void set_training(const unsigned &);
    unsigned get_training_value();
    void get_layer_info();
    void set_up_network();
    void randomize_weights();
    void update_weights(const float, const float);
    void update_momentum();
    ...

At both the network and output_layer class levels, a function prototype for the update_momentum() member function has been added. The implementations of these functions, from the layer.cpp file, are shown as follows.

void output_layer::update_momentum()
{
// This function is called when a
// new cycle begins; the past_deltas
// pointer is swapped with the
// cum_deltas pointer. Then the contents
// pointed to by the cum_deltas pointer
// is zeroed out.
int i, j, k;
float * temp;

// swap
temp = past_deltas;
past_deltas=cum_deltas;
cum_deltas=temp;

// zero cum_deltas matrix
// for new cycle
for (i=0; i< num_inputs; i++)
       {
       k=i*num_outputs;
       for (j=0; j< num_outputs; j++)
              cum_deltas[k+j]=0;
       }
}

void network::update_momentum()
{
int i;

for (i=1; i<number_of_layers; i++)
       ((output_layer *)layer_ptr[i])
               ->update_momentum();
}
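The pointer swap is worth noting as a design choice: rather than copying num_inputs*num_outputs floats from cum_deltas into past_deltas, the routine simply exchanges the two pointers, which costs a constant amount of work regardless of layer size. Here is a minimal standalone sketch (not from the book) of the same swap-and-zero pattern on small arrays:

#include <iostream.h>

int main()
{
float a[3]={0.1, 0.2, 0.3};   // plays the role of cum_deltas
float b[3]={0, 0, 0};         // plays the role of past_deltas
float *cum=a, *past=b;
float *temp;
int j;

// start of a new cycle: swap the pointers,
// then zero the new cum vector
temp = past;
past = cum;
cum = temp;
for (j=0; j<3; j++)
       cum[j]=0;

// past now holds last cycle's totals; cum is clean
for (j=0; j<3; j++)
       cout << past[j] << " ";  // prints 0.1 0.2 0.3
cout << "\n";
return 0;
}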

