
C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



Adjustments to Threshold Values or Biases

The bias or threshold value we added to the activation, before applying the threshold function to get the output of a neuron, is also adjusted based on the error being propagated back. The values needed for this adjustment were computed in the preceding discussion.

The adjustment for the threshold value of a neuron in the output layer is obtained by multiplying the calculated error (not just the difference) in the output at that neuron by the learning rate parameter used in the weight adjustments for this layer. In our previous example, the learning rate parameter is 0.2 and the error vector is (–0.02, –0.04, 0.04, 0.11), so the adjustments to the threshold values of the four output neurons are given by the vector (–0.004, –0.008, 0.008, 0.022). These adjustments are added to the current threshold values at the output neurons.

The adjustment to the threshold value of a neuron in the hidden layer is obtained similarly, by multiplying the learning rate by the computed error in the output of that hidden layer neuron. Therefore, for the second neuron in the hidden layer, the adjustment to its threshold value is 0.15 * –0.0041, which is –0.0006. Adding this to the current threshold value of 0.679 gives 0.6784, which is used for this neuron with the next training pattern presented to the neural network.

Another Example of Backpropagation Calculations

You have seen, in the preceding sections, the details of the calculations for one particular neuron in the hidden layer of a feedforward backpropagation network with five input neurons, two neurons in the hidden layer, and four neurons in the output layer.

You will see all the calculations in the C++ implementation later in this chapter. Right now, though, we present another example and give the complete picture of the calculations done in one complete iteration, or cycle, of backpropagation.

Consider a feedforward backpropagation network with three input neurons, two neurons in the hidden layer, and three output neurons. The weights on connections from the input neurons to the neurons in the hidden layer are given in Matrix M-1, and those from the neurons in the hidden layer to output neurons are given in Matrix M-2.

We calculate the output of each neuron in the hidden and output layers as follows: we add a bias or threshold value to the activation of the neuron (call this result x) and apply the sigmoid function below to get the output.

      f(x) = 1 / (1 + e^(-x))

The learning rate parameters used are 0.2 for the connections between the hidden layer neurons and the output neurons, and 0.15 for the connections between the input neurons and the hidden layer neurons. These values, as you recall, are the same as in the previous illustration, to make it easy for you to follow the calculations by comparing them with the similar calculations in the preceding sections.

The input pattern is (0.52, 0.75, 0.97), and the desired output pattern is (0.24, 0.17, 0.65). The initial weight matrices are as follows:

M-1 Matrix of weights from input layer to hidden layer

         0.6    -0.4
         0.2     0.8
        -0.5     0.3

M-2 Matrix of weights from hidden layer to output layer

        -0.90    0.43    0.25
         0.11   -0.67   -0.75

The threshold values (or bias) for neurons in the hidden layer are 0.2 and 0.3, while those for the output neurons are 0.15, 0.25, and 0.05, respectively.

Table 7.1 presents all the results of calculations done in the first iteration. You will see modified or new weight matrices and threshold values. You will use these and the original input vector and the desired output vector to carry out the next iteration.

Table 7.1 Backpropagation Calculations

Item                         I-1     I-2     I-3     H-1      H-2      O-1      O-2      O-3
Input                        0.52    0.75    0.97
Desired Output                                                         0.24     0.17     0.65
M-1 Row 1                                            0.6     -0.4
M-1 Row 2                                            0.2      0.8
M-1 Row 3                                           -0.5      0.3
M-2 Row 1                                                             -0.90     0.43     0.25
M-2 Row 2                                                              0.11    -0.67    -0.75
Threshold                                            0.2      0.3      0.15     0.25     0.05
Activation -H                                       -0.023    0.683
Activation + Threshold -H                            0.177    0.983
Output -H                                            0.544    0.728
Complement                                           0.456    0.272
Activation -O                                                         -0.410   -0.254   -0.410
Activation + Threshold -O                                             -0.260   -0.004   -0.360
Output -O                                                              0.435    0.499    0.411
Complement                                                             0.565    0.501    0.589
Diff. from Target                                                     -0.195   -0.329    0.239
Computed Error -O                                                     -0.048   -0.082    0.058
Computed Error -H                                    0.0056   0.0012
Adjustment to Threshold                              0.0008   0.0002  -0.0096  -0.0164   0.0116
Adjustment to M-2 Column 1                          -0.0005  -0.0070
Adjustment to M-2 Column 2                           0.0007   0.0008
Adjustment to M-2 Column 3                           0.0008   0.0011
New Matrix M-2 Row 1                                                  -0.91     0.412    0.262
New Matrix M-2 Row 2                                                   0.096   -0.694   -0.734
New Threshold Values -O                                                0.1404   0.2336   0.0616
Adjustment to M-1 Row 1                              0.0004  -0.0001
Adjustment to M-1 Row 2                              0.0006   0.0001
Adjustment to M-1 Row 3                              0.0008   0.0002
New Matrix M-1 Row 1                                 0.6004  -0.4
New Matrix M-1 Row 2                                 0.2006   0.8001
New Matrix M-1 Row 3                                -0.4992   0.3002
New Threshold Values -H                              0.2008   0.3002

