# 52. C++ Neural Networks and Fuzzy Logic by Valluru B. Rao M&T Books, IDG Books Worldwide, Inc. ISBN: 1558515526   Pub Date: 06/01/95

#### Equations

Output of jth hidden layer neuron:

y_j = f( (Σ_i x_i M1[i][j]) + θ_j )

(7.1)

Output of jth output layer neuron:

z_j = f( (Σ_i y_i M2[i][j]) + τ_j )

(7.2)

ith component of the vector of output differences:

desired value - computed value = P_i - z_i

ith component of output error at the output layer:

e_i = ( P_i - z_i )

(7.3)

ith component of output error at the hidden layer:

t_i = y_i (1 - y_i) (Σ_j M2[i][j] e_j)

(7.4)
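Equations 7.1 through 7.4 can be sketched in C++ as follows. This is an illustrative sketch, not the simulator code developed later in the chapter: the sigmoid choice for f, the `layer_output` and `hidden_error` names, and the vector-based types are all assumptions made here for clarity.

```cpp
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;
using Vec = std::vector<double>;

// Sigmoid assumed as the activation f (an assumption; the text has not
// fixed the form of f at this point).
inline double f(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Equations 7.1 / 7.2: output of one layer given inputs x, weight
// matrix W (W[i][j] connects input i to neuron j), and biases b.
Vec layer_output(const Vec& x, const Matrix& W, const Vec& b) {
    Vec y(b.size());
    for (std::size_t j = 0; j < b.size(); ++j) {
        double s = b[j];
        for (std::size_t i = 0; i < x.size(); ++i) s += x[i] * W[i][j];
        y[j] = f(s);
    }
    return y;
}

// Equation 7.4: hidden-layer error t_i = y_i (1 - y_i) * Σ_j M2[i][j] e_j,
// where e_j = P_j - z_j is the output-layer error from equation 7.3.
Vec hidden_error(const Vec& y, const Matrix& M2, const Vec& e) {
    Vec t(y.size());
    for (std::size_t i = 0; i < y.size(); ++i) {
        double s = 0.0;
        for (std::size_t j = 0; j < e.size(); ++j) s += M2[i][j] * e[j];
        t[i] = y[i] * (1.0 - y[i]) * s;
    }
    return t;
}
```

Calling `layer_output` twice, first with the input vector and M1, then with the hidden outputs and M2, performs the full forward pass of the two-layer network.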

Adjustment for weight between ith neuron in hidden layer and jth output neuron:

ΔM2[i][j] = β_o y_i e_j

(7.5)

Adjustment for weight between ith input neuron and jth neuron in hidden layer:

ΔM1[i][j] = β_h x_i t_j

(7.6)

Adjustment to the threshold value or bias for the jth output neuron:

Δτ_j = β_o e_j

Adjustment to the threshold value or bias for the jth hidden layer neuron:

Δθ_j = β_h t_j

To use the momentum parameter α (more on this parameter in Chapter 13), replace equations 7.5 and 7.6 with:

ΔM2[i][j] ( t ) = β_o y_i e_j + α ΔM2[i][j] ( t - 1 )

(7.7)

and

ΔM1[i][j] ( t ) = β_h x_i t_j + α ΔM1[i][j] ( t - 1 )

(7.8)

### C++ Implementation of a Backpropagation Simulator

The backpropagation simulator of this chapter has the following design objectives:

1.  Allow the user to specify the number and size of all layers.
2.  Allow the use of one or more hidden layers.
3.  Be able to save and restore the state of the network.
4.  Run from an arbitrarily large training data set or test data set.
5.  Query the user for key network and simulation parameters.
6.  Display key information at the end of the simulation.
7.  Demonstrate the use of some C++ features.

#### A Brief Tour of How to Use the Simulator

To understand the C++ code, let us first take an overview of how the program functions.

The simulator has two modes of operation: Training mode and Nontraining mode (Test mode). The user is queried first for the desired mode.

**Training Mode**

Here, the user provides a training file in the current directory called training.dat. This file contains exemplar pairs, or patterns. Each pattern has a set of inputs followed by a set of outputs. Each value is separated by one or more spaces. As a convention, you can use a few extra spaces to separate the inputs from the outputs. Here is an example of a training.dat file that contains two patterns:

```
0.4 0.5 0.89    -0.4 -0.8
0.23 0.8 -0.3    0.6 0.34
```

In this example, the first pattern has inputs 0.4, 0.5, and 0.89, with expected outputs of -0.4 and -0.8. The second pattern has inputs of 0.23, 0.8, and -0.3 and outputs of 0.6 and 0.34. Since there are three inputs and two outputs, the input layer size for the network must be three neurons and the output layer size must be two neurons.

Another file used in training is the weights file. Once the simulator reaches the error tolerance specified by the user, or the maximum number of iterations, it saves the state of the network by writing all of its weights to a file called weights.dat. This file can then be used in a subsequent run of the simulator in Nontraining mode.

To give some idea of how the network has done, information about the total and average error is presented at the end of the simulation. In addition, the output generated by the network for the last pattern vector is written to an output file called output.dat.

**Nontraining Mode (Test Mode)**

In this mode, the user provides test data to the simulator in a file called test.dat. This file contains only input patterns. When this file is applied to an already trained network, an output.dat file is generated, which contains the outputs from the network for all of the input patterns. The network goes through one cycle of operation in this mode, covering all the patterns in the test data file. To start up the network, the weights file, weights.dat, is read to initialize the state of the network. The user must provide the same network size parameters that were used to train the network.