C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526 Pub Date: 06/01/95
When you input the coordinates of the vertex G, which has 1 for each coordinate, the first hidden-layer neuron aggregates these inputs and gets a value of 2.2. Since 2.2 is greater than the threshold value of the first neuron in the hidden layer, that neuron fires, and its output of 1 becomes an input to the output neuron on the connection with weight 0.6. But you need the activations of the other hidden-layer neurons as well. Let us describe the performance of the network with the coordinates of G as the inputs; Table 5.7 describes this.
Table 5.7

| Hidden Layer Neuron # | Weighted Sum | Comment | Activation | Contribution to Output | Sum |
|---|---|---|---|---|---|
The weighted sum at the output neuron is 0.6, and it is greater than the threshold value 0.5. Therefore, the output neuron fires, and at the vertex G, the function is evaluated to have a value of +1.
Table 5.8 shows the performance of the network with the rest of the vertices of the cube. You will notice that the network computes a value of +1 at the vertices O, A, F, and G, and a –1 at the rest.
Table 5.8

| Vertex (inputs) | Hidden Layer Neuron # | Weighted Sum | Comment | Activation | Contribution to Output |
|---|---|---|---|---|---|
| O: 0, 0, 0 | 1 | 0 | <1.8 | 0 | 0 |
| A: 0, 0, 1 | 1 | 0.2 | <1.8 | 0 | 0 |
| B: 0, 1, 0 | 1 | 1 | <1.8 | 0 | 0 |
| C: 0, 1, 1 | 1 | 1.2 | <1.8 | 0 | 0 |
| D: 1, 0, 0 | 1 | 1 | <1.8 | 0 | 0 |
| E: 1, 0, 1 | 1 | 1.2 | <1.8 | 0 | 0 |
| F: 1, 1, 0 | 1 | 2 | >1.8 | 1 | 0.6* |
*The output neuron fires, as this value is greater than 0.5 (the threshold value); the function value is +1.
Many important neural network models have two layers. The feedforward backpropagation network, in its simplest form, is one example. Grossberg and Carpenter's ART1 paradigm uses a two-layer network. The Counterpropagation network has a Kohonen layer followed by a Grossberg layer. The Bidirectional Associative Memory (BAM), the Boltzmann Machine, Fuzzy Associative Memory, and Temporal Associative Memory are other two-layer networks. For autoassociation, a single-layer network can do the job, but for heteroassociation or other such mappings, you need at least a two-layer network. We will give more details on these models shortly.
Kunihiko Fukushima's Neocognitron, noted for identifying handwritten characters, is an example of a network with several layers. Some of the previously mentioned networks can also be made multilayer by adding more hidden layers. It is also possible to combine two or more neural networks into one by creating appropriate connections between the layers of one subnetwork and those of the others. The result would certainly be a multilayer network.
You have already seen some differences in the way connections are made between neurons in a neural network. In the Hopfield network, every neuron was connected to every other neuron in the single layer present in the network. In the Perceptron, neurons within the same layer were not connected to one another; instead, connections ran between the neurons in one layer and those in the next layer. In the former case, the connections are described as lateral. In the latter case, the connections are forward, and the signals are fed forward within the network.
Two other possibilities also exist. First, the neurons in a layer may have extra connections, with each neuron connected to itself. Second, there may be connections from the neurons in one layer to the neurons in a previous layer, in which case signals are fed both forward and backward. This occurs if feedback is a feature of the network model. The layout of the network's neurons and the types of connections between them constitute the architecture of the particular neural network model.
Copyright © IDG Books Worldwide, Inc.