
3.3 Counter-propagation networks

Counter-propagation networks can be regarded as an extension of the SOM network. A counter-propagation network is made up of three layers. The first two layers (the input and competitive layers) perform the same functions as the two layers of the SOM (Section 3.2). The third layer (called the Grossberg layer) is fully connected to the competitive layer. This structure extends the SOM, which is trained without supervision, by providing a means of supervised training. Hence, each training sample presented to this kind of network contains both an input pattern and the corresponding output vector. The structure of a counter-propagation network may appear similar to that of a multilayer perceptron, but the two should be treated as different kinds of network: unlike a multilayer perceptron, a counter-propagation network has only one hidden layer, and its training rules are quite different.
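The three-layer activation flow described above can be sketched in code. This is a minimal illustration, not the book's notation: the matrix names `W_in` and `W_out` and the winner-take-all rule are assumptions for the sketch.

```python
import numpy as np

def forward(x, W_in, W_out):
    """Forward pass through a counter-propagation network (sketch).

    Layers: input -> competitive (Kohonen) -> Grossberg output.
    Rows of W_in are the weight vectors of the hidden neurones;
    columns of W_out hold the output weights of each hidden neurone.
    """
    scores = W_in @ x               # competitive-layer activations
    h = np.zeros(len(scores))
    h[np.argmax(scores)] = 1.0      # winner-take-all: one-hot hidden vector
    return W_out @ h                # Grossberg layer reads out the winner's weights
```

Because the hidden vector is one-hot, the network's output is simply the column of output weights attached to the winning neurone.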

The term ‘counter-propagation’ indicates the purpose for which this type of network was originally designed. It was designed as an auto-associative memory (i.e. it uses an input pattern to recall a predefined desired output pattern). In Figure 3.10, the input and output layers have each been divided into two parts in order to deal with the input vector pair (A, B) and output pair (A′, B′). The activation flow is as follows: input vector A on the left of the input layer recalls output vector B′ on the right of the output layer, while input vector B is associated with output pattern A′ on the left of the output layer. The activation flow is thus diagonal, and it is from these counter-flowing paths that the network takes its name.

3.3.1 Counter-propagation network training

The counter-propagation network has a particular property: only those weights connecting to the winning neurone are updated. Other weight values remain unchanged. This is a quite different strategy from the one used in the multilayer perceptron. An example is shown in Figure 3.11, which illustrates a winning neurone (shaded) located in the hidden layer. Only those weights (shown in bold) connecting to this neurone from both the input and output layers are adjusted.
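A single training step under this winner-only update rule might be sketched as follows. The function and matrix names, and the two learning rates `alpha` (Kohonen side) and `beta` (Grossberg side), are illustrative assumptions rather than the book's notation.

```python
import numpy as np

def train_step(x, y, W_in, W_out, alpha=0.1, beta=0.1):
    """One counter-propagation training step (sketch).

    Only the weights attached to the winning hidden neurone are
    updated; all other weights remain unchanged.
    """
    # Competitive layer: the winner is the hidden neurone whose
    # weight vector scores highest against the input.
    j = int(np.argmax(W_in @ x))
    # Move only the winner's input-side weights towards the input.
    W_in[j] += alpha * (x - W_in[j])
    # Move only the winner's output-side weights towards the target.
    W_out[:, j] += beta * (y - W_out[:, j])
    return j
```

Note that the weights of every losing neurone are left untouched, in contrast to the multilayer perceptron, where back-propagation adjusts all weights on every presentation.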

A counter-propagation network is thus made up of two kinds of network: a SOM network and a Grossberg layer. The algorithm used for updating the network weights during the training stage is described below.

The algorithm for identifying the winning neurone in the hidden layer requires a normalisation process in which both the input vector and the weights connecting the input and hidden layers are normalised to a length of 1. More specifically, let x denote an input vector, and wj denote the weight set from the input layer to hidden neurone j. This algorithm requires that both vectors be of unit Euclidean length, i.e. ||x|| = 1 and ||wj|| = 1.
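This normalisation step can be sketched as follows; the function names are illustrative assumptions.

```python
import numpy as np

def normalise(v):
    # Divide by the Euclidean norm so the vector has length 1.
    return v / np.linalg.norm(v)

def winner(x, W):
    """Index of the winning hidden neurone (sketch).

    Rows of W are the (already normalised) weight vectors of the
    hidden neurones; x is a normalised input vector.
    """
    return int(np.argmax(W @ x))
```

Once both the input and the weight vectors are of unit length, maximising the dot product W @ x is equivalent to minimising the Euclidean distance between the input and each weight vector, which is why the normalisation is required before the winner is chosen.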



Classification Methods for Remotely Sensed Data, Second Edition
ISBN: 1420090720
Year: 2001
Pages: 354
