
Thus, it is clear that the counter-propagation method takes the weights connecting from the winning neurone as the output activities, while the weights connecting to the losing neurones remain unchanged. Note that the learning constants α and β should not be too large; values of 0.1 or 0.2 are generally used.
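The update can be illustrated with a minimal sketch of one counter-propagation training step, assuming the standard winner-take-all (Kohonen) rule for the hidden layer and the outstar (Grossberg) rule for the output layer; the function and variable names are illustrative only.

import numpy as np

def counterprop_step(x, d, W_in, W_out, alpha=0.1, beta=0.1):
    """One counter-propagation training step (illustrative sketch).

    x     : input pattern, shape (N,)
    d     : desired output pattern, shape (M,)
    W_in  : weights from input to hidden (Kohonen) layer, shape (H, N)
    W_out : weights from hidden to output (Grossberg) layer, shape (M, H)
    """
    # Winner-take-all: the hidden neurone whose weight vector is closest to x.
    winner = np.argmin(np.linalg.norm(W_in - x, axis=1))

    # Only the winner's incoming weights move towards the input pattern.
    W_in[winner] += alpha * (x - W_in[winner])

    # The winner's outgoing weights move towards the desired output and
    # also serve directly as the network output for this pattern.
    W_out[:, winner] += beta * (d - W_out[:, winner])
    return W_out[:, winner]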

3.3.2 The training issue

Since the counter-propagation training algorithm does not feed information back from the output layer to the weights between the input and hidden layers, those weights have a weaker relationship with the desired outputs than weights trained by the backpropagation algorithm. As a result, there is a higher probability that similar input patterns will trigger the same winning neurone in the hidden layer, even though the patterns actually belong to different classes. An improved version of the training algorithm may overcome this problem by including feedback that forces the network to trigger different winning neurones in response to differences in input patterns (Reilly et al., 1982).
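One possible form of such feedback is to reject a winning neurone that is already committed to a different class and pass the win to the next-nearest candidate. The sketch below is hypothetical and is not the scheme of Reilly et al. (1982); all names are invented for illustration.

import numpy as np

def supervised_winner(x, W_in, hidden_class, target_class):
    """Select a winner, skipping neurones committed to other classes.

    hidden_class : array giving the class each hidden neurone currently
                   represents (-1 if uncommitted); updated in place.
    """
    order = np.argsort(np.linalg.norm(W_in - x, axis=1))
    for j in order:
        if hidden_class[j] in (-1, target_class):
            if hidden_class[j] == -1:
                hidden_class[j] = target_class  # commit an unused neurone
            return j
    # No suitable neurone available: fall back to the nearest one overall.
    return order[0]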

3.4 Hopfield networks

The Hopfield network (Hopfield, 1982, 1984) has mainly been exploited as an auto-associative memory, for solving optimisation problems such as the well-known travelling salesman problem, and for temporal feature tracking. Hopfield networks belong to the class of recurrent networks. In such networks, neuronal outputs are fed back as inputs to the network itself. This property is quite different from the feed-forward networks described in previous sections, and may be thought of as giving the network a kind of ‘memory’.

3.4.1 Hopfield network structure

A Hopfield network is constructed by interconnecting a large number of simple processing units. Each pair of units (i, j) is connected in both directions, i.e. from neurone i to j and from neurone j to i. A simple Hopfield network structure with three processing units is illustrated in Figure 3.12.

In general, the ith processing unit is described by two variables. These variables are the input (or current state) received from the other neurones, denoted by ui, and the output, denoted by vi. The output value vi is generated by applying a predefined output function f(ui), which limits vi to the range [0, 1] or [−1, 1].
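A minimal sketch of this dynamics is given below, assuming bipolar units in [−1, 1], a hard-limiting output function, and asynchronous updates; the function name, the threshold parameter and the number of passes are illustrative choices, not part of the original formulation.

import numpy as np

def hopfield_update(v, W, theta=None, passes=5):
    """Asynchronous update of a Hopfield network with bipolar units.

    v : current output vector with values in {-1, +1}
    W : weight matrix with zero diagonal, W[j, i] = w_ji (link from i to j)
    theta : optional per-unit thresholds (assumed zero here)
    """
    n = len(v)
    if theta is None:
        theta = np.zeros(n)
    for _ in range(passes):
        for i in np.random.permutation(n):   # update units one at a time
            u = W[i] @ v - theta[i]          # net input u_i from the other units
            v[i] = 1 if u >= 0 else -1       # hard-limiting output function f(u_i)
    return v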

A total of n(n−1) interconnections is required for a Hopfield network containing n processing units. Let wji denote the weighted link from neurone i to j. The value of wji can be considered as a connection that assigns
