
(1994) and Ripley (1996) for additional reading on the topics covered by this chapter).

3.1 Multi-layer perceptron

The multilayer perceptron trained with the back-propagation learning algorithm (Rumelhart et al., 1986b) is one of the most widely used neural network models. A typical three-layer multilayer perceptron is shown in Figure 2.10. As noted in Chapter 2, the leftmost layer of neurones in Figure 2.10 is the input layer, which contains the set of neurones that receive external inputs (in the form of pixel values in the different bands of a multispectral image, or other feature values). Unlike the elements of the other layers, the input layer performs no computations. The central layer is the hidden layer (there may be more than one hidden layer in complex networks). The rightmost layer of neurones is the output layer, which produces the results of the classification. There are no interconnections between neurones in the same layer, but every neurone in a given layer is fully connected to the neurones in the adjacent layers. These interconnections carry numerical weights, which are adjusted during the learning phase. The value held by each neurone is called its 'activity'.
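To make the architecture concrete, the following is a minimal sketch in Python/NumPy of such a three-layer network. The layer sizes, the sigmoid activation and the random initialisation are illustrative assumptions, not values taken from the text:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes (assumed): e.g. 6 spectral bands in,
    # 8 hidden neurones, 5 output neurones (one per class).
    n_in, n_hid, n_out = 6, 8, 5

    # Fully connected weights between adjacent layers (plus biases),
    # initialised to small random values before training begins.
    W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
    b1 = np.zeros(n_hid)
    W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
    b2 = np.zeros(n_out)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x):
        """Propagate one feature vector (e.g. the pixel values in
        each band) from the input layer through the hidden layer to
        the output layer; returns the hidden and output activities."""
        h = sigmoid(W1 @ x + b1)   # hidden-layer activities
        o = sigmoid(W2 @ h + b2)   # output-layer activities
        return h, o

Note that the input layer appears only as the vector x: consistent with the description above, it performs no computation of its own.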

3.1.1 Back-propagation

The most popular algorithm for updating the neuronal activities and the interconnection weights in a multilayer perceptron is the back-propagation algorithm. Back-propagation involves two major steps, forward and backward propagation, to accomplish its modification of the network state. During training, each sample (for example, a feature vector associated with a single training pixel) is fed into the input layer, and the activities of the neurones are updated in sequence from the input layer to the output layer according to a mapping (activation) function. Once the forward pass is finished, the activities of the output neurones are compared with their expected activities. For example, if there are five classes and training pixel i is known to belong to class 1, then the five output neurones (one per class) would be expected to produce a pattern such as '1 0 0 0 0'. Except in very unusual circumstances, the actual output will differ from the expected output, and the difference is the network error. This error is distributed back through the network by a backward pass, starting at the output layer, which updates the weights. The process is similar to the distribution of closure error in a chain survey. The forward and backward passes continue until the network has 'learned' the characteristics of all the classes. This procedure is called 'training the network'.
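Continuing the sketch above, one forward/backward iteration for a single training sample might look like the following. This is again a hedged illustration: it assumes gradient-descent weight updates under a squared-error measure with sigmoid activations, and a learning rate chosen purely for the example:

    def train_step(x, target, lr=0.5):
        """One forward pass and one backward pass for a single
        training sample; `target` is the expected output pattern,
        e.g. [1, 0, 0, 0, 0] for a pixel known to belong to class 1."""
        global W1, b1, W2, b2
        h, o = forward(x)

        # Network error: actual output minus expected output.
        err = o - target

        # Backward pass: distribute the error from the output layer
        # back through the hidden layer (the derivative of the
        # sigmoid at activity o is o * (1 - o)).
        delta_out = err * o * (1.0 - o)
        delta_hid = (W2.T @ delta_out) * h * (1.0 - h)

        # Adjust the interconnection weights by gradient descent.
        W2 -= lr * np.outer(delta_out, h)
        b2 -= lr * delta_out
        W1 -= lr * np.outer(delta_hid, x)
        b1 -= lr * delta_hid

        return 0.5 * np.sum(err ** 2)   # squared error, for monitoring

    # Repeated forward and backward passes constitute 'training the
    # network'; training stops when the error is acceptably small.
    x = rng.random(n_in)                         # hypothetical pixel features
    t = np.array([1, 0, 0, 0, 0], dtype=float)   # expected pattern, class 1
    for _ in range(1000):
        loss = train_step(x, t)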

During the forward propagation process, the activities of the neurones are updated layer by layer: each neurone forms a weighted sum of the activities of the neurones in the preceding layer and transforms that sum through the mapping function.
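In the standard notation for the multilayer perceptron (the sigmoid shown here is one common choice of mapping function, assumed for illustration), the forward update for neurone j is

    \[
      \mathrm{net}_j = \sum_i w_{ji}\, o_i ,
      \qquad
      o_j = f(\mathrm{net}_j) = \frac{1}{1 + e^{-\mathrm{net}_j}}
    \]

where w_{ji} is the weight on the connection from neurone i in the preceding layer to neurone j, and o_i is the activity of neurone i.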
