
2-3-2-2 network with only sixteen weights. Figures 3.4d and e illustrate the decision boundaries formed by the two networks. It is apparent that these decision boundaries are affected by the nature of the training data. The decision boundary shown in Figure 3.4e is clearly the best choice.
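The weight count quoted for the 2-3-2-2 network can be checked by summing the products of successive layer sizes. The sketch below is a minimal illustration, assuming fully connected layers and counting connection weights only (the figure of sixteen matches only if bias terms are excluded).

```python
def count_weights(layer_sizes):
    """Number of connection weights in a fully connected feed-forward
    network, assuming no bias terms are counted."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# 2*3 + 3*2 + 2*2 = 16 weights for the 2-3-2-2 network
print(count_weights([2, 3, 2, 2]))  # -> 16
```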

Both examples above indicate that the number of hidden layers and the number of nodes per hidden layer required to achieve satisfactory classification performance are case-dependent. Although there are studies analysing bounds on the number of hidden neurones and training samples (e.g. Huang and Huang, 1991; Mehrotra et al., 1991), the assumptions underlying these bounds are quite restrictive. More practical methods are described below.

3.1.4 Over-training and network pruning

During the training phase, the state of the network is iteratively updated in order to decrease the error between the actual and the desired output vectors, as described above. If the training data contain noise or are incomplete then, after a certain number of iterations, the network may start to fit these noisy training samples. If a slightly different set of samples is then used to test the network, the accuracy on those samples may be worse, even though the error on the training data continues to fall. This phenomenon is known as over-training, and an example is given in Figure 3.5.

Figure 3.5 During the early stage of training, the test error may follow the same decreasing trend as the training error. However, after a certain number of iterations, the test error may start to rise due to the over-training effect.
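One common response to the behaviour shown in Figure 3.5 is to monitor the error on a separate test (or validation) set during training and to retain the weights that minimise it, rather than training to convergence on the training error. The sketch below illustrates this idea on a hypothetical noisy regression task with a small single-hidden-layer network; the data, hidden-layer size, and learning rate are all assumptions chosen for illustration, and whether the test error actually turns upward depends on the noise level and the network's capacity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noisy 1-D task, split into training and test sets.
x = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * x) + 0.3 * rng.standard_normal(x.shape)
x_train, y_train = x[:100], y[:100]
x_test, y_test = x[100:], y[100:]

# Single hidden layer trained by plain gradient descent on squared error.
n_hidden, lr = 30, 0.05
w1 = 0.5 * rng.standard_normal((1, n_hidden))
w2 = 0.5 * rng.standard_normal((n_hidden, 1))

def forward(x, w1, w2):
    h = np.tanh(x @ w1)        # hidden-layer activations
    return h, h @ w2           # network output

best_test, best_weights = np.inf, None
for epoch in range(2000):
    h, out = forward(x_train, w1, w2)
    err = out - y_train
    # Back-propagate the squared-error gradient through both layers.
    grad_w2 = h.T @ err / len(x_train)
    grad_w1 = x_train.T @ ((err @ w2.T) * (1.0 - h**2)) / len(x_train)
    w1 -= lr * grad_w1
    w2 -= lr * grad_w2

    train_err = float(np.mean(err**2))
    test_err = float(np.mean((forward(x_test, w1, w2)[1] - y_test)**2))
    # Keep the weights that minimise the test error; further training
    # beyond this point corresponds to the over-training effect.
    if test_err < best_test:
        best_test, best_weights = test_err, (w1.copy(), w2.copy())
```

The key point is that the stored `best_weights` are chosen by the test-set error, not by the training error, so training iterations past the minimum of the test-error curve are simply discarded.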
