Page 118

learning mechanism. Several studies have investigated the effects of varying the input parameters γ and α (e.g. Ritter and Schulten, 1988; Bavarian and Lo, 1991; Erwin et al., 1992a, b). The general conclusion is that if a suitable value of γmax is chosen such that the neighbourhood function covers the whole mapping cortex, then the SOM network is likely to terminate in a well-ordered state. These authors also note that the learning rate α should be large (of the order of 0.1) at the beginning of training and should decrease during the training process. At the end of training, the value of α may be as small as, for example, 0.01.
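The parameter schedules described above can be sketched as simple linear decays. This is only an illustrative example (the function name and the linear form are assumptions, not taken from the text): α shrinks from about 0.1 to about 0.01 over training, while the neighbourhood radius γ starts at a γmax wide enough to cover the whole mapping cortex and shrinks towards a single neurone.

```python
import numpy as np

def som_schedules(n_steps, gamma_max, alpha_start=0.1, alpha_end=0.01):
    """Hypothetical linear decay schedules for the SOM neighbourhood
    radius (gamma) and learning rate (alpha), following the guidance in
    the text: alpha starts around 0.1 and decays to around 0.01, while
    gamma starts at gamma_max and shrinks to a radius of 1."""
    steps = np.arange(n_steps)
    frac = steps / max(n_steps - 1, 1)          # training progress in [0, 1]
    gamma = gamma_max * (1.0 - frac) + 1.0 * frac
    alpha = alpha_start * (1.0 - frac) + alpha_end * frac
    return gamma, alpha

gamma, alpha = som_schedules(1000, gamma_max=8.0)
```

In practice exponential decays are also common; the important property, per the studies cited, is simply that the neighbourhood starts wide and both parameters decrease monotonically during training.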

Other extensions to the SOM approach have been proposed. The use of a hierarchical clustering scheme is described by Furrer et al. (1994). Another extension is the use of an additional layer (called the Grossberg layer) to achieve supervised training as proposed by Cappellini and Chiuderi (1994). This method is described in Section 3.3.

At the end of the unsupervised training phase, the SOM can characterise the distribution of input samples, and thus generate a two-dimensional view of the multidimensional input features (Schaale and Furrer, 1995). However, it should be noted that one should not use the results of this stage of the SOM to perform pattern recognition or other decision processes, because there is a considerable difference between feature mapping and detecting clusters. It can be shown that the SOM can be further trained, e.g. by the use of the learning vector quantisation (LVQ) algorithm, to increase recognition accuracy (Kangas et al., 1990; Pal et al., 1993). Specifically, after the first stage of unsupervised training, each information class will generally activate several neurones on the SOM mapping cortex. The purpose of the subsequent supervised labelling phase is to tune the network weights and to define the boundaries of these information classes in order to reduce misclassification error.

3.2.1.2 Supervised training

The supervised labelling algorithm introduced here is based on the concept of majority voting using the LVQ algorithm. The mapping cortex neurones are initially labelled using the training set. Each training pattern is input to the SOM, and the winning neurone is the one whose weight vector has the minimum Euclidean distance to the training pattern (Equation (3.11)). For instance, if a certain neurone a on the mapping cortex was triggered r and s times by the training patterns for classes 1 and 2, respectively, then neurone a will be labelled as class 1 if r > s, and class 2 otherwise. This is the concept of majority voting.
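The labelling-by-majority-vote step above can be sketched as follows (a minimal illustration; the function and variable names are hypothetical, and ties here resolve toward the lower class index):

```python
import numpy as np

def label_neurones(weights, train_x, train_y, n_classes):
    """Label each mapping-cortex neurone by majority vote over the
    training set. weights: (n_neurones, n_features) SOM weight vectors;
    train_x: (n_samples, n_features); train_y: integer class labels."""
    votes = np.zeros((weights.shape[0], n_classes), dtype=int)
    for x, y in zip(train_x, train_y):
        # Winning neurone: minimum Euclidean distance to the input
        # pattern, as in Equation (3.11).
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        votes[winner, y] += 1
    # Each neurone takes the class that triggered it most often.
    return votes.argmax(axis=1)
```

For example, with two neurones at weights [0, 0] and [1, 1] and training patterns clustered around those points, each neurone is labelled with the class of the patterns that most often select it as winner.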

If the winning neurone matches the desired output (i.e. the class allocated to neurone a is the same as the corresponding training pattern class), then the corresponding weight is adjusted to make it close to the input feature vector.
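This weight adjustment corresponds to one step of the standard LVQ1 rule. The excerpt describes only the matching case; the mismatch case shown below (pushing the weight away from the input) is the standard LVQ1 behaviour and is an assumption here, not taken from this page. Function names are illustrative.

```python
import numpy as np

def lvq1_update(weights, labels, x, y, alpha):
    """One standard LVQ1 step (a sketch). weights: (n_neurones,
    n_features); labels: class label of each neurone; x: input feature
    vector; y: its desired class; alpha: learning rate."""
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    if labels[winner] == y:
        # Correct classification: pull the weight toward the input.
        weights[winner] += alpha * (x - weights[winner])
    else:
        # Misclassification (standard LVQ1): push the weight away.
        weights[winner] -= alpha * (x - weights[winner])
    return weights
```

Repeated over the training set with a small, decreasing α, this tuning sharpens the class boundaries on the mapping cortex and reduces misclassification error, as described above.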

Classification Methods for Remotely Sensed Data, Second Edition
ISBN: 1420090720
Year: 2001
Pages: 354