
output neurones, respectively, forming the mapping cortex, were investigated. The reason for investigating the effect of using different numbers of output neurones is that the SOM classification mechanism is conceptually similar to clustering and labelling. Each output neurone denotes a cluster, and we wish to examine the effect of increasing the number of output neurones (i.e. clusters) on classification accuracy. Similar experiments were performed using counter-propagation and fuzzy ARTMAP networks. The initial learning rate was set to 0.1, and 8,000 iterations were arbitrarily chosen for the first-stage unsupervised training (note that the number of iterations must be sufficiently large to allow the SOM to converge). The unsupervised training times for SOM candidates (a), (b) and (c) were around 50, 200 and 900 CPU minutes, respectively. A supervised training process using the learning vector quantisation (LVQ) algorithm was then applied. For SOM candidates (a) and (b), this process converged after 100 iterations, which took around 10 CPU minutes, while for SOM candidate (c) it converged after 200 iterations, taking around 40 CPU minutes.
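
The two-stage procedure described above can be sketched in a few lines of Python/NumPy. The fragment below is not the implementation used in these experiments; it simply illustrates an unsupervised SOM stage (initial learning rate 0.1 and 8,000 iterations, as above), a majority-vote labelling of the output neurones, and an LVQ1 fine-tuning pass. The grid shape, the Gaussian neighbourhood and its decay schedule, and the LVQ learning rate are illustrative assumptions.

```python
import numpy as np

def train_som(X, grid_shape=(7, 7), n_iter=8000, lr0=0.1, seed=0):
    """Unsupervised SOM stage (sketch): one randomly drawn pixel vector per
    iteration, with a Gaussian neighbourhood that shrinks over time."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    n_units, n_feat = rows * cols, X.shape[1]
    W = rng.uniform(X.min(), X.max(), size=(n_units, n_feat))
    # grid coordinates of each output neurone, used by the neighbourhood term
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    sigma0 = max(rows, cols) / 2.0
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)                       # decaying learning rate (assumed schedule)
        sigma = sigma0 * np.exp(-3.0 * frac) + 1e-3   # shrinking neighbourhood (assumed schedule)
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian neighbourhood weights
        W += lr * h[:, None] * (x - W)
    return W

def label_units(W, X, y):
    """Label each output neurone with the majority class of the training
    pixels it wins; neurones that win nothing are labelled -1."""
    winners = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
    labels = np.full(len(W), -1)
    for u in range(len(W)):
        classes = y[winners == u]
        if classes.size:
            labels[u] = np.bincount(classes).argmax()
    return labels

def lvq1_finetune(W, unit_labels, X, y, n_iter=100, lr=0.05, seed=0):
    """Supervised LVQ1 pass: pull the winning codebook vector towards samples
    of its own class and push it away from samples of other classes."""
    rng = np.random.default_rng(seed)
    W = W.copy()
    for _ in range(n_iter):
        for i in rng.permutation(len(X)):
            w = np.argmin(((W - X[i]) ** 2).sum(axis=1))
            sign = 1.0 if unit_labels[w] == y[i] else -1.0
            W[w] += sign * lr * (X[i] - W[w])
    return W
```

Here X is an (n_pixels, n_bands) array of feature vectors and y a non-negative integer class label per pixel; grid shapes of (7, 7), (12, 12) and (25, 25) would correspond to the 49, 144 and 625 output neurones of candidates (a), (b) and (c).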

Three counter-propagation networks with 49, 144 and 625 hidden neurones, respectively (the same number as used in the three SOM networks), were tested. The learning rate parameters α and β in Equations (3.21) and (3.23) were both set to 0.2. All of the counter-propagation networks converged quickly, requiring around 500 iterations, and the corresponding running times were around 6 CPU minutes for the network containing 49 hidden neurones, and 15 and 130 CPU minutes for the networks containing 144 and 625 hidden neurones, respectively.
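As a concrete illustration, a forward-only counter-propagation classifier is sketched below: a winner-take-all Kohonen hidden layer followed by a Grossberg (outstar) output layer. Since the exact forms of Equations (3.21) and (3.23) are not reproduced on this page, the update rules shown are the standard ones, with α = β = 0.2 and around 500 passes as in the experiments; treating one pass over the training set as one iteration, and one-hot encoding of the desired outputs Y, are assumptions of the sketch.

```python
import numpy as np

def train_counterprop(X, Y, n_hidden=49, n_iter=500, alpha=0.2, beta=0.2, seed=0):
    """Forward-only counter-propagation (sketch): the winning Kohonen weight
    vector moves towards the input (rate alpha) and its Grossberg outstar
    weights move towards the desired output (rate beta)."""
    rng = np.random.default_rng(seed)
    n_feat, n_out = X.shape[1], Y.shape[1]
    W = rng.uniform(X.min(), X.max(), size=(n_hidden, n_feat))  # Kohonen (hidden) layer
    V = np.zeros((n_hidden, n_out))                             # Grossberg (output) layer
    for _ in range(n_iter):
        for i in rng.permutation(len(X)):
            x, y = X[i], Y[i]
            win = np.argmin(((W - x) ** 2).sum(axis=1))  # competitive winner
            W[win] += alpha * (x - W[win])               # hidden-layer update
            V[win] += beta * (y - V[win])                # output-layer update
    return W, V

def predict_counterprop(W, V, X):
    """Route each pixel to its winning hidden neurone and take the class
    with the largest outstar weight."""
    winners = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
    return V[winners].argmax(axis=1)
```

Setting n_hidden to 49, 144 or 625 would correspond to the three network sizes tested here.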

Three fuzzy ARTMAP networks with different vigilance parameters were investigated. For all three networks, the parameter α in Equation (3.45) was chosen as 0.01, and the map field vigilance parameter ρab in Equation (3.55) was set to 1.0 to guarantee the tightest possible linkage between the input patterns and the desired output categories. In the first fuzzy ARTMAP network, the learning rates βa and βb for the fuzzy ART systems F2a and F2b (Figure 3.18) were set to 1.0 (i.e. fast learning). The vigilance values ρa and ρb were set to 0.8, and a total of 487 neurones (clusters) in F2a were triggered. In the second fuzzy ARTMAP network, the learning rates βa and βb were again set to 1.0 while the vigilance values ρa and ρb were set to 0.97, and a total of 987 neurones in F2a were generated. In the third fuzzy ARTMAP network, the learning rates βa and βb were set to 0.2, the vigilance values ρa and ρb were set to 0.99, and a total of 2,039 neurones in F2a were triggered. Training of all the tested fuzzy ARTMAP networks was very fast: the first and second candidates converged within 3 CPU minutes, while the third took around 20 CPU minutes.
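
To make explicit how the vigilance ρ and learning rate β govern the number of F2 categories created, a single fuzzy ART module is sketched below; the map-field linkage (vigilance ρab), match tracking and the second ART module of the complete fuzzy ARTMAP architecture are omitted. The choice parameter α = 0.01 and the (ρ, β) settings follow the values above, while the complement coding, the scaling of inputs to [0, 1] and the single-pass presentation order are standard assumptions rather than details taken from these experiments.

```python
import numpy as np

def fuzzy_art(X, rho=0.8, alpha=0.01, beta=1.0):
    """Single fuzzy ART module (sketch). Uses complement coding, the choice
    function T_j = |x ^ w_j| / (alpha + |w_j|), the vigilance test
    |x ^ w_J| / |x| >= rho, and the learning rule
    w_J <- beta * (x ^ w_J) + (1 - beta) * w_J, where ^ denotes the fuzzy AND
    (component-wise minimum). Inputs are assumed scaled to [0, 1]."""
    I = np.hstack([X, 1.0 - X])             # complement coding
    W = []                                  # F2 category weight vectors
    assign = np.empty(len(I), dtype=int)    # winning category per input
    for i, x in enumerate(I):
        if not W:                           # first input founds the first category
            W.append(x.copy())
            assign[i] = 0
            continue
        Wm = np.array(W)
        inter = np.minimum(x, Wm)                      # fuzzy AND with every category
        T = inter.sum(axis=1) / (alpha + Wm.sum(axis=1))
        for j in np.argsort(-T):                       # search categories by choice value
            if inter[j].sum() / x.sum() >= rho:        # vigilance test passed
                W[j] = beta * inter[j] + (1.0 - beta) * W[j]
                assign[i] = j
                break
        else:                                          # no category satisfied vigilance
            W.append(x.copy())                         # commit a new F2 neurone
            assign[i] = len(W) - 1
    return np.array(W), assign
```

Running this with (ρ, β) set to (0.8, 1.0), (0.97, 1.0) and (0.99, 0.2) on the same data would be expected to commit progressively more categories, mirroring the 487, 987 and 2,039 F2a neurones reported above.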

The resulting classification accuracies are shown in Table 3.2. The best classification images (in terms of overall classification accuracy) derived
