
class are accumulated, and the winning label is the class with the greatest accumulated probability. As in the case of majority voting, a threshold may be set so that the winning margin is a clear one.
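
As a minimal sketch of this accumulation rule, assuming hypothetical per-classifier probability vectors for a single pixel and an arbitrary winning-margin threshold:

```python
import numpy as np

# Hypothetical example: three classifiers each return per-class probabilities
# for one pixel (classes 0..3). The probabilities are accumulated class by
# class, and the winning class must beat the runner-up by a minimum margin,
# otherwise the pixel is left unlabelled.
def fuse_by_accumulation(prob_vectors, margin=0.1, reject_label=-1):
    total = np.sum(prob_vectors, axis=0)           # accumulate over classifiers
    order = np.argsort(total)[::-1]                # classes ranked by support
    winner, runner_up = order[0], order[1]
    if total[winner] - total[runner_up] < margin:  # winning margin not clear
        return reject_label
    return int(winner)

probs = np.array([[0.1, 0.6, 0.2, 0.1],    # classifier 1
                  [0.2, 0.5, 0.2, 0.1],    # classifier 2
                  [0.3, 0.3, 0.3, 0.1]])   # classifier 3
print(fuse_by_accumulation(probs))         # class 1 wins by a clear margin
```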

Evidential reasoning is described in Section 7.4. In essence, the method associates a degree of belief with each source of information, and a formal system of rules is used in order to manipulate the belief function. In the context of pattern recognition, this method is useful in handling multiple sources of data of different kinds and with different levels of accuracy. It can also be used to assess the plausibility of labels assigned to a given pixel by different classifiers.
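
Dempster's rule of combination is the formal rule most commonly used to combine belief functions from independent sources; the sketch below is illustrative only, and the class names and mass values are assumptions rather than values from the text.

```python
# Mass functions are dictionaries mapping frozensets of class labels to a
# degree of belief (the masses of each source sum to 1).
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb        # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sources (e.g. two classifiers) express belief about a pixel's label;
# mass assigned to the whole frame represents uncommitted belief.
frame = frozenset({"forest", "water", "urban"})
source1 = {frozenset({"forest"}): 0.7, frame: 0.3}
source2 = {frozenset({"forest", "water"}): 0.6, frame: 0.4}
print(dempster_combine(source1, source2))
```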

Wilkinson et al. (1995) feed the output from several classifiers into an artificial neural net, which is trained to produce a single class label from the (possibly differing) labels output by the individual classifiers. In a sense, these authors propose that the classifiers’ output is itself classified.
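
A minimal sketch of this stacking idea, assuming scikit-learn base classifiers, a small multilayer perceptron as the combining network, and synthetic data in place of real image features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=6, n_classes=3,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the individual classifiers on the (here synthetic) spectral features.
base = [GaussianNB(), KNeighborsClassifier(),
        DecisionTreeClassifier(random_state=0)]
for clf in base:
    clf.fit(X_train, y_train)

# Feed the classifiers' outputs (class probabilities) to a neural net, i.e.
# the classifiers' output is itself classified.
meta_train = np.hstack([clf.predict_proba(X_train) for clf in base])
meta_test = np.hstack([clf.predict_proba(X_test) for clf in base])
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(meta_train, y_train)
print("combined accuracy:", net.score(meta_test, y_test))
```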

If the results from several classifiers are to be amalgamated, then the classifiers should be independent. To be independent, the classifiers must each use an independent feature set or be trained on separate sets of training data. One possibility is to use a different subset of the features for each classifier, or to sample the training data at random to generate p different training subsets, where p is the number of classifiers. Cross-validation is another possibility; the available data are subdivided into a number n of approximately equally sized subsets, and (n−1) of these subsets are used to train the classifier. The remaining subset is used for estimating the error associated with the labelling. Each of the n subsets is used in turn for testing, with the remaining (n−1) subsets being used for training. Once this cycle of training and testing is completed, the error estimates are combined (Schaffer, 1993).
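
A minimal sketch of this cross-validation procedure, assuming a scikit-learn classifier and synthetic data; the n per-fold error estimates are combined here by simple averaging:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Each of the n subsets is held out once for error estimation while the
# remaining (n - 1) subsets are used for training.
errors = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=0).split(X):
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    errors.append(1.0 - clf.score(X[test_idx], y[test_idx]))

print("combined (mean) error estimate:", np.mean(errors))
```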

Further elaboration of these multiple classifier approaches can be found in Kanellopoulos and Wilkinson (1997), Roli et al. (1997), Tumer and Ghosh (1995), Wilkinson et al. (1997), Wolpert (1992) and Xu et al. (1992).

2.5 Incorporation of ancillary information

Ancillary (non-spectral) information, such as spatial relationships, may provide a more powerful characterisation of the classes of interest, and adding it to the spectral information used in image classification may improve classification accuracy. Ancillary information can be extracted directly from an image or obtained from other sources such as digital elevation models and geological or soil maps. Examples of ancillary features derived from an image are texture, context and structural relationships.
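
A minimal sketch of adding an image-derived ancillary feature to the spectral bands before classification, using local variance in a moving window as a simple texture measure; the image array, band count and window size are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rows, cols, n_bands = 200, 200, 4
image = np.random.rand(rows, cols, n_bands)     # hypothetical image cube

def local_variance(band, size=7):
    # Texture as the variance of pixel values within a size x size window.
    mean = uniform_filter(band, size=size)
    mean_sq = uniform_filter(band * band, size=size)
    return mean_sq - mean * mean

texture = local_variance(image[:, :, 0])        # texture from the first band

# Stack the spectral bands and the ancillary texture band into one feature
# vector per pixel, ready for any per-pixel classifier.
features = np.dstack([image, texture]).reshape(-1, n_bands + 1)
print(features.shape)                           # (40000, 5)
```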
