
If an image is classified into two classes, and the first class covers 90% of the image area, then one can simply classify the whole image as class one and obtain an overall classification accuracy of 90%. Although this example is extreme, it highlights the potential drawback of using overall classification accuracy as the sole measure of classification performance.

To assess the accuracy of each information class separately, the concepts of producer’s accuracy and user’s accuracy can be used. For each information class i in a confusion matrix, the producer’s accuracy is calculated by dividing the entry (i, i) by the sum of column i, while the user’s accuracy is obtained by dividing the entry (i, i) by the sum of row i. Thus, the producer’s accuracy gives the proportion of class i pixels in the test data set that are correctly recognised by the classifier, while the user’s accuracy gives the proportion of pixels identified by the classifier as belonging to class i that agree with the test data. Using the data shown in Figure 2.13, the producer’s accuracy for each information class is calculated by:

\[
\text{producer's accuracy}_i = \frac{x_{ii}}{\sum_{k} x_{ki}}
\]

where \(x_{ki}\) denotes the confusion matrix entry in row k and column i,

while the user’s accuracy is determined as:

\[
\text{user's accuracy}_i = \frac{x_{ii}}{\sum_{k} x_{ik}} \qquad (2.27)
\]

The class ‘wheat’ has the highest producer’s accuracy, 87.5%, so the producer of the classification can state that this proportion of the wheat pixels in the test data has been correctly classified. Classes ‘grass’ and ‘deciduous’ achieve only around 63% producer’s accuracy, which indicates that a considerable number of pixels belonging to these classes have been classified erroneously. Producer’s accuracy is thus a measure of omission error: the omission error for a class is one minus its producer’s accuracy.
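As a concrete illustration of these formulas, the short sketch below computes the overall, producer’s and user’s accuracies from a confusion matrix laid out as in the text (rows correspond to the classifier’s output, columns to the test data). It is a minimal sketch in Python with NumPy; the class names and pixel counts are hypothetical stand-ins chosen only to roughly reproduce the accuracies quoted above, not the actual data of Figure 2.13.

```python
import numpy as np

# Hypothetical confusion matrix (NOT the Figure 2.13 data):
# rows = classifier output, columns = test (reference) data.
classes = ["wheat", "grass", "deciduous"]
cm = np.array([
    [35,  3,  2],   # pixels the classifier labelled 'wheat'
    [ 4, 25, 11],   # pixels the classifier labelled 'grass'
    [ 1, 12, 22],   # pixels the classifier labelled 'deciduous'
], dtype=float)

overall   = np.trace(cm) / cm.sum()       # diagonal sum / total pixels
producers = np.diag(cm) / cm.sum(axis=0)  # entry (i, i) / column-i sum
users     = np.diag(cm) / cm.sum(axis=1)  # entry (i, i) / row-i sum

print(f"overall accuracy: {overall:.1%}")
for name, p, u in zip(classes, producers, users):
    print(f"{name:10s} producer's: {p:.1%}   user's: {u:.1%}")
```

Note that the trivial strategy criticised above falls out immediately from these formulas: labelling every pixel with the majority class yields a high overall accuracy but a producer’s accuracy of zero for every other class.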

User’s accuracy denotes the probability that a classified pixel actually represents that information class on the ground. For instance, if an information class called ‘potato’ has a user’s accuracy of 100%, the user may infer that all the pixels classified as ‘potato’ are actually covered by potatoes, assuming that the sample of test data is adequate (Section 2.6.3). Therefore, from the example results given above, we know that most of the pixels labelled as grass on the classified image are actually grass. However,
