Page 277
The model described in this section is based on Lee et al. (1987). In the general multisource case, a set of observations from n (n > 1) different sources is used. Let xi, i ∈ [1, n], denote the measurement of a specific pixel from source i. The aim is to derive the probability of the pixel belonging to each of c information classes ωj, j ∈ [1, c]. Some information about the prior probability of class ωj, denoted by P(ωj), i.e. the probability that an observation will be a member of class ωj, may be available. A useful model for the prior probability is context, as described in Chapter 6. The approach that uses the contextual assumption to perform multisource classification is considered later.
According to Bayesian probability theory, the law of conditional probabilities states that:
$$P(\omega_j \mid x_1, x_2, \ldots, x_n) = \frac{P(x_1, x_2, \ldots, x_n \mid \omega_j)\,P(\omega_j)}{P(x_1, x_2, \ldots, x_n)} \quad (7.1)$$
where P(ωj|x1, x2,…, xn) is known as the conditional or posterior probability that ωj is the correct class, given the observed data vector (x1, x2,…, xn). P(x1, x2,…, xn|ωj) is the probability density function (p.d.f.) associated with the measured data (x1, x2,…, xn) given that the pixel belongs to class ωj, and P(x1, x2,…, xn) is the p.d.f. of the data (x1, x2,…, xn). Assuming class-conditional independence among the sources, one obtains P(x1, x2,…, xn|ωj) = P(x1|ωj)·P(x2|ωj)·…·P(xn|ωj), and Equation (7.1) becomes:
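As a numeric illustration of this combination rule, the sketch below evaluates the posterior for a single pixel observed by two sources; the class labels, likelihood values, and priors are all hypothetical:

```python
# Numeric sketch of multisource Bayes with class-conditional independence:
# two sources (n = 2), two classes (c = 2). All numbers are hypothetical.

# Hypothetical class-conditional densities P(x_i | w_j), evaluated at the
# observed pixel values x1, x2 for each class w_j.
likelihoods = {
    "w1": [0.6, 0.3],   # P(x1 | w1), P(x2 | w1)
    "w2": [0.2, 0.5],   # P(x1 | w2), P(x2 | w2)
}
priors = {"w1": 0.7, "w2": 0.3}   # hypothetical prior probabilities P(w_j)

# Numerator: product of per-source likelihoods times the prior.
joint = {w: likelihoods[w][0] * likelihoods[w][1] * priors[w] for w in priors}

# The denominator P(x1, x2) is the same for every class, so it can be
# recovered by summing the numerators; the posteriors then sum to 1.
evidence = sum(joint.values())
posterior = {w: joint[w] / evidence for w in joint}

print(posterior)   # w1 ≈ 0.808, w2 ≈ 0.192
```

Note that the denominator never has to be modelled separately: because it is identical for all classes, normalising the class-wise numerators to sum to one yields the posteriors directly.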
$$P(\omega_j \mid x_1, x_2, \ldots, x_n) = \frac{P(x_1 \mid \omega_j)\,P(x_2 \mid \omega_j) \cdots P(x_n \mid \omega_j)\,P(\omega_j)}{P(x_1, x_2, \ldots, x_n)} \quad (7.2)$$
Again, following the law of conditional probabilities,
$$P(x_i \mid \omega_j) = \frac{P(\omega_j \mid x_i)\,P(x_i)}{P(\omega_j)}, \quad i \in [1, n] \quad (7.3)$$
If Equation (7.3) is substituted into Equation (7.2), one obtains:
(7.4) |
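Since the factor involving P(x1)·P(x2)·…·P(xn) and P(x1, x2,…, xn) in Equation (7.4) is the same for every class ωj, it cancels when the posteriors are normalised over the classes: the multisource posterior is therefore proportional to P(ωj)^(1−n) multiplied by the product of the single-source posteriors P(ωj|xi). The sketch below, again with hypothetical numbers, checks this combination of single-source posteriors against a direct evaluation of Equation (7.2):

```python
# Check (with hypothetical numbers) that combining single-source posteriors
# as P(w_j)^(1-n) * prod_i P(w_j | x_i), then normalising over classes,
# reproduces the multisource posterior of Equation (7.2): the class-
# independent factor in Equation (7.4) drops out under normalisation.

n = 2
likelihoods = {"w1": [0.6, 0.3], "w2": [0.2, 0.5]}   # hypothetical P(x_i | w_j)
priors = {"w1": 0.7, "w2": 0.3}                      # hypothetical P(w_j)

# Single-source posteriors P(w_j | x_i), each from Bayes' rule on one source.
per_source = []
for i in range(n):
    ev = sum(likelihoods[w][i] * priors[w] for w in priors)
    per_source.append({w: likelihoods[w][i] * priors[w] / ev for w in priors})

# Class-dependent part of Equation (7.4): P(w_j)^(1-n) * prod_i P(w_j | x_i),
# normalised so the combined posteriors sum to 1.
combined = {w: priors[w] ** (1 - n) * per_source[0][w] * per_source[1][w]
            for w in priors}
z = sum(combined.values())
combined = {w: combined[w] / z for w in combined}

# Direct multisource posterior from Equation (7.2), for comparison.
joint = {w: likelihoods[w][0] * likelihoods[w][1] * priors[w] for w in priors}
direct = {w: joint[w] / sum(joint.values()) for w in joint}

print(combined, direct)   # the two dictionaries agree
```

This is the practical appeal of the formulation: each source can be classified on its own, and the single-source posteriors are then fused through the prior without revisiting the raw measurements.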
If, in addition, intersource independence is assumed, so that P(x1)·P(x2)·…·P(xn) = P(x1, x2,…, xn), then Equation (7.4) results in: