9.2 Belief-Change Systems

It will be convenient for the remainder of this chapter to focus on a class of interpreted plausibility systems called belief-change systems, or just BCSs. BCSs make it easier to relate the view of belief revision as conditioning to the more traditional view in the literature, where belief revision is characterized by postulates on beliefs represented as sets of formulas. In a BCS, an agent makes observations about an external environment. As in the analysis of the circuit-diagnosis problem, I assume that these observations are described by formulas in some logical language, since this is the traditional approach. I further assume that the agent does not forget, so that her local state can be characterized by the sequence of observations she has made. (This is in the spirit of one of the ways the Listener-Teller protocol was modeled in Section 6.7.1; recall that when the agent does not remember her observations, conditioning is not appropriate.) Finally, I assume that the agent starts with a prior plausibility on runs, so that the techniques of Section 6.4 can be used to construct a plausibility assignment. This means that the agent's plausibility at a given point can essentially be obtained by conditioning the prior plausibility on her information at that point. These assumptions are formalized by conditions BCS1 and BCS2, described below.

To state these conditions, I need some notation and a definition. The notation that I need is much in the spirit of that introduced in Section 6.8. Let ℐ = (ℛ, 𝒫ℒ, π) be an interpreted plausibility system and let Φ be the set of primitive propositions whose truth values are determined by π. Given a formula φ ∈ ℒ^Prop(Φ), let [φ] consist of all runs r where φ is true initially; given a local state ℓ = ⟨o_1, …, o_k⟩, let [ℓ] consist of all runs r where the agent is in local state ℓ at some point in r. Formally,

[φ] = {r ∈ ℛ : (ℐ, r, 0) ⊨ φ};
[ℓ] = {r ∈ ℛ : r_a(m) = ℓ for some m}.

A primitive proposition p depends only on the environment state in ℐ if π(r, m)(p) = true iff π(r′, m′)(p) = true for all points (r, m) and (r′, m′) such that r_e(m) = r′_e(m′). Note that in the case of the interpretations used to capture the circuit-diagnosis problem, all the primitive propositions in Φ_diag depend only on the environment state, in both ℐ_1 and ℐ_2.
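To make this notation concrete, here is a small Python sketch; it is my own illustration, not part of the text, and the runs, truth values, and horizon in it are hypothetical. Formulas are restricted to primitive propositions to keep the sketch short. It computes [φ] and [ℓ] for a toy two-run system and checks that a proposition depends only on the environment state. (The toy runs are not meant to satisfy the BCS conditions introduced below; they only illustrate the notation.)

    # Toy finite system: two runs over times 0..HORIZON.  All names and values
    # are hypothetical; "formulas" here are just primitive propositions.
    HORIZON = 3

    def make_run(env_values, observations):
        """env_values: proposition -> truth values indexed by time.
        observations: the agent's observation sequence, one formula per step."""
        def run(m):
            env = {p: vals[min(m, len(vals) - 1)] for p, vals in env_values.items()}
            local = tuple(observations[:m])      # local state = observations so far
            return env, local
        return run

    def pi(r, m, p):
        """Truth value of primitive proposition p at the point (r, m)."""
        env, _ = r(m)
        return env[p]

    r1 = make_run({"ok": [True, True, True, True]}, ["ok", "ok", "ok"])
    r2 = make_run({"ok": [False, False, False, False]}, ["ok", "ok", "ok"])
    runs = [r1, r2]

    def runs_where_initially_true(p):
        """[p]: the runs where p is true at time 0."""
        return [r for r in runs if pi(r, 0, p)]

    def runs_through_local_state(local_state):
        """[l]: the runs where the agent is in local state l at some point."""
        return [r for r in runs
                if any(r(m)[1] == local_state for m in range(HORIZON + 1))]

    def depends_only_on_environment(p):
        """True if p's truth value is determined by the environment state alone."""
        value_at = {}
        for r in runs:
            for m in range(HORIZON + 1):
                env, _ = r(m)
                key = tuple(sorted(env.items()))
                if key in value_at and value_at[key] != pi(r, m, p):
                    return False
                value_at[key] = pi(r, m, p)
        return True

    print(len(runs_where_initially_true("ok")))    # 1: only r1 is in [ok]
    print(len(runs_through_local_state(("ok",))))  # 2: both runs pass through <ok>
    print(depends_only_on_environment("ok"))       # True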

ℐ is a belief-change system (BCS) if the following two conditions hold:

  • BCS1. There exists a set Φ_e ⊆ Φ of primitive propositions that depend only on the environment state such that, for all runs r ∈ ℛ and times m, the agent's local state is r_a(m) = ⟨o(r, 1), …, o(r, m)⟩, where o(r, k) ∈ ℒ_e = ℒ^Prop(Φ_e) for 1 ≤ k ≤ m. (I have used r_a to denote the agent's local state rather than r_1, to stress that there is a single agent.)

  • BCS2. ℐ is an interpreted SDP system. Recall that this means that there is a prior conditional plausibility measure Pl_a on the runs in ℛ and that the agent's plausibility space at each point is generated by conditioning, using the techniques described in Section 6.4. Moreover, the prior conditional plausibility space (ℛ, ℱ, ℱ′, Pl_a) has the following properties:

    • Pl_a satisfies CPl5, Pl4, and Pl5 (i.e., Pl_a satisfies CPl5 and, for all sets U of runs in ℱ′, Pl_a(· | U) satisfies Pl4 and Pl5);

    • [ℓ] ∈ ℱ′ for all local states ℓ such that [ℓ] ≠ ∅;

    • [φ] ∈ ℱ for all φ ∈ ℒ_e;

    • if U ∈ ℱ′ and Pl_a(V | U) > ⊥, then V ∩ U ∈ ℱ′.

BCS1 says that the agent's observations are formulas in ℒ_e and that her local state consists of the sequence of observations she has made. Since BCS1 requires that the agent have made m observations by time m, her local state effectively encodes the time. Thus, a BCS is a synchronous system in which the agent has perfect recall. This is quite a strong assumption. Perfect recall is needed for conditioning to be appropriate (see the discussion in Sections 3.1 and 6.8). The assumption that the system is synchronous is not so critical; it is made mainly for convenience. The fact that the agent's observations can all be described by formulas in ℒ_e says that either ℒ_e must be a rather expressive language or the observations are rather restricted. In the case of an agent observing a circuit, Φ_e = Φ_diag, so I implicitly assumed the latter; the only observations were the values of various lines. However, in the case of agents observing people, the observations can include obvious features such as eye color and skin color, as well as more subtle features like facial expressions. Even getting a language rich enough to describe all the gradations of eye and skin color is nontrivial; things become much harder when facial expressions are added to the mix. In any case, ℒ_e must be expressive enough to describe whatever can be observed. This assumption is not just an artifact of using formulas to express observations. No matter how the observations are expressed, the environment state must be rich enough to distinguish them.
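Since BCS1 makes the local state nothing more than the sequence of observations, a few lines of Python make the synchrony and perfect-recall consequences concrete. This is my own illustration, not the book's, and the observation formulas are hypothetical.

    # BCS1-style local state: the state at time m is the first m observations.
    def local_state(observations, m):
        """r_a(m) = <o(r,1), ..., o(r,m)>."""
        return tuple(observations[:m])

    obs = ["v1_high", "v2_low", "v1_high"]   # hypothetical observation formulas

    s2 = local_state(obs, 2)
    s3 = local_state(obs, 3)

    # Synchrony: the local state encodes the time.
    assert len(s2) == 2 and len(s3) == 3

    # Perfect recall: every earlier local state is a prefix of a later one,
    # so no observation is ever forgotten.
    assert s3[:len(s2)] == s2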

BCS2 says that the agent starts with a single prior plausibility on all runs. This makes the system an SDP system. It would also be possible to consider systems where the set of runs is partitioned, with a separate plausibility measure on each set in the partition, as in Section 6.9, but that would complicate the analysis. The fact that the prior satisfies Pl4 and Pl5 means that belief in ℐ behaves in a reasonable way; the fact that it also satisfies CPl5 means that certain natural coherence properties hold between the various conditional plausibility measures. The assumption that [ℓ] ∈ ℱ′ for nonempty [ℓ] is the analogue of the assumption made in Section 6.4 that the set of runs compatible with the agent's local state has positive prior probability; it makes it possible to define the agent's plausibility measure at a point (r, m) to be the result of conditioning her prior on her information at (r, m).
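The prior in BCS2 is an abstract conditional plausibility measure. For a concrete feel, the following sketch (my own, not the book's construction) uses a ranking function as the prior on runs, since ranking functions are a standard example of plausibility measures for which belief behaves well, and conditions it on the set [ℓ] of runs consistent with a local state. The run names, observation sequences, and ranks are all hypothetical.

    # Prior plausibility on runs given by a ranking (kappa) function:
    # lower rank = more plausible.  Each entry is (name, observations, rank).
    RUNS = [
        ("r1", ("p", "q"), 0),
        ("r2", ("p", "not q"), 1),
        ("r3", ("not p", "q"), 2),
    ]

    def runs_through(local_state):
        """[l]: runs whose observation sequence extends the local state l."""
        k = len(local_state)
        return [run for run in RUNS if run[1][:k] == local_state]

    def conditioned_ranks(local_state):
        """Condition the prior on [l]: kappa(r | [l]) = kappa(r) - min kappa over [l]."""
        compatible = runs_through(local_state)
        if not compatible:
            return {}        # [l] is empty, so conditioning is undefined
        base = min(rank for _, _, rank in compatible)
        return {name: rank - base for name, _, rank in compatible}

    # After observing p, runs r1 and r2 remain possible, with r1 most plausible.
    print(conditioned_ranks(("p",)))       # {'r1': 0, 'r2': 1}
    # After also observing q, only r1 remains.
    print(conditioned_ranks(("p", "q")))   # {'r1': 0}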

Just as in the case of probability (see Exercise 6.5), in an (interpreted) SDP plausibility system, the agent's plausibility space at each point satisfies the SDP property. Moreover, if the prior satisfies Pl4 and Pl5, so does the plausibility space at each point (Exercise 9.5). It follows from Proposition 8.2.1(b) that the agent's beliefs depend only on the agent's local state. That is, at any two points where the agent has the same local state, she has the same beliefs. I use the notation (ℐ, s_a) ⊨ Bφ as shorthand for (ℐ, r, m) ⊨ Bφ for some (and hence for all) (r, m) such that r_a(m) = s_a. The agent's belief set at s_a is the set of formulas that the agent believes at s_a, that is,

Bel(ℐ, s_a) = {φ ∈ ℒ_e : (ℐ, s_a) ⊨ Bφ}.

Since the agent's state is a sequence of observations, her state after observing φ is simply s_a · φ, where · is the append operation. Thus, Bel(ℐ, s_a · φ) is the belief set after observing φ. I adopt the convention that if there is no point where the agent has local state s_a in system ℐ, then Bel(ℐ, s_a) consists of all the propositional formulas over Φ_e. With these definitions, the agent's belief set before and after observing φ, that is, Bel(ℐ, s_a) and Bel(ℐ, s_a · φ), can be compared. Thus, a BCS can conveniently express (properties of) belief change in terms of formulas: the agent's state encodes observations, which are formulas in the language, and there are formulas that talk about what the agent believes and how her beliefs change over time.
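Continuing the ranked-runs illustration (again a simplification of my own, not the book's definition), the following sketch computes a toy belief set at a local state as the set of literals that hold in all the most plausible runs consistent with that state, and then compares the belief sets before and after an observation. The real definition quantifies over all formulas in ℒ_e; attaching a fixed set of literals to each run just keeps the example short.

    # (name, observation sequence, prior rank, literals true initially in the run)
    RUNS = [
        ("r1", ("p", "q"), 0, {"p", "q"}),
        ("r2", ("p", "not q"), 1, {"p", "not q"}),
        ("r3", ("not p", "q"), 2, {"not p", "q"}),
    ]

    def most_plausible_runs(local_state):
        """The most plausible runs passing through the local state."""
        k = len(local_state)
        compatible = [run for run in RUNS if run[1][:k] == local_state]
        if not compatible:
            return []
        best = min(rank for _, _, rank, _ in compatible)
        return [run for run in compatible if run[2] == best]

    def belief_set(local_state):
        """Toy Bel: literals holding in every most plausible compatible run."""
        best = most_plausible_runs(local_state)
        if not best:
            return None      # unreachable local state; convention not modeled here
        return set.intersection(*[lits for _, _, _, lits in best])

    before = belief_set(("p",))           # {'p', 'q'}: q is believed by default
    after = belief_set(("p", "not q"))    # {'p', 'not q'}: belief in q is revised
    print(before, after)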

There is one other requirement that is standard in many approaches to belief change considered in the literature: observations are "accepted," so that after the agent observes φ, she believes φ. This requirement is enforced by the next assumption, BCS3, which says that observations are reliable: the agent observes φ only if the current state of the environment satisfies φ.

BCS3. (ℐ, r, m) ⊨ o(r, m) for all runs r and times m ≥ 1.

Note that BCS3 implies that the agent never observes false. Moreover, it implies that after observing φ, the agent knows that φ is true. A system that satisfies BCS1–3 is said to be a reliable BCS.
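As a further illustration (mine, with hypothetical runs and with observations restricted to literals), the following sketch checks BCS3 directly on a toy system: every observation must be true of the environment state at the time it is made.

    def holds(literal, env):
        """Evaluate a literal ('p' or 'not p') against an environment state."""
        if literal.startswith("not "):
            return not env.get(literal[4:], False)
        return env.get(literal, False)

    def satisfies_bcs3(run):
        """run maps time m to (environment state, observations o(r,1..m))."""
        horizon = max(run)
        for m in range(1, horizon + 1):
            env, local = run[m]
            if not holds(local[m - 1], env):   # o(r, m) must hold at (r, m)
                return False
        return True

    reliable_run = {
        0: ({"p": True, "q": True}, ()),
        1: ({"p": True, "q": True}, ("p",)),
        2: ({"p": True, "q": False}, ("p", "not q")),
    }
    unreliable_run = {
        0: ({"p": False}, ()),
        1: ({"p": False}, ("p",)),   # observes p although p is false
    }

    print(satisfies_bcs3(reliable_run))    # True
    print(satisfies_bcs3(unreliable_run))  # False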

It is easy to check that ℐ_1 and ℐ_2 are both reliable BCSs.



