3.1 Updating Knowledge

I start by examining perhaps the simplest setting, where an agent's uncertainty is captured by a set W of possible worlds, with no further structure. I assume that the agent obtains the information that the actual world is in some subset U of W. (I do not consider more complicated types of information until Sections 3.10 and 3.11.) The obvious thing to do in that case is to take the set of possible worlds to be W ∩ U. For example, when tossing a die, an agent might consider any one of the six outcomes to be possible. However, if she learns that the die landed on an even number, then she would consider possible only the three outcomes corresponding to 2, 4, and 6.
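For concreteness, the die example can be written as a one-line set intersection. The sketch below is my own encoding (worlds as the integers 1 through 6), not something from the text.

```python
# A minimal sketch (my own, not from the text) of updating by intersection.

possible_worlds = {1, 2, 3, 4, 5, 6}   # initially, any face of the die is possible

def update(worlds, information):
    """Update a set of possible worlds with information U (itself a set of worlds)."""
    return worlds & information

even = {2, 4, 6}                                   # "the die landed on an even number"
possible_worlds = update(possible_worlds, even)
print(possible_worlds)                             # {2, 4, 6}
```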

Even in this simple setting, three implicit assumptions are worth bringing out. The first is that this notion seems to require that the agent does not forget. To see this, it is helpful to have a concrete model.

Example 3.1.1


Suppose that a world describes which of 100 people have a certain disease. A world can be characterized by a tuple of 100 0s and 1s, where the ith component is 1 iff individual i has the disease. There are 2^100 possible worlds. Take the "agent" in question to be a computer system that initially has no information (and thus considers all 2^100 worlds possible), then receives information that is assumed to be true about which world is the actual world. This information comes in the form of statements like "individual i is sick or individual j is healthy" or "at least seven people have the disease." Each such statement can be identified with a set of possible worlds. For example, the statement "at least seven people have the disease" can be identified with the set of tuples with at least seven 1s. Thus, for simplicity, assume that the agent is given information saying "the actual world is in set U", for various sets U.

Suppose at some point the agent has been told that the actual world is in U1, …, Un. The agent should then consider possible precisely the worlds in U1 ∩ … ∩ Un. If it is then told V, it considers possible U1 ∩ … ∩ Un ∩ V. This seems to justify the idea of capturing updating by U as intersecting the current set of possible worlds with U.

But all is not so simple. How does the agent keep track of the worlds it considers possible? It certainly will not explicitly list the 2^100 possible worlds it initially considers possible! Even though storage is getting cheaper, this is well beyond the capability of any imaginable system. What seems much more reasonable is that it uses an implicit description. That is, it keeps track of what it has been told and takes the set of possible worlds to be the ones consistent with what it has been told. But now suppose that it has been told n things, say U1, …, Un. In this case, the agent may not be able to keep all of U1, …, Un in its memory after learning some new fact V. How should updating work in this case? That depends on the details of memory management. It is not so clear that intersection is appropriate here if forgetting is allowed.
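One way to picture such an implicit description is to store the statements themselves as predicates and to test candidate worlds only on demand. The sketch below is illustrative rather than the author's; it assumes a population of 10 rather than 100 so that enumeration stays feasible, and all names are mine.

```python
from itertools import product

# A sketch of the "implicit description": keep the statements the agent has been
# told as predicates on worlds, instead of enumerating the worlds themselves.
# The population size (10 rather than 100) and all names are illustrative.

told = []                                   # statements received so far

def tell(statement):
    told.append(statement)

def is_possible(world):
    # A world (a tuple of 0s and 1s) remains possible iff it satisfies everything told so far.
    return all(statement(world) for statement in told)

tell(lambda w: w[0] == 1 or w[1] == 0)      # "individual 1 is sick or individual 2 is healthy"
tell(lambda w: sum(w) >= 7)                 # "at least seven people have the disease"

# The set of possible worlds is never stored explicitly; it is enumerated only on demand.
possible = [w for w in product((0, 1), repeat=10) if is_possible(w)]
print(len(possible))
```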


The second assumption is perhaps more obvious but nonetheless worth stressing. In the example, I have implicitly assumed that what the agent is told is true (i.e., that the actual world is in U if the agent is told U) and that it initially considers the actual world possible. From this it follows that if U0 is the system's initial set of possible worlds and the system is told U, then U0 ∩ U ≠ ∅ (since the actual world is in U0 ∩ U).

It is not even clear how to interpret a situation where the system's set of possible worlds is empty. If the agent can be told inconsistent information, then clearly intersection is simply not an appropriate way of updating. Nevertheless, it seems reasonable to try to model a situation where an agent can believe that the actual world is in U and later discover that it is not. This topic is discussed in more detail in Chapter 9. For now, I just assume that the information given is such that the sets that arise are always nonempty.

The third assumption is that the way an agent obtains the new information does not itself give the agent information. An agent often obtains new information by observing an event. For example, he may learn that it is sunny outdoors by looking out a window. However, making an observation may give more information than just the fact that what is observed is true. If this is not taken into account, intersecting may give an inappropriate answer. The following example should help to clarify this point:

Example 3.1.2


Suppose that Alice is about to look for a book in a room. The book may or may not be in the room and the light may or may not be on in the room. Thus, according to this naive description, there are four possible worlds. Suppose that, initially, Bob considers all four worlds possible. Assume for simplicity that if the book is in the room, it is on the table, so that Alice will certainly see it if the light is on. When Bob is told that Alice saw the book in the room, he clearly considers only one world possible: the one where the book is in the room and the light is on. This is obviously not the result of intersecting the four worlds he initially considered possible with the two worlds where the book is in the room. The fact that Alice saw the book tells Bob not only that the book is in the room, but also that the light is on. In this case, there is a big difference between Bob being told that Alice saw the book and Bob being told that the book is in the room (perhaps Alice remembered leaving it there).

If W is augmented to include a relative likelihood on worlds, then even the relative likelihood of worlds could change if the observation gives more information than just what is observed. For example, suppose that Bob initially thinks that the light in the room is more likely to be off than on. Further suppose that there may be some light from outdoors filtering through the curtain, so that it is possible for Alice to see the book in the room even if the light is off. After hearing that Alice saw the book, Bob considers only the two worlds where a book is in the room to be possible, but now considers it more likely that the light is on. Bob's relative ordering of the worlds has changed.
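To make the contrast concrete, here is a small sketch (my own encoding, not the author's) comparing naive intersection with the proposition "the book is in the room" against updating on the observation event "Alice saw the book," which in this simple version requires the light to be on.

```python
from itertools import product

# Worlds are pairs (book_in_room, light_on); this encoding is mine, not the author's.
worlds = set(product((True, False), repeat=2))            # Bob's four initial worlds

book_in_room = {w for w in worlds if w[0]}                # the proposition "the book is in the room"
alice_saw_it = {w for w in worlds if w[0] and w[1]}       # the event "Alice saw the book":
                                                          # she sees it only if it is there and the light is on

print(worlds & book_in_room)   # two worlds survive: naive intersection with what was observed
print(worlds & alice_saw_it)   # one world survives: updating on the observation itself
```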


The situation gets even more complicated if there are many agents, because now the model needs to take into account what other agents learn when one agent learns U. I defer further discussion of these issues to Chapter 6, which provides a model for many agents in which it is relatively straightforward to make precise what it means for an observation to give no more information than the fact that it is true (see Section 6.8).



