Chapter 12: Final Words


Overview

Last words are for people who haven't said anything in life.

—Karl Marx

"Reasoning about uncertainty" is a vast topic. I have scratched only the surface in this book. My approach has been somewhat different from that of most books on the subject. Given that, let me summarize what I believe are the key points I have raised.

  • Probability is not the only way of representing uncertainty. There are a number of alternatives, each with its own advantages and disadvantages.

  • Updating by conditioning makes sense for all of these representations, but you have to be careful not to apply conditioning blindly (see the sketch following this list).

  • Plausibility measures provide a representation of uncertainty general enough to abstract the key requirements a representation must satisfy in order to obtain properties of interest (such as beliefs being closed under conjunction).

  • There are a number of useful tools that make for better representation of situations, including random variables, Bayesian networks, Markov chains, and runs and systems (global states). These tools focus on different issues and can often be combined.

  • Thinking in terms of protocols helps clarify a number of subtleties, and allows for a more accurate representation of uncertainty.

  • It is important to distinguish degrees of belief from statistical information and to connect them.
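To make the warning about conditioning concrete, here is a minimal sketch in Python (illustrative only; the `condition` helper and the coin-toss worlds are invented for this example) of updating a discrete probability measure over a finite set of possible worlds:

```python
# A minimal sketch: possible worlds as keys of a dict mapping
# each world to its probability, updated by conditioning.

def condition(prob, event):
    """Condition the measure `prob` on `event`, a predicate over worlds."""
    total = sum(p for w, p in prob.items() if event(w))
    if total == 0:
        # Conditioning is undefined on a probability-0 event -- one
        # reason it cannot be applied blindly.
        raise ValueError("cannot condition on a probability-0 event")
    return {w: (p / total if event(w) else 0.0) for w, p in prob.items()}

# Two fair coin tosses; condition on "at least one heads".
worlds = {("H", "H"): 0.25, ("H", "T"): 0.25,
          ("T", "H"): 0.25, ("T", "T"): 0.25}
posterior = condition(worlds, lambda w: "H" in w)
print(posterior[("H", "H")])  # 1/3 -- not the 1/2 naive intuition suggests
```

Even this toy example shows why care is needed: the event conditioned on must have positive probability, and the answer (1/3 rather than 1/2) depends on having chosen the right space of worlds in the first place.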

A number of issues that I have touched on in the book deserve more attention. Of course, many important technical problems remain unsolved, but I focus here on the more conceptual issues (which, in my opinion, are often the critical ones for many real-world problems).

The problem of going from statistical information to degrees of belief can be viewed as part of a larger problem of learning. Agents typically hope to build a reasonable model of the world (or, at least, of the relevant parts of the world) so that they can use the model to make better decisions or to perform more appropriate actions. Clearly, representing uncertainty is a critical part of the learning problem. How can uncertainty best be represented so as to facilitate learning? The standard answer from probability theory is that it should be represented as a set of possible worlds with a probability measure on them, and that learning should be captured by conditioning. However, that naive approach often fails, for some of the reasons already discussed in this book.
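To illustrate the standard answer, here is one way learning-as-conditioning might be realized for a coin of unknown bias, under simplifying assumptions (finitely many candidate worlds, independent tosses); the names are illustrative, not a prescription from the text:

```python
# Learning as conditioning: the possible worlds are candidate biases
# of a coin; each observed toss updates the degree of belief in each
# world by Bayes' rule.

biases = [i / 10 for i in range(11)]           # candidate worlds: 0.0 .. 1.0
belief = {b: 1 / len(biases) for b in biases}  # uniform prior

def update(prior, heads):
    """Condition on one toss (heads=True/False) and renormalize."""
    post = {b: p * (b if heads else 1 - b) for b, p in prior.items()}
    z = sum(post.values())
    return {b: p / z for b, p in post.items()}

for toss in [True, True, False, True]:         # observed statistical data
    belief = update(belief, toss)
print(max(belief, key=belief.get))             # 0.7: the best-supported bias
```

The sketch works only because the set of possible worlds was fixed in advance and every observation had positive prior probability; the paragraphs below take up what happens when neither assumption holds.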

Even assuming that the agent is willing, at least in principle, to use probability, doing so is not always straightforward. For one thing, as I mentioned in Section 2.1, choosing the "appropriate" set of possible worlds can be nontrivial. In fact, the situation is worse than that: in large, complex domains, it is far from clear what the appropriate set of possible worlds is. Imagine an agent trying to decide between selling a house and renting it out. In considering the possibility of renting, the agent tries to anticipate all the things that might go wrong. Some of these are foreseeable; for example, the tenant might not pay the rent. Not surprisingly, a standard rental agreement has a clause that deals with this; indeed, the art and skill of writing a contract lies in covering as many contingencies as possible. However, there are almost always things that are not foreseeable, and these are often the things that cause the most trouble (and lead to lawsuits, at least in the United States). As far as reasoning about uncertainty goes, how can the agent construct an appropriate possible-worlds model when he does not even know what all the possibilities are? Of course, it is always possible to add a catch-all world: "something unexpected happens." But this is probably not good enough when it comes to making decisions, in the spirit of Section 5.4. What is the utility (i.e., the loss) associated with "something unexpected happens"? How should a probability measure be updated when something completely unexpected is observed? More generally, how should uncertainty be represented when part of the uncertainty is about the set of possible worlds itself?

Even if the set of possible worlds is clear, there is the computational problem of listing the worlds and characterizing the probability measure. Although I have discussed some techniques that alleviate this problem (e.g., using Bayesian networks), they are not always sufficient.
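To see roughly why Bayesian networks help, compare parameter counts, assuming binary variables and a network in which each variable has at most k parents (a standard counting argument; the function names below are mine):

```python
# The full joint distribution over n binary variables needs 2**n - 1
# parameters, while a Bayesian network in which each variable has at
# most k parents needs at most n * 2**k conditional probabilities.

def joint_parameters(n):
    return 2**n - 1

def network_parameters(n, k):
    return n * 2**k

print(joint_parameters(20))       # 1048575: infeasible to list directly
print(network_parameters(20, 3))  # 160: manageable, if the structure is sparse
```

The savings depend entirely on the domain having sparse structure, which is exactly why such techniques alleviate the problem without solving it in general.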

One reason for wanting to consider representations of uncertainty other than probability is the observation that, although it is well known that people are not very good at dealing with probability, for the most part we manage reasonably well. We typically do not bump into walls, we typically do not get run over crossing the street, and our decisions, while certainly not always optimal, are typically "good enough" to get by. Perhaps probability is simply not needed in many mundane situations. Going out on a limb, I conjecture that many situations are "robust," in that almost any "reasonable" representation of uncertainty will produce reasonable results. If this is true, then the focus should be on (a) characterizing these robust situations and (b) finding representations of uncertainty that are easy to manipulate and can be easily used in them.

Although I do not know how to solve the problems I have raised, I believe that progress will soon be made on all of them, not only on the theoretical side but also in building systems that use sophisticated methods of reasoning about uncertainty to tackle large, complex real-world problems. It is an exciting time to be working on reasoning about uncertainty.



