Hack 71. Think About Frequencies Rather than Probabilities

Statistics involving probabilities are particularly hard to think about correctly. Fortunately, you can make them easier by presenting the same information in a way that meshes with our evolved capacity to reason about how often things happen.

Mark Twain once said, "People commonly use statistics like a drunk uses a lamppost: for support rather than for illumination."1 Things haven't changed. It's strange, really, given how little people trust them, that statistics get used so much.

Our ability to think about probabilities evolved to keep us safe from rare events that would be pretty serious if they did happen (like getting eaten) and to help us make near-correct estimates about things that aren't quite so dire and at which we get multiple attempts (like estimating the chances of finding food in a particular part of the valley). So it's not surprising that, when it comes to formal reasoning about single-case probabilities, our evolved ability to estimate likelihood tends to fail us.

One example is that we overestimate low-frequency events that are easily noticed. Just ask someone if he gets more scared traveling in a car or by airplane. Flying is about the safest form of transport there is, whether you calculate it by miles flown or trips made. Driving is pretty risky in comparison, but most people would say that flying feels like the more dangerous of the two.

Another thing we have a hard time doing is accounting for the basic frequency at which an event occurs, quite aside from the specific circumstances of its occurrence on the current occasion. Let me give an example of this in action . . .

7.3.1. In Action

This is a famous demonstration of how hard we find it to work out probabilities. When it was published in Parade magazine in 1990, the magazine got around 10,000 letters in response, 92% of which said that their columnist, Marilyn vos Savant, had reached the wrong conclusion.2 Despite the weight of correspondence, vos Savant had reached the correct conclusion, and here's the confusing problem she put forward, based roughly on the workings of the old quiz show Let's Make a Deal, presented by Monty Hall.

Imagine you're a participant on a game show, hoping to win the big prize. The final hoop to jump through is to select the right door from a choice of three. Behind each door is either a prize (one of the three doors) or a booby prize (two of the doors). In this case, the booby prizes are goats.

You choose a door.

To raise the tension, the game-show host, Monty, looks behind the other doors and throws one open (not yours) to reveal a goat. He then gives you the choice of sticking with your choice or switching to the remaining unopened door.

Two doors are left. One must have a goat behind it, one must have a prize. Should you stick, or should you switch? Or doesn't it matter?

This is not a trick question, like some lateral thinking puzzles. It's the statistics that are tricky, not the wording.


Most people get this wrong, even those with formal mathematics training. Many of the thousands who wrote to Marilyn vos Savant at Parade were university professors who were convinced that she had got it wrong and insisted she was misleading the nation. Even the famous Paul Erdos, years before the Parade magazine incident, had got the answer wrong, and he was one of the most talented mathematicians of the century (and inspiration for Erdos numbers, which you may have heard of3).

The answer is that you should switch: you are twice as likely to win the prize if you switch doors than if you stick with your original door. Don't worry if you can't see why this is the right answer; the problem is famous precisely because it is so hard to get your head around. If you did get this right, try telling it to someone else and then explaining why switching is the right answer. You'll soon see just how difficult the concepts are to get across.

7.3.2. How It Works

The chance you got it right on the first guess is 1 in 3. Since by the time it comes to sticking or switching, the big prize (often a car) must be behind one of the two remaining doors, there must be a 2 in 3 chance that the car is behind the other door (i.e., a 2 in 3 chance your first guess was wrong).

Our intuition seems compelled to ignore the prior probabilities and the effect that the game show host's actions have. Instead, we look at the situation as it is when we come to make the choice. Two doors, one prize. 50-50 chance, right? Wrong. The host's actions make switching a better bet. By throwing away one dud door from the two you didn't choose initially, he's essentially making it so that switching is like choosing between two doors and you win if the prize is behind either of them.

Another way to make the switching answer seem intuitive is to imagine the situation with 1000 doors, 999 goats, and still just one prize. You choose a door (1 in 1000 chance it's the right door) and your host opens all the doors you didn't choose, which have goats behind them (998 goats). Stick or switch? Obviously you have a 999 in 1000 chance of winning if you switch, even though as you make the choice there are two doors, one prize, and one goat, as before. This variant highlights one of the key distractions in the original problem: the host knows where the prize is and acts accordingly to eliminate dud doors. You choose without knowing where the prize is, but given that the host acts knowing where the prize is, your decision to stick or switch should take that into account.
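The stick-and-switch odds are easy to check empirically. Here is a minimal simulation sketch in Python (the function name is mine, not from the text); it uses the fact that, once the host has opened every other goat door, switching wins exactly when your first pick was wrong:

```python
import random

def monty_hall(switch, doors=3, trials=100_000):
    """Fraction of games won when always sticking or always switching."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(doors)  # where the prize really is
        pick = random.randrange(doors)   # contestant's first guess
        # The host, knowing where the prize is, opens every other goat
        # door, leaving exactly one unopened door to switch to. So
        # switching wins precisely when the first pick was wrong.
        wins += (pick != prize) if switch else (pick == prize)
    return wins / trials

print(monty_hall(switch=False))  # close to 1/3
print(monty_hall(switch=True))   # close to 2/3
```

Running it with `doors=1000` reproduces the 999-in-1000 variant described above.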

Part of the problem is that we are used to thinking about probabilities as things attached to objects or events in simple one-to-one correspondence. But probabilities are simply statements about what can be known about uncertain situations. The probabilities themselves can be affected by factors that don't actually affect the objects or events they label (like base rates and, in this case, the game show host's actions).

Evolutionary psychologists Leda Cosmides and John Tooby4 argue that we have evolved to deal with frequency information when making probability judgments, not to do abstract probability calculations. Probabilities are not available directly to perception, whereas how often something happens is, so frequencies are the format in which the information is naturally present in the environment, and the format our brains evolved to use. Whether something occurs or not can be easily seen (is it raining or is it not raining, to take an example), and figuring out the frequency of this event is a simple matter of addition and comparison: comparing the number of rainy days against the number of days in spring would automatically give you a good idea whether this current day in spring is likely to be rainy or not. One-off probabilities aren't like this; they are a cultural invention, and like a lot of cultural inventions, we still have difficulty dealing with them.

The idea that we are evolved to make frequency judgments, not probability calculations, is supported by evidence that we use frequencies as inputs and outputs for our likelihood estimates. We automatically notice and remember the frequency of events (input) and have subjective feelings of confidence that an event will or will not occur (output).

If you rephrase the Monty Hall problem in terms of frequencies, rather than in terms of a one-off decision, people are more likely to get it right.5 Here's a short version of the same problem, but focusing explicitly on frequencies rather than one-off probabilities. Is it easier to grasp intuitively?

Take the same routine as before: three doors, one prize, and two duds. But this time consider two different ways of playing the game, represented here by two players, Tom and Helen. Tom always chooses one door and sticks with it. Helen is assigned the other two doors. Monty always lets out a goat from behind one of these two doors, and Helen gets the prize if it is behind the remaining door. They play the game, say, 30 times. How often is it likely Tom will win the prize? How often is it likely Helen will win the prize? Given this, which is the better strategy, Tom's (stick) or Helen's (switch)?
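The frequency framing can also be settled by exhaustive counting rather than probability algebra. A small sketch (in Python; the variable names are my own): there are only nine equally likely combinations of prize door and first pick, so counting wins across them gives the expected frequencies directly.

```python
from itertools import product

# Enumerate all nine equally likely (prize door, first pick) pairs.
outcomes = list(product(range(3), repeat=2))

stick_wins = sum(pick == prize for prize, pick in outcomes)   # Tom's strategy
switch_wins = sum(pick != prize for prize, pick in outcomes)  # Helen's strategy

print(stick_wins, "of", len(outcomes))   # 3 of 9: about 10 wins in 30 games
print(switch_wins, "of", len(outcomes))  # 6 of 9: about 20 wins in 30 games
```

Over 30 games, Tom can expect around 10 wins and Helen around 20, which is the frequency version of the 1-in-3 versus 2-in-3 answer.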

7.3.3. In Real Life

An example of an everyday choice that is affected by our problems with probabilities is thinking about weather forecasts. It can be simultaneously true that the weather forecasts are highly accurate and that you shouldn't believe them. The following quote is from a great article by Neville Nicholls about errors and biases in our commonsense reasoning and how they affect the way we think about weather prediction:6

The accuracy of the United Kingdom 24-hour rain forecast is 83%. The climatological probability of rain on the hourly timescale appropriate for walks is 0.08 (this is the base rate). Given these values, the probability of rain, given a forecast of rain, is 0.30. The probability of no rain, given a forecast of rain, is 0.70. So, it is more likely that you would enjoy your walk without getting wet, even if the forecast was for rain tomorrow.
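Nicholls's figures follow from Bayes' theorem. Here is a quick arithmetic check, assuming (as the quote implies) that the 83% accuracy applies symmetrically, i.e., the forecast is right 83% of the time whether it predicts rain or no rain:

```python
# Base rate of rain on the hourly timescale, and forecast accuracy:
# P(forecast rain | rain) = P(forecast dry | dry) = 0.83 (assumed symmetric).
p_rain = 0.08
accuracy = 0.83

# Total probability of a rain forecast: correct hits plus false alarms.
p_forecast_rain = accuracy * p_rain + (1 - accuracy) * (1 - p_rain)

# Bayes' theorem: probability it actually rains, given a rain forecast.
p_rain_given_forecast = accuracy * p_rain / p_forecast_rain

print(round(p_rain_given_forecast, 2))      # 0.3
print(round(1 - p_rain_given_forecast, 2))  # 0.7
```

The false alarms dominate because dry hours are so much more common than rainy ones, which is exactly the base-rate effect the passage describes.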

It's a true statement but not easy to understand, because we don't find probability calculations intuitive. The trick is to avoid them. Often probability statistics can be equally well-expressed using frequencies, and they will be better understood this way. We know the probabilities concerning base rates will be neglected, so you need to be extra careful if the message you are trying to convey relies on this information. It also helps to avoid conditional probabilities (things like "the probability of X given Y") and relative risks ("your risk of X goes down by Y% if you do Z"). People just don't find it easy to think about information given in this way.7

7.3.4. End Notes

  1. Or at least it's commonly attributed to Mark Twain. It's one of those free-floating quotations.

  2. vos Savant, M. (1997). The Power of Logical Thinking. New York: St Martin's Press.

  3. Paul Erdos published a colossal number of papers in his lifetime by collaborating with mathematicians around the world. If you published a paper with Erdos, your Erdos number is 1; if you published with someone who published with Erdos, it is 2. The mathematics of these indices of relationship can be quite interesting. See "The Erdos Number Project," http://www.oakland.edu/enp.

  4. Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58(1), 1-73.

  5. Krauss, S., & Wang, X. T. (2003). The psychology of the Monty Hall problem: Discovering psychological mechanisms for solving a tenacious brain teaser. Journal of Experimental Psychology: General, 132(1), 3-22.

  6. Nicholls, N. (1999). Cognitive illusions, heuristics, and climate prediction. Bulletin of the American Meteorological Society, 80(7), 1385-1397 (http://ams.allenpress.com/pdfserv/i1520-0477-080-07-1385.pdf).

  7. Gigerenzer, G., & Edwards, A. (2003). Simple tools for understanding risks: From innumeracy to insight. British Medical Journal, 327, 741-744 (http://bmj.bmjjournals.com/cgi/reprint/327/7417/741). This article is great on ways you can use frequency information as an alternative to help people understand probabilities.

7.3.5. See Also

  • A detailed discussion of the psychology of the Monty Hall dilemma, but one that doesn't focus on the base-rate interpretation highlighted here is given by Burns, B. D., & Wieth, M. (in press). The collider principle in causal reasoning: Why the Monty Hall dilemma is so hard. Journal of Experimental Psychology: General. More discussion of the Monty Hall dilemma and a simulation that lets you compare the success of the stick and switch strategies is at http://www.cut-the-knot.org/hall.shtml.



    Mind Hacks. Tips and Tools for Using Your Brain
    ISBN: 596007795
    Year: 2004
    Pages: 159