It is natural for scientific thinkers to try to apply rational methods to assess risk in everyday life. Should you get a flu shot, for example, if you’re under 40 and in good health? Should you jump out of an airplane (with a parachute)? The lofty goal of applying reason to risk assessment, however, is thwarted by two things: First, in the absence of certainty we typically make decisions based on a combination of gut-level instinct and expediency, and very often that seems to work fine; and second, we are constantly assailed by multiple, ever-changing random events. How Randomness Rules Our Lives was, in fact, the subtitle of a very instructive best-seller on randomness by Leonard Mlodinow. This constant buffeting by random forces is vividly illustrated by this delightful excerpt, paraphrased from a much longer 1964 children’s story called Fortunately by Remy Charlip, that anchors our first puzzle problem.
A man went on an airplane ride.
Unfortunately, he fell out.
Fortunately, he had a parachute on.
Unfortunately, the parachute did not open.
Fortunately, there was a haystack below him, directly in the path of his fall.
Unfortunately, there was a pitchfork sticking out of the top of the haystack, directly in the path of his fall.
Fortunately, he missed the pitchfork.
Unfortunately, he missed the haystack.
There have indeed been alleged instances of people surviving falls from planes by landing on haystacks, or even by falling into trees or thick shrubbery, as a quick online search will reveal. So the alternating screams inside this imaginary man’s head — “I’m dead!”/“I’m saved!” — are not conclusive until the sad finale. (Though our story seems to end tragically, in the original the protagonist survives with many more abrupt reversals of fortune!) Can principled methods of risk estimation be applied here? Given the available information, estimate his odds of survival at the end of each line above.
This story dramatically illustrates two important aspects of making probabilistic judgments: First, probabilities can change appreciably, even wildly, as new knowledge becomes available, and second, no matter how much you stack the odds in your favor, the final outcome crystallizes into a single result: life or death, yes or no. In rare instances, this may not be the favorable result that you expected. As with the collapse of the wave function in quantum mechanics, illustrated by Erwin Schrödinger’s famous thought experiment of a cat in a box that could be alive or dead, the probabilities cease to be meaningful after the event occurs. What, then, is the value of such calculations? Let’s examine this issue more closely.
Perhaps the best method of dealing rationally with the randomness and risk in our daily lives is Bayesian thinking, named after the 18th-century statistician Thomas Bayes. Bayesian thinking rests on a few important tenets. First, probability is interpreted subjectively as “credence” — a reasonable quantification of personal belief about the possibility of an outcome; second, when reliable frequency data are available, this credence should match the objective, frequency-based probability; third, all relevant objective prior knowledge you have must be brought to bear on your initial estimate; and finally, you must update your probabilities in light of new information. If you always rely on the most reliable and objective “data-driven” probability estimates, keeping track of possible uncertainties, the final probability number you arrive at will be the best possible.
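That final tenet — updating your credence in light of new evidence — is just Bayes’ rule, and it can be sketched in a few lines of code. (This is an illustrative sketch, not part of the column’s puzzles; the numbers in the example are arbitrary.)

```python
# A minimal sketch of Bayesian updating.
# prior: your initial credence that a hypothesis H is true.
# p_evidence_given_h:     how likely the new evidence is if H is true.
# p_evidence_given_not_h: how likely the new evidence is if H is false.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior credence in H after seeing the evidence."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Example: start at 50/50, then observe evidence that is three times
# as likely if H is true (0.75) as if it is false (0.25).
print(bayes_update(0.5, 0.75, 0.25))  # 0.75
```

Each new piece of evidence simply feeds the previous posterior back in as the next prior — which is why keeping track of your probabilities pays off over many decisions.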
When confronted with a real-life medical decision over whether to treat his atrial fibrillation with a somewhat risky medical procedure that wasn’t guaranteed to succeed, the eminent mathematician Timothy Gowers resorted to a detailed risk-benefit calculation. Fortunately, it turned out well for Gowers, who is also a co-founder of the Polymath project. Unlike the stakes in Gowers’ dilemma, however, most of the risks we confront are small, and the costs are not great. But the following problem illustrates the long-term benefit of taking a Bayesian approach.
The death rate on commercial airlines is about 0.2 deaths per 10 billion flight-miles. For driving, the rate is 150 deaths per 10 billion vehicle-miles. While this rate is about 750 times higher than for air travel, we still take long road trips because the absolute risks are small. But let us pursue a thought experiment using two hypothetical and admittedly unrealistic assumptions — first, that your expected life span is 1 million years (and you enjoy every year of it!), and second, that the above risks remain the same during those million years. Now imagine that every year you could either fly 10,000 miles or cover that distance by car over multiple road trips. The time spent is not a concern at all — after all, you have a million years to live! Under these conditions, by how many years, and by what proportion, would your life be shortened if you drove every time instead of flying? How would your answer differ for a more normal lifespan of 100 years?
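Before tackling the million-year question — which is left to you — it may help to turn the quoted rates into annual probabilities. Here is a short sketch using only the figures given above:

```python
# Annual fatality risk from 10,000 miles of travel, using the rates above.
MILES_PER_YEAR = 10_000
FLY_DEATHS_PER_MILE = 0.2 / 10e9    # 0.2 deaths per 10 billion flight-miles
DRIVE_DEATHS_PER_MILE = 150 / 10e9  # 150 deaths per 10 billion vehicle-miles

p_fly_per_year = MILES_PER_YEAR * FLY_DEATHS_PER_MILE      # 2e-07
p_drive_per_year = MILES_PER_YEAR * DRIVE_DEATHS_PER_MILE  # 1.5e-04

# Driving is 750 times riskier per mile, just as the text says:
print(p_drive_per_year / p_fly_per_year)  # 750.0
```

Both annual risks look tiny — which is exactly why the puzzle asks what happens when they compound over a million years.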
What this demonstrates is that even if probability calculations become irrelevant after the event, prospectively they still give you the best chances over the long term. We don’t live a million years, but over the course of our lives we make tens of thousands of decisions about where and how to travel, what to eat, whether to exercise and so on. Even though the probable impact of each of these decisions on our longevity is small, their combined effect can be substantial. At the very least, for major decisions, such as which procedure to undergo for a serious medical condition, a detailed consideration beyond gut instincts is likely warranted.
And then, of course, there are well-defined situations where our gut instincts are demonstrably wrong. This is a staple of standard textbooks on Bayesian methods. One example is the pretty-good-but-not-perfect test procedure, which leads to our third question.
Here are two similar scenarios in which you have to make probability judgments. Before you make an exact calculation, hazard an intuitive guess and jot it down.
Variation A: A certain town has two ethnic groups, the Ones and the Twos. Ones make up 80 percent of the population. A hospital clinic conducts a standard, unbiased screening test for a rare disease that is equally common in both groups. It results in the collection of 100 blood samples, and sure enough, 80 of the samples come from Ones. On more rigorous testing, just one of the 100 samples is found to be positive for the disease. A researcher who is not privy to the ethnicity data because of HIPAA laws runs a test on this sample, which determines that it comes from a Two. However, this ethnicity-determining test is known to be only 75 percent accurate. What is the probability that the sample actually came from a Two?
Variation B: In this variation, Ones and Twos both make up 50 percent of the population, but Ones are more likely to have the rare disease. The same screening procedure as above collects 100 blood samples, again yielding 80 from Ones and 20 from Twos. The rest of the problem is exactly the same. Now what is the probability that the diseased sample actually came from a Two?
In which of the two cases was your intuition more accurate?
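If you’d like to check your intuition against an exact calculation, Bayes’ rule is all you need. Here is a sketch using textbook numbers deliberately different from the puzzle’s, so as not to spoil it: a symmetric test that is 90 percent accurate flags a trait whose base rate is only 10 percent.

```python
def posterior(base_rate, accuracy):
    """Probability the test's claim is true, for a symmetric test that
    is right with probability `accuracy` whichever way it reports."""
    true_positive = base_rate * accuracy
    false_positive = (1 - base_rate) * (1 - accuracy)
    return true_positive / (true_positive + false_positive)

# A 90%-accurate test flags a trait with a 10% base rate:
print(posterior(0.10, 0.90))  # 0.5 -- a coin flip, despite "90% accurate"
```

The low base rate drags the answer down to 50-50 even for a quite accurate test — the classic base-rate effect that trips up intuition. Plug in the puzzle’s own numbers to check your guesses for Variations A and B.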
We know our intuitions often fail us in estimating probabilities, even though they may feel right at the time. They can even trip up experts, as evidenced by the brouhaha about the Monty Hall problem. As the dean of puzzle writers, Martin Gardner, once said, “In no other branch of mathematics is it so easy for experts to blunder as in probability theory.” Our third puzzle is an example of a problem type that psychology researchers are using to identify the kinds of reasoning behind our intuitive conclusions, and what makes those conclusions accurate or erroneous.
Readers are encouraged to comment about the ways in which they’ve used probability calculations in making real-life decisions, and what they think is the best approach for doing so.
See you soon (probably) to discuss new insights. Happy puzzling!
Note: This article was updated on Feb. 8, 2018, to use “flight-mile” units that better correspond to “vehicle-mile” units in comparing the risks of flying versus driving.
Editor’s note: The reader who submits the most interesting, creative or insightful solution (as judged by the columnist) in the comments section will receive a Quanta Magazine T-shirt. And if you’d like to suggest a favorite puzzle for a future Insights column, submit it as a comment below, clearly marked “NEW PUZZLE SUGGESTION.” (It will not appear online, so solutions to the puzzle above should be submitted separately.)
Note that we may hold comments for the first day or two to allow for independent contributions by readers.