We are used to the idea that perception can be ambiguous — there are visual illusions such as the famous Necker cube that can be perceived in two completely different ways. We accept that both perceptions are equally valid and that it is fruitless to debate which one is “right.” Most of us imagine that such radically different views of the same object cannot occur in the realm of mathematics; after all, we are taught to think that every problem has a single correct answer. As we saw in “The Slippery Eel of Probability,” this is not always the case when the technique to be used in solving the problem is not given. For this month’s Insights puzzle, we consider a famous problem that has divided people across the board and generated endless debate: the Sleeping Beauty problem.

The famous fairy-tale princess Sleeping Beauty participates in an experiment that starts on Sunday. She is told that she will be put to sleep, and while she is asleep a fair coin will be tossed that will determine how the experiment will proceed. If the coin comes up heads, she will be awakened on Monday, interviewed, and put back to sleep, but she won’t remember this awakening. If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday, again without remembering either awakening. In either case, the experiment ends when she is awakened on Wednesday without being interviewed.

Whenever Sleeping Beauty is awakened and interviewed, she won’t know which day it is or whether she has been awakened before. During each awakening, she is asked: “What is your degree of certainty that the coin landed heads?” What should her answer be?

If you’re having trouble picturing the problem, Julia Galef explains it nicely in this video:

This problem has intuitively appealing solutions that are so entrenched that they have been given names: the **thirder** position and the **halfer** position. Before we review these, remember that in the Bayesian view of probability, the degree of subjective certainty is constantly updated by new knowledge. Thus, if we knew nothing about a coin toss except that the coin was fair, our subjective probability of it being heads is one-half. But if a hundred reliable witnesses tell us that it was heads, or we see a video of the event, our subjective probability can change from one-half to one.

Let’s see how the thirders and halfers apply this notion to the above problem.

Thirders argue that in the universe of possibilities, there are three possible situations in which Sleeping Beauty could have been awakened, which are indistinguishable to her. The coin could have come up heads and it is Monday, the coin could have come up tails and it is Monday, or the coin could have come up tails and it is Tuesday. Each of these is equally likely from her perspective, so the probability of each is one-third. So her subjective probability that the coin came up heads is one-third.

“Not so fast!” cry the halfers. Since the coin was fair, the chance that it came up heads is half. Sleeping Beauty has received no new information about the result of the coin toss when she is woken up. So her subjective probability that the coin came up heads should continue to be half.
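For readers who like to experiment, the two tallies can be compared with a quick Monte Carlo sketch (the code and function name are mine, not part of the puzzle): count how often the coin shows heads per experiment versus per awakening.

```javascript
// Monte Carlo sketch of the two tallies (helper name is mine).
// Halfers count heads per experiment; thirders count heads per awakening.
function sleepingBeautyTallies(trials) {
  let headsExperiments = 0, headsAwakenings = 0, totalAwakenings = 0;
  for (let i = 0; i < trials; i++) {
    const heads = Math.random() < 0.5;
    const wakes = heads ? 1 : 2;       // heads: Monday only; tails: Monday and Tuesday
    totalAwakenings += wakes;
    if (heads) {
      headsExperiments++;
      headsAwakenings += wakes;        // wakes === 1 here
    }
  }
  return {
    perExperiment: headsExperiments / trials,        // tends to 1/2
    perAwakening: headsAwakenings / totalAwakenings  // tends to 1/3
  };
}
```

Run with a few hundred thousand trials and the per-experiment rate hovers near 1/2 while the per-awakening rate hovers near 1/3. The simulation can't settle the dispute, of course: the two camps disagree precisely about which denominator answers the question.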

Think about both of these positions, and let us know what position the Necker cube of your mind lands on. Do add your thoughts to the animated debate on this question, which is already extensive, as a Google search will demonstrate. Below we present two variations on this classic problem. In these versions, you have to make a life-or-death decision based on what you believe. But first, here’s a curious coincidence — the type that makes people go, “What are the chances of that?” On January 11, the Insights team decided to feature the Sleeping Beauty problem this month. The next day we discovered that January 12 is the birthday of Charles Perrault, the original author of the Sleeping Beauty story. Perrault was born in Paris on January 12, 1628. Feel free to weigh in on how often such surprising and rare coincidences have happened to you; I have expressed my views on this phenomenon in a suite of puzzles elsewhere.

OK, now it’s time to put your life on the line. Here are two new variations of the intriguing Sleeping Beauty problem, in a brilliant scenario created by *Quanta* reader eJ, which I felt needed to be better known and discussed. In both of these variations, you are Sleeping Beauty, and you are awakened based on the result of the coin toss and made to forget this fact, as before. However, the experiment is being performed by evil alien scientists who have kidnapped you, and they ask you to make what turns out to be a life-or-death choice. Now your subjective probability is not just a hypothetical concept: It actually affects your chances of coming out of the situation alive. The correct choice will maximize your chance of survival. Will you stick to your original conclusion, or will you let your mental Necker cube flip?

Variation 1:

Upon each awakening, Sleeping Beauty is presented with two bags of beans, marked “H” and “T.” She is instructed to reach into one bag, grab a single bean, and put it aside. At the end of the experiment, she will have to eat the bean or beans that she has pulled. She is told that the bags are filled with identical looking jellybeans (J) or poisoned pills (K), as follows:

* If the coin came up heads, bag H has 7J, and bag T has 7K.

* If the coin came up tails, bag H has 1J and 6K, while bag T has 6J and 1K.

You are Sleeping Beauty. Which bag would you pick, and what are your chances of survival?

Variation 2:

You, as Sleeping Beauty, are told that you have to go through the original experiment (without the beans) every week for many months, and the memory of each waking will be wiped from your memory. The evil chief scientist has determined that on your hundredth awakening in this series of experiments, you will be presented with the two bags of beans and instructed exactly as in Variation 1 above. If you pick a poisoned pill, you will die; otherwise, you will go free. Now which bag do you pick, and what are your chances of surviving?

Kudos to eJ for making the Sleeping Beauty scenario concrete in such an interesting fashion. They say the thought of death, even in imagination, focuses the mind like nothing else. Does it work that way for you? Happy puzzling, and may the insight be with you!

*Editor’s note: The reader who submits the most interesting, creative or insightful solution (as judged by the columnist) in the comments section will receive a *Quanta Magazine* T-shirt. (Update: The solution is now available here.) And if you’d like to suggest a favorite puzzle for a future Insights column, submit it as a comment below, clearly marked “NEW PUZZLE SUGGESTION” (it will not appear online, so solutions to the puzzle above should be submitted separately).*

*Note that we will hold comments for the first day to allow for independent contributions.*

To me, the Sleeping Beauty Dilemma is very similar to the Monty Hall problem that is familiar to many people. In fact, I believe that a slight variation on Monty Hall would produce the same problem as Sleeping Beauty and give us insight into what (I feel) is the correct answer.

At the outset of the traditional MH problem, a player has a 1/3 chance of choosing the correct door. Once that door is chosen, another door (one of the remaining two) is opened, showing no prize. The player is then asked to stay with their initial choice or to move to the other, unopened door. While it may seem at first that the chance of having chosen the right door is unchanged, switching raises the probability of winning to 2/3 because of the additional knowledge you subtly gain from the reveal after the first choice. This is the core of Bayesian probability: any additional information can alter the calculation of probability.

In the case of Sleeping Beauty, though, there is no additional information that she has gained upon waking up, because of her strange, selective amnesia. All she knows is that she is awake at one of three possible moments. The chance of her identifying the correct one is therefore 1/3.

For the first variation I would select the H bag on every occasion. The probability of survival choosing H is 25/49, while the probability of survival choosing T is 18/49. The interesting thing here is that one is an 'and' question and the other is an 'or' question. That's the best way I can describe it, anyway.

Thanks! I'll tune in again!

Hello,

In sleeping beauty's subjective world there's no "Monday" or "Tuesday", just "awakened" or "not awakened". She has no crucial information about the day. If the coin is fair, the halfers are right.

I am on team halfer.

For variation 1:

If I always pick H, I have a 1/2 probability of dying and an expectation of 2/3 jelly beans.

If I always pick T, I have a 2/3 probability of dying and an expectation of 5/6 jelly beans.

What I would do depends on how much I like these particular jelly beans and how much I dislike dying. Let's suppose that I like jelly beans and dislike dying. If the jelly beans were sufficiently delicious, the expectation of an extra 1/6 jelly bean might make it worth the extra probability of death and I would always pick T. Otherwise I would always pick H. This variation does not change my opinion of the dilemma because it is primarily about the utility, not the probability, of particular outcomes.

For variation 2:

If I pick H, I have a 3/7 chance of dying and an expectation of 4/7 jelly beans.

If I pick T, I have a 4/7 chance of dying and an expectation of 3/7 jelly beans.

Again, what I pick depends on how much I like these particular jelly beans and how much I like death. If I like the jelly beans enough relative to an increased chance of dying, T would be the way to go. Otherwise H. If I were suicidal after all these months of captivity and liked the jelly beans, T would be the easy choice. If I were suicidal and disliked the jelly beans, my choice would depend on how suicidal I was and how much I disliked these jelly beans. Again this variation does not change my opinion of the dilemma because it is primarily about the utility, not the probability, of particular outcomes.

If she is being interviewed then it is either Monday or Tuesday. One might argue that the prior probability that the coin landed heads is .5 and the prior probability that it is Monday is .5, so the prior probability that the coin landed heads and it is Monday is .25. Likewise, the prior probability that the coin landed tails and it is Monday is .25, and the prior probability that the coin landed tails and it is Tuesday is .25. The prior probability that the coin landed heads and it is Tuesday is 0. So are the other three scenarios really equally probable at .25 each, given that the fourth scenario has been ruled out ab initio and the probabilities don't sum to 1?

Because heads/Tuesday was ruled out and the coin was only to be flipped once, sleeping beauty is faced with the following possibilities. As she is being interviewed, it must be Monday or Tuesday, and she must allow for both. If Monday, then the odds of heads are .5, the odds of tails are .5, and there will be no waking on Tuesday. If Tuesday, then the odds of heads are 0, the odds of tails are 1, and there was a waking on Monday which she does not remember.

Setting the odds of it being Monday at .5 and the odds of it being Tuesday at .5, the odds of the coin having come up heads are .5 times .5 plus .5 times 0 = .25, and the odds of it having come up tails are .5 times .5 plus .5 times 1 = .75. So both the thirders and the halfers are wrong. Thirders are wrong because ruling out the possibility of heads/Tuesday does not leave the other three combinations equally probable, and halfers are wrong because they leave out of account the implications of a Tuesday interview for a person who must allow for its possibility.

The halfers are wrong. The fact that sleeping beauty has been awakened and is being interviewed is itself "new information". It is information that makes it slightly more likely that she is on the "tails" timeline. So the answer is one-third.

Greetings puzzle enthusiasts! Your comments will be open for viewing later this evening.

Quanta reader eJ, who created the variation problems, clarifies that

the bean bags are refilled to their standard quotas of jelly beans and poison pills each time Sleeping Beauty is awakened. Specifically, on the Tuesday awakening, the bags contain exactly the same number of Js and Ks as they did on the Monday awakening, regardless of which bean was pulled out on Monday.

I'm going to argue a solidly thirder position here.

The crux of the halfer position is that sleeping beauty gained no new knowledge when she woke up. But this is not true: In waking, she gained the knowledge that she is awake. She is twice as likely to be awake if the coin came up tails, and so she should bank on that position.

If you are still not convinced, take a simpler case: On heads, sleeping beauty is never woken before the end of the experiment; on tails, she is woken once. Now consider what she should guess if awakened. Of course, she should say tails, as that is the only possible outcome if she is awake. Fair coin toss or not, the odds on this are 100%. The exact same principles are at play in the original scenario; this is just slightly obfuscated by the chance that it could be either outcome.

With this in mind, we can calculate our odds of survival in the first variation. If we presume that the sleeping beauty has taken the thirder position, then she should always pick bag… H. Yes, really. This is not because the thirder position is wrong. It is merely because she is more likely to die if the coin was tails rather than heads.

Let's start by calculating the likelihood of dying if she selects bag T. If she does this and the coin was heads, she has a 100% chance of drawing K and dying. If she does this and the coin was tails, then she has to pick from the bag on two separate occasions. For the first draw, she has a 1 in 7 chance of drawing a K. If we assume that the aliens refill the bags after the first drawing, then she also has a 1 in 7 chance of selecting a poison pill on the second round. Based on this, there is a 1 in 7 chance she dies from the pill from the first round and a 6 in 7 chance of surviving it, after which she would eat the next bean and again have a 1 in 7 chance of dying. Thus, the total probability of dying if the coin is tails is 1/7 + 6/7 * 1/7 = 13/49, or about 26.5%. Now we weight our two outcomes (100% chance of dying on heads and 26.5% chance on tails) by the coin toss, which, even in our thirder position, is still a fair 50%: 1/2 * 1 + 1/2 * 13/49, giving us a 63.3% chance of death. Compare this to selecting bag H each time, which would result in a 0% chance of death if the coin came up heads and a 98% chance of dying on tails, giving a cumulative probability of death of 49%, notably less than 63.3%.
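The arithmetic above can be checked exactly with a short helper (the function name is mine; it assumes, per eJ's clarification, that the bags are refilled between the Monday and Tuesday draws):

```javascript
// Death probability for a fixed bag choice, fair coin, bags refilled between draws.
function deathProb(pickH) {
  // per-draw chance of a poison pill (K), by coin outcome
  const kIfHeads = pickH ? 0 : 1;           // heads: bag H is 7J, bag T is 7K
  const kIfTails = pickH ? 6 / 7 : 1 / 7;   // tails: bag H is 1J+6K, bag T is 6J+1K
  const dieHeads = kIfHeads;                // one draw on heads
  const dieTails = 1 - (1 - kIfTails) ** 2; // two independent draws on tails
  return 0.5 * dieHeads + 0.5 * dieTails;
}
// deathProb(false) = 31/49 ≈ 63.3%  (always pick T)
// deathProb(true)  = 24/49 ≈ 49.0%  (always pick H)
```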

To see what went wrong, we can simplify the problem. Give sleeping beauty a bare guess: if it is right, she lives; if it is wrong, she dies. Doing the math quickly, you can see she has a 50% chance of dying regardless of her choice, so long as she is consistent. But let's suppose that, instead of making a choice and sticking with it, sleeping beauty flips her own coin each time she awakes and says the result. In this scenario, she would have a 62.5% chance of death, because she could be right the first round but wrong the second. This is where the flaw in this model comes to light: you can only die once. If, instead of death, there were some sort of repeatable punishment (say, losing an arm, or getting a painful electric shock), then the penalty of selecting heads and being wrong is doubled. In this scenario, someone guessing tails each time would lose an average of half an arm, somebody guessing heads each time would lose an average of 1 arm, and someone guessing randomly would lose 3/4 of an arm. This is why, in general, the thirder model is the correct one. A non-repeatable punishment essentially makes the second day not count, so long as you are consistent, leading to the 50% odds.

Now, onto the final variation in the problem. At first glance, one may be tempted to say that there are 50/50 odds of the coin being heads or tails on the hundredth awakening, but this is not the case. Look at the possible scenarios leading to the hundredth awakening: there could be 99 awakenings before the most recent toss, which came up heads; 99 awakenings before a toss that came up tails; or 98 awakenings before a toss that came up tails. Based on this, it is actually twice as likely that the coin was tails than heads. If we go back and re-weight our probabilities from the original beans scenario to reflect this, we get a 51.0% chance of dying if you select bag T and a 65.3% chance if you select bag H, making bag T the proper choice.

I am solidly a halfer.

The variations on the sleeping beauty dilemma do not sway me because they muddy probability (of the state of the coin) with utility (how much I desire each state).

In variation 1, if I pick H every time, I have a 24/49 chance of dying and an expectation of 63/98 jelly beans. If I pick T every time, I have a 31/49 chance of dying and an expectation of 84/98 jellybeans. What I would choose depends on how much I like or dislike dying and how much I like or dislike jellybeans. Even if I dislike dying, if the jellybeans are delicious enough that an expectation of 21/98 additional jellybeans outweighs the 1/7 additional chance of dying, I would pick T every time. Otherwise I would pick H. My utility over outcomes does not change my subjective estimate of the state of the coin. This estimate should not change because of the state of my preferences.

Similarly, in variation 2, if I pick H my probability of dying is 3/7 and I have an expectation of 4/7 jellybeans. If I pick T, my probability of dying is 4/7 and I have an expectation of 3/7 jellybeans. If I like jellybeans and not dying, I should pick H. If I dislike jellybeans and like dying, I should pick T. If I like jellybeans and dying, or dislike jellybeans and dying, what I should pick depends on how much I like or dislike them.

In any case, my utility over outcomes should not change my probability estimate.

The problem with the Sleeping Beauty Dilemma in general is that Beauty's utility function is not defined. Does she want to be right as many times as possible or does she want the highest percentage of right answers? Another way to look at it is whether she is rewarded for a right answer or punished for a wrong one. If she is rewarded for right answers, then she's better off choosing tails. For instance, if she is given $100 for each right answer given, then she stands to gain more money if she chooses tails (50/50 getting $200 vs 50/50 getting $100). On the other hand, if she is put to death for any wrong answer given, then she can do no better than 50/50.
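The $100 wager sketched above can be written out explicitly (a minimal sketch; the function name is mine):

```javascript
// Expected dollars per run of the experiment at $100 per correct answer.
// The fair coin gives each branch weight 1/2.
function expectedWinnings(guessTails) {
  const payoffIfHeads = guessTails ? 0 : 100; // heads: one interview, one answer
  const payoffIfTails = guessTails ? 200 : 0; // tails: two interviews, two answers
  return 0.5 * payoffIfHeads + 0.5 * payoffIfTails;
}
```

Always guessing tails has an expected payout of $100 per experiment versus $50 for heads, while under a one-shot death penalty neither consistent guess beats 50/50 — which is exactly the point about Beauty's undefined utility.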

Thus, as far as I see it, the reason it's a dilemma is just that we don't know Beauty's motivation for answering. Of course, the variations give us that motivation…

To make the problem a little more general, I assume that Beauty is given access to a random number generator (e.g. she gets a fair coin of her own that she's allowed to flip as many times as she wants when making her decision). Thus, with probability H, she will choose heads (and with probability 1-H, she will choose tails). Note that setting H to 1 or 0 is akin to always choosing heads or tails respectively.

VARIATION 1

———–

So, what happens if it's a Tuesday and Beauty has already taken a bean out of a bag? Won't she see that one is less full? The experimenter can set it up so she can't, so the question is really whether the probabilities can change on Tuesday. I'll assume they can't, because that seems more reasonable to me. Let's now examine Beauty's probability of success:

If the experimenter flips heads, then Beauty will survive with probability H.

If the experimenter flips tails, then Beauty will make two (independent) queries of her random number generator. She only survives if she selects a J bean both times. Each time, her chance of selecting a J bean is H*1/7 + (1-H)*6/7. Because these are independent, we can get her total chance of survival by squaring this value:

(H*1/7 + (1-H)*6/7)^2

= (H/7 + 6/7 – 6H/7)^2

= (6/7 – 5H/7)^2

= 36/49 – 60H/49 + 25H^2/49

Because the experimenter's coin is fair, we can multiply both of these values by 0.5 and add them to get the probability that Beauty survives the experiment:

(36/49 – 60H/49 + 25H^2/49)/2 + H/2

= 25H^2/98 – 11H/98 + 36/98

We want to find the value of H (between 0 and 1) that maximizes this sum, so in order to find critical points, we take the derivative and set it to 0:

2 * (25H/98) – 11/98 = 0

50H – 11 = 0

H = 11/50

Substituting in, we get a value of 0.355 (a minimum of the survival probability).

We can also look at our edge points. Setting H to 1 yields a probability of success of 50/98, while setting H to 0 yields a probability of success of 36/98.

Clearly, Beauty should always choose heads.
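The survival polynomial derived above is easy to probe numerically (a sketch under the same refilled-bags assumption; the helper name is mine):

```javascript
// Survival probability for a mixed strategy: pick bag H with probability h
// on each awakening, per the derivation above.
const survival = h => (25 * h * h - 11 * h + 36) / 98;

survival(1);        // 50/98: always pick from bag H
survival(0);        // 36/98: always pick from bag T
survival(11 / 50);  // ≈ 0.355: the interior critical point
```

The quadratic opens upward with its critical point at H = 11/50, so the best strategy sits at an endpoint, and H = 1 wins.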

VARIATION 2

———–

This is slightly different from variation 1 for two reasons. First, Beauty will only have to eat one bean, because we are only concerned with one interview. Second, because this is the hundredth interview, we have to figure out whether we are in a singleton heads day or one of either of the two tails days.

The first of these differences is easy to deal with.

If the experimenter has most recently flipped heads, Beauty lives with probability H.

If the experimenter has most recently flipped tails, Beauty lives with probability (6/7 – 5H/7).

But, what is the probability that the experimenter has most recently flipped heads? For the 100th interview exactly, I don't know. However, for a random interview, it is twice as likely that the experimenter has most recently flipped tails than heads. If we take this as a good approximation for the 100th one, then the chance of success is:

H/3 + (6/7 – 5H/7) * 2/3

= (7H + 12 – 10H)/21

= (4 – H)/7

Clearly, the highest chance of success is to set H=0, or in other words, always pick from the T bag.
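As a numeric sanity check of the linear formula just derived (sketch mine):

```javascript
// Variation 2 survival as a function of H (probability of picking bag H),
// using the comment's approximation Pr(most recent flip was tails) = 2/3.
const survival2 = h => (4 - h) / 7;
```

Since the expression is strictly decreasing in H, setting H = 0 (always bag T) maximizes survival at 4/7.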

A naive sample space for this experiment might be written

W = {Hm, Tm, Tt},

where H = heads, m = monday, t = tuesday.

However, the three sample points are neither equiprobable nor mutually exclusive: Tm automatically entails Tt, since we cannot have Tt without Tm. This sample space therefore violates the basic requirements on the elements of a sample space and is wrong. The true sample space is just {T, H}. Therefore P(H) = 1/2.

I think there are two reasons this problem is confusing.

A) We are confusing the question "What is the probability that a head appeared?" with the question "What is the probability that today is Monday?" Knowing nothing else except that she was woken up, she should answer P(H) = 1/2. This is similar to the old problem of the twins. We are told that a colleague has a twin and that one of the pair is a girl; what, then, is the probability that our colleague has twin girls? The answer is 2/3. However, if one of the twins shows up, we see that she is a girl, and we are asked "What is the probability that the other twin is also a girl?", then the answer is 1/2, since the other child could equally likely be a girl or a boy. It all depends on what the question is.

In the same way, if Sleeping Beauty is asked, "What is the probability that today is a Monday" then she should answer 2/3.

B) We are confused because we conflate what we know with what Sleeping Beauty knows. This is similar to me putting a million dollars in my pocket and asking Sleeping Beauty what is the probability that I have a million dollars in my pocket. Because Sleeping Beauty knows nothing about my ruse she should rightfully answer using what she knows about the probability of typical experimenters having a million dollars in their pockets. In this case she should answer "very small."

The fact that Sleeping Beauty is wrong does not contradict what we know to be the correct probability (i.e., P = 1.0), since probability is about those things of which we have incomplete knowledge. If we know something is factually true or false, then there is no need for us to answer in probabilistic terms.

I wrote the same essential argument twice because I initially thought the beans were not replaced after the first draw.

In the original problem, Beauty is asked: "what is your degree of certainty [credence] that the coin landed Heads?". The notion of credence — which surely is some kind of subjective probability — can be very hard to pin down.

Consider variation 1. If your credence for Heads is 1/3 then you believe with 2/3 certainty that the bags are filled the Tails way, making bag T (6J+1K if Tails, 1J+6K if Heads) the more attractive choice. So why is it that some Thirders favour bag H? Is "credence" not such a clear-cut concept after all?

eJ has thrown down the gauntlet to thirders to explain why they would choose the H bag in variation 1 if their subjective probability of heads is 1/3.

Similarly, I would like halfers to address JohnBal's excellent counter-example: "Take a simpler case. On heads, Sleeping Beauty is never woken before the end of the experiment, on tails, she is woken once. Now consider what she should guess if awakened. Of course, she should say tails, as that is the only possible outcome if she is awake. Fair coin toss or not, the odds on this are 100%."

If halfers can update their subjective probability in the above case (as they must), why don't they want to do it in the original Sleeping Beauty problem?

Let's interpolate between the original experiment and JohnBal's. SB(q), for 0<=q<=1, has a flat-random number 0<=R<1 drawn on Monday; the wakeup happens on Monday if and only if R<q. So SB(1) is the original problem and SB(0) is JohnBal's variant.

The sample space is: HM=(Heads,R<q), HX=(Heads,R>=q), TM=(Tails,R<q), TX=(Tails,R>=q), with respective probabilities q/2, (1-q)/2, q/2, (1-q)/2. Upon being interviewed, HX is ruled out (it's the one case with no interview), leaving Pr(Heads) = Pr(HM) / (Pr(HM)+Pr(TM)+Pr(TX)) = q / (1+q). In the original problem, q=1 and so Pr(Heads)=1/2. In JohnBal's variant, q=0 and so Pr(Heads)=0, as required.

JohnBal's variant is quite similar to Roger White's in his paper, "The generalized Sleeping Beauty problem: a challenge for thirders" (http://web.mit.edu/rog/www/papers/sleeping_beauty.pdf). Funny how there it was invoked on behalf of, rather than against, the Halfer camp!
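The q/(1+q) formula can be sanity-checked by simulation (a sketch I'm adding; it conditions per experiment, i.e., on there being at least one interview, which is the reading used in the derivation above):

```javascript
// Monte Carlo check that Pr(Heads | at least one interview) = q / (1 + q)
// in the SB(q) family. Helper name is mine.
function simulateSBq(q, trials) {
  let interviewed = 0, interviewedHeads = 0;
  for (let i = 0; i < trials; i++) {
    const heads = Math.random() < 0.5;     // fair coin
    const mondayWake = Math.random() < q;  // R < q: the Monday wakeup happens
    // tails always brings a Tuesday interview; heads only the possible Monday one
    if (mondayWake || !heads) {
      interviewed++;
      if (heads) interviewedHeads++;
    }
  }
  return interviewedHeads / interviewed;   // tends to q / (1 + q)
}
```

At q = 1 the estimate settles near 1/2 (the original problem), and at q = 0 it is exactly 0 (JohnBal's variant), matching the formula at both ends.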

I am with the halfers. Clearly, if Sleeping Beauty could remember being interviewed before, she would know it must be Tuesday and that the coin came up tails. However, her amnesia prevents her from ever making this deduction.

On each day she awakens it could be either Monday or Tuesday. The fact that if the coin had landed tails she will be asked the same question twice does not affect the probability to her of it actually being heads or tails. Therefore, with no knowledge of which day she is being asked on, the probability remains a half.

Let's extend the experiment to a whole year. If heads, she is interviewed on January 1st, then remains asleep for the rest of the year. If tails, she is interviewed every day of the year. She is then awakened on January 1st of the next year, without being interviewed. Should the fact that she may be asked this question 365 times, or just once affect anything about her answer?

I guess the thirders would have to become three hundred and sixty sixers. It's either January 1st, with heads, January 1st with tails, or any of the other days in the year with tails. So her subjective probability that the coin came up heads is now 1/366. By lengthening the time of the experiment even more we could make the subjective probability of heads as close to zero as we like.

Apologies, this is a corrected version of my comment:

VARIANT 2

Let p be the probability that the 100th waking will be after flipping heads. A reasonable approximation is to assume p = 1/3 but we will investigate more accurate estimations.

Further let's assume that Sleeping Beauty can answer non-deterministically where x is the probability of choosing heads. Her probability of surviving is therefore:

p * (chance of surviving a heads final toss) + (1-p) * (chance of surviving a tails final toss)

= px + (1-p)(x/7 +(1-x)6/7)

= (12p – 5)x/7 +(1-p)6/7

Unlike variation 1, this is linear in x and the gradient is (12p-5)/7, so we should always choose tails (set x=0) if the gradient is negative (p < 5/12 = 0.41666…) and always choose heads (set x=1) if the gradient is positive (p > 0.41666…).

So how can we estimate p more precisely? One simple method is a Monte Carlo simulation of the game (see code below). By running millions of sample experiments, each stopping on the 100th waking, we see that p is approximately 0.333646. This is slightly higher than our naive estimate of 1/3, but since it does not approach the threshold 0.41666… it does not modify the optimal strategy of always choosing tails.

Is it possible to exceed the threshold by stopping the experiment after fewer than 100 wakings? Yes, but unfortunately this only happens if we stop after just 1 waking, in which case p = 0.5.

Finally, how could we modify the game to make these kinds of considerations necessary?

MY VARIANT 3

Let y be the proportion of jellybeans in bag H and poisoned pills in bag T (in the case of tails being thrown). In the original version, y = 1/7. Following a similar analysis, we find that the critical gradient is now p + (1 – p)(2y – 1). For this to be close to zero when p is close to 1/3, we require y to be close to 1/4. So, if our new game has 4 beans in each bag instead of 7, and in the case of tails there's still 1 jellybean in the H bag and 1 poison pill in the T bag, then our strategy is more interesting, since now the critical threshold is p<1/3 for tails and p>1/3 for heads. As before, if we stop on the 100th waking, p is unchanged at 0.333646. However, this now exceeds the threshold, so we should choose heads. Furthermore, if we stop on the 99th waking, our Monte Carlo simulation shows that p is approximately 0.33331, so we must choose tails!

Here's some JavaScript code to run the simulation for a given stop-wake number:

```javascript
// Returns true if the stop_wake-th waking follows a heads flip.
var experiment = function(stop_wake) {
  var wake = 0;
  while (1) {
    if (Math.random() < 0.5) {     // heads: one waking
      if (++wake === stop_wake)
        return true;
    } else {                       // tails: two wakings
      if (++wake === stop_wake)
        return false;
      if (++wake === stop_wake)
        return false;
    }
  }
};

var stops = 0;
var stop_heads = 0;
while (1) {                        // runs until interrupted
  if (experiment(100))
    ++stop_heads;
  ++stops;
  if (!(stops % 1000000))
    console.log(stop_heads / stops); // running estimate of p
}
```

The question feels kind of like a faith question to me.

(probably related to the fact that I'm not a math or number wizard.)

Do you (as sleeping beauty) go with the statistical probability data to predict your current state, or do you go with the practical fact that you don't know what the random coin toss did?

>On heads, Sleeping Beauty is never woken …

That seems to be breaking the rule that being awakened is intended to NOT pass on any state (heads or tails) information.

I'm also firmly a halfer, and I think Variation 1 expresses quite well why this is the right solution. Let's do what von Neumann did to solve a similar problem (the Final Problem of Sherlock Holmes), and make our choice random. Let's assign a probability p to us choosing a bean from the H bag, and probability (1-p) to the T bag.

The chance that a head is thrown is 1/2, and in that case we have a probability of (1-p) of choosing poison. The chance that a tail is thrown is also 1/2, and in that case we could choose poison on our first awakening with probability q:= (6/7)p + (1/7)(1-p), or choose non-poison and then poison on the second awakening with probability (1-q)q. So the chance we die is (1/2)(1-p) + (1/2)(q+(1-q)q). This is minimized for 0<= p <= 1 when p=1. So we want to choose the heads bag always. [This is assuming, as claimed above, that the bags are identical on both awakenings if tails is thrown. Things change slightly if the bags are not refilled.]

Regarding the second variant, I believe that JohnBal's counter-example fails immediately. He claims that there is no difference between a "Heads=1 awakening vs. Tails=2 awakenings" scenario vs. "Heads=0 awakenings vs. Tails=1 awakening". But there is an *a priori* difference in Sleeping Beauty's knowledge. In the second scenario, she knows beforehand that if she is awakened then Tails happened. That is NOT the case in the first scenario.

The first variant is pretty clearly just a halfer/thirder question. Halfers say that picking heads is the better choice, giving a 25/49 chance of survival, while Thirders would argue that picking tails is safer with a 4/7 survival rate. As stated at the beginning of the puzzle, there isn't a clear answer. I personally see a little more sense in the Halfer approach because the coin flip is a single random event, regardless of what the sleeper knows.

The second variant, however, vindicates the thirders because there actually are 3 possible scenarios: the conventional ones where the most recent flip was after the 99th awakening (heads or tails), or one where the most recent flip was after the 98th awakening and was tails. However, this alone doesn't give us an answer about the best solution to pick, since we don't know the probability of each scenario. Since the coin is fair, we can figure this out by counting the number of ways each scenario could occur.

We only flip after an awakening if we've finished the procedure for the previous flip (1 awakening for heads, 2 awakenings for tails), so counting the number of ways this can happen after n awakenings is the same as counting the number of ways to travel a distance of n units in steps of 1 and 2. This, however, is the (n+1)st term in the well-known Fibonacci sequence.

If F(k) denotes the kth Fibonacci number, there are F(99+1) scenarios in which a flip occurs after the 99th awakening, and F(98+1) scenarios in which a flip occurs after the 98th awakening. Thus, in total there are 2*F(100)+F(99) = 927372692193078999176 possible scenarios. Thus after the 100th awakening, Sleeping Beauty's odds are as follows:

P(heads is right answer) = F(99)/(2*F(100)+F(99)),

P(Tails is right answer) = (F(100)+F(99))/(2*F(100)+F(99)).

Then, we only need to include the probability of picking a non-poison bean to find the right answer:

P(Survival if answering 'heads') = (7/7)F(99)/(2*F(100)+F(99)) + (1/7)(F(100)+F(99))/(2*F(100)+F(99)),

P(Survival if answering 'tails') = (0/7)F(99)/(2*F(100)+F(99)) + (6/7)(F(100)+F(99))/(2*F(100)+F(99)).

These probabilities round off to 0.324359 and 0.529743, respectively, and so the best bet is answering tails, though the odds are barely above 50/50. Hats off to the evil chief scientist.

P.S. I forgot to give a solution to the second problem. Let r denote the probability that Sleeping Beauty was awakened the 100th time after a heads flip. This means that the first 99 awakenings were created by i tail flips and 99-2i head flips (as i ranges from 0 to 49), for 99-i flips in all, followed by the final heads flip. Thus r = sum_{i=0}^{49} (1/2)^{100-i} Binomial(99-i, i). This number is extremely close to 1/3 (as one might expect).

Proceeding as I did above, by choosing bag H with probability p and bag T with probability 1-p, we see that the chance of death is r(1-p)+(1-r)((6/7)p+(1/7)(1-p)), and minimized when p=0. So one should always choose the T bag.
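That value of r can be pinned down exactly. In the sketch below (my own bookkeeping, not from any comment), the binomial sum – i tail flips and 99-2i head flips fill the first 99 wakings, giving 99-i flips in all, then a final heads flip – is cross-checked against the equivalent "heads advances 1 waking, tails advances 2" recurrence:

```python
from fractions import Fraction
from math import comb

# Direct sum: 99 - i flips, each with probability 1/2, arranged in
# comb(99 - i, i) ways, followed by the final heads flip (another 1/2).
r_sum = sum(Fraction(comb(99 - i, i), 2 ** (100 - i)) for i in range(50))

# Cross-check: a(n) = probability the completed flips account for exactly n
# wakings, with a(n) = a(n-1)/2 + a(n-2)/2; then r = a(99) / 2.
a_prev, a = Fraction(1), Fraction(1, 2)   # a(0), a(1)
for _ in range(98):
    a_prev, a = a, (a + a_prev) / 2       # advance to a(99)
r_rec = a / 2

print(r_sum == r_rec, float(r_sum))  # True 0.3333333333333333
```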

Instead of asking Sleeping Beauty what her certainty is of heads or tails, we can ask her to guess which one occurred. Furthermore, we can ask her to write down a commitment to her choice before she is even put to sleep. After all, she is not given any new information when she is awakened.

Imagine you are Sleeping Beauty, and you are about to go to sleep. If you write down heads, and tails is flipped, you will be wrong twice: on Monday and Tuesday. If you write down tails, and heads is flipped, you will be wrong only once: on Monday. Knowing this, you could write down tails and minimize the likelihood of being wrong *in your interview answer(s).* There would still be a 50-50 chance you were wrong on paper, but there would be a 2-1 chance you will be correct in your interviews.

I have thus provided a trivial restatement of the thirder position. Linguistically, it removes the distraction of a temporal shift, upon which a red herring is introduced into the halfer position. The halfer position equivocates the decision on paper with the reporting of the decision in interviews. While the decision on paper is 50-50, the reporting in interviews is not.

I think the confusion arises because the phrase “degree of certainty” doesn’t have a clear meaning. We need to more clearly define the problem. To do this, let’s consider two interpretations associated with the two responses:

HALFER INTERPRETATION: SB is drugged by psychopathic statisticians, and, after sleeping some unknown number of days, she is roused from sleep. Her captors say: "A fair coin was tossed while you were sleeping. If it landed heads, today is definitely Monday. If it landed tails, a second coin was tossed. If that landed heads, today is definitely Monday. If it landed tails, today is definitely Tuesday." They then ask her, in her drug-addled state, "What is the probability that the first coin was heads?" SB gets no information from the fact that she has been woken up, so it doesn't matter if she is asked twice if the first coin landed tails but only once if the first coin landed heads. She must treat the question as simply one about a fair coin. In this case, the answer is clearly one-half.

THIRDER INTERPRETATION: SB is kidnapped by psychopathic statisticians. For each of 100 weeks, they perform the following insane procedure. They flip a coin. If it comes up heads, they wake her on Monday and ask her how the coin landed. If it lands tails, they wake her Monday and Tuesday and ask her how the coin landed. SB never remembers past wakenings. At the end of the whole affair they count up how often she was correct. For an expected 50 weeks, the coin will have been heads, and also 50 tails. But since tails means two askings, there will have been 100 times when tails was correct and only 50 times when heads was correct. In other words, heads is correct one-third of the time.

The trouble with the Halfer Interpretation is that it relies on the assumption that the question is asked once. However, in a given week she might be asked twice, and that fact contains information. SB is uncertain about what day it is. Given the fact that SB is being asked, there is a one-in-three possibility that it is Tuesday, and she will only be asked on Tuesday if the coin landed Tails. So, while if it is Monday, Heads and Tails are equally likely, SOMETIMES IT WILL BE TUESDAY, and on Tuesday, Tails is always the right answer. So an answer must account for the likelihood that it is Tuesday.

To do it all math-like:

Pr(heads now) = Pr(Monday)*Pr(heads | Monday) + Pr(Tuesday)*Pr(heads | Tuesday) = (2/3)*(1/2) + (1/3)*0 = 1/3
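The per-awakening frequency behind that calculation is easy to simulate (a minimal sketch; the trial count and seed are arbitrary choices of mine):

```python
import random

random.seed(1)
heads_wakings = total_wakings = 0
for _ in range(200_000):                # one coin flip per week
    if random.random() < 0.5:           # heads: SB is woken once
        heads_wakings += 1
        total_wakings += 1
    else:                               # tails: SB is woken twice
        total_wakings += 2

print(heads_wakings / total_wakings)    # roughly 1/3
```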

The halfer and thirder positions are answering slightly different questions: what is SB's belief that the coin-toss event came up heads? vs. what is SB's belief that the awakening event happens in a situation where the coin came up heads?

Before, during and after the experiment SB – like every first-year student – knows there is a 50% chance a coin TOSS came up heads. However, before, during and after the experiment SB knows she will be awakened twice as often under the tails timeline, so there is only a 1/3 chance that a given AWAKENING follows a heads toss.

Like most verbal paradoxes, it relies on underspecifying the precise question!

@Pradeep Mutalik, JohnBal

The correct answer to the Sleeping Beauty problem is 1/2.

The probability of the coin showing heads or tails does not change with the number of awakenings of SB. Saying that it does introduces sampling bias.

In your "counter-example" where SB is not awakened at all if the coin shows heads the halfers do not have to update their probability because they knew already in advance that "waking up = coin shows tails".

Author's update@Dennis, eJ (and other halfers)

Re. JohnBal's problem: Both Dennis and eJ have shown how you get from a subjective certainty (credence) of one-half for tails to being absolutely certain of tails on being woken. Of course you are right – it is trivial to do so – but that was not my question. My point was that you have updated your credence: you have gone from being halfers (for tails) to correctly being one-hundred percenters. On Monday, in JohnBal's problem, you are not halfers any more – your credence for tails has been updated from 1/2 to 1. So my question is, why won't you update your credence in the original SB problem? After all, thirders were also halfers on Sunday. They updated their subjective certainty when they were woken up, based on the new knowledge "I am now awake," and the fact that the odds of this happening are twice as likely if the coin had come up tails. From halfers, they have now become thirders. Their new position gives the objectively correct answer: they will be right 2/3rds of the time if they say "tails" and only 1/3rd of the time if they say "heads."

Why are you okay with updating in John Bal's problem but not in the SB problem?

@All thirders (including myself!)

The beauty of eJ's two variations is that the utility function is perfectly adjusted to exactly mirror your subjective position. In both cases, if you are a halfer, you should pick from the H bag. If you are a thirder, you should reach for the T bag.

However, there is an objectively correct answer that maximizes your chance of survival in both cases. Without giving the details away, the correct answer is H for variation 1, and T for variation 2.

So the question for thirders is: What makes you become halfers in variation 1?

(Of course, halfers need to explain why they become thirders in variation 2)

@John McLaughlin, Lucas and Pace

Your calculations for variation 2 show that the probability of heads is very very close to a third, but not exactly. It seems to me, though, that all your answers are slightly different. Anyone want to confirm what the exact probability is?

@Pace & @Pradeep re: Estimating the probability of a final heads flip in variation 2

Yes it seems that the Monte Carlo method was rather inaccurate. It may just be too slow or due to some quirk of the random number generator in javascript.

I turned my attention to analytic calculation using Python's sympy package and came up with a similar method to Pace's. Here's my code:

```python
from sympy import Sum, Subs, binomial, symbols

n, k, N = symbols('n k N')

# P is the probability of k heads in a trial of n flips
# (see https://en.wikipedia.org/wiki/Bernoulli_trial)
P = (0.5**n) * binomial(n, k)

# Q is the probability of ending the experiment after a heads flip on the
# Nth waking, for even N. (The logic is the same as in Pace's post above:
# we sum over the range of possible numbers of flips with no preference
# for the order of flips, then require the final flip to be heads, hence
# the 0.5 factor.)
Q = 0.5 * Sum(Subs(P, k, 1 + 2*(n - N/2)).doit(), (n, N/2, N - 1))

Subs(Q, N, 100).doit()  # prints 0.333333333333335
```

Plotting Q for smaller wake numbers N shows a very rapidly converging line from below. I.e., the probability of stopping on heads never exceeds 1/3, so it seems the Monte Carlo approach was misleading.

Additionally:

```python
Subs(Q - 1/3, N, 100).doit()  # prints 2.05391259555654e-15
Subs(Q - 1/3, N, 20).doit()   # prints -3.17891438450513e-7
Subs(Q - 1/3, N, 10).doit()   # prints -0.000325520833333093
```

One last thing! If I replace the 0.5 instances with (1/2) in the above sympy code, it gives us the exact analytical probability for stopping on heads after 100 wakings:

cosh(98*asinh(sqrt(2)/4))/2251799813685248 + 5*sinh(98*asinh(sqrt(2)/4))/6755399441055744

which is:

422550200076076467165567735125 / 1267650600228229401496703205376

sympy can evaluate it to arbitrary precision:

0.3333333333333333333333333333330703796983

which is 1/3 – 2.629536351e-31

Thanks, John.

Wolfram Alpha with sum_{i=0,49} 0.5*(0.5^(99-i))*binom(99-i,i) confirms the discrepancy in the 31st decimal place and gives another 100 decimal places

I guess that's fairly close to a third…

Regarding credence updating in JohnBal's variant and the original SB problem, there are two responses: the mathematical and the philosophical.

Mathematically, see my comment at #comment-364304. For any q in the whole family of SB-like problems interpolating between the original and the variant, credence is updated: it becomes q/(1+q) upon being interviewed. In the original problem, q=1, so this updated value just happens to coincide with the original.

Philosophically, what makes the original problem so special that the update has no effect? You already know the answer: it's the "no new information" argument. My personal, by no means widely held, view is that "credence" in experiments like this has two components — probability and elapsed time — and that elapsed time *cannot* be a part of the sample space to which probability is assigned. (See also Nick Bostrom at http://www.anthropic-principle.com/preprints/beauty/synthesis.pdf .) One should reason about the sample space (in our case, the H/T flip and wake-or-not decision) by the conventional rules of probability, then mop up temporal uncertainty by the principle of indifference.

Very close to a third.

I expanded my method to odd numbers of wakings and noticed that they all give a greater than 1/3 chance of finishing with heads. Specifically, the chance when stopping on [1, 2, 3, …] wakings is [1/2, 1/4, 3/8, 5/16, 11/32, 21/64, 43/128, 85/256, 171/512, 341/1024, …], which are alternately greater than and less than 1/3.
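That sequence falls out of a simple two-term recurrence, assuming heads accounts for one waking and tails for two (my own derivation, a sketch): if a(n) is the chance that completed flips fill exactly n wakings, then a(n) = a(n-1)/2 + a(n-2)/2, and stopping on heads at waking n has probability a(n-1)/2.

```python
from fractions import Fraction

a_prev, a = Fraction(1), Fraction(1, 2)   # a(0), a(1)
seq = [a_prev / 2]                        # chance of stopping on heads at waking 1
for _ in range(9):
    seq.append(a / 2)                     # ... at wakings 2 through 10
    a_prev, a = a, (a + a_prev) / 2

print([str(x) for x in seq])
# ['1/2', '1/4', '3/8', '5/16', '11/32', '21/64', '43/128', '85/256', '171/512', '341/1024']
```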

This means according to my variant 3 (with 4 beans in each bag) you should pick tails if there were an even number of wakings and heads otherwise.

Just to be clear, here is my variant 3:

Each bag has 4 beans. After a heads toss, bag H has 4J and T has 4K.

After a tails toss, bag H has 1J and 3K whereas bag T has 3J and 1K.

The aliens run the experiment for as long as they like, but on the final waking they tell Sleeping Beauty how many times she has been woken and ask her to draw from one of the bags. Which bag should she choose to maximise her chances of survival?

Answer: T if the number of wakings was even, otherwise H.

@Pradeep: "So my question is, why won't you update your credence in the original SB problem?"

Because, unlike the modified problem (where we can derive additional information from being woken up), there is no new information being imparted by being woken up. In the original problem, we expect to be woken up in either scenario, so there is nothing to update.

In the modified problem, we don't expect to be woken up in both scenarios. So, the act of being woken up does supply new information.

To put it another way, if you think I should update the probability that a heads or a tails has been thrown, tell me what new information has been imparted by being woken up (in the original problem) that allows me to do so.

The halfers are wrong. Most commonly, their argument seems to rest on the fact that the coin flip is a single event, which, with a fair coin, produces "heads" with a probability of 1/2, and thus our (awakened) Beauty should say heads. But sitting in front of them is a Beauty who has been awakened – thus, there is additional information to consider: simple Bayesian analysis asks us to determine the probability of A given B. In her awakened state, she knows that one of three descriptions correctly applies – that it is a Monday, the toss having produced "heads"; that it is a Monday, the toss having produced "tails"; or that it is a Tuesday, the toss having produced "tails." She has no information about whether she was awakened before, but she does know that she will be awakened in one of three possible states. Since, in two of those states, she is awakened with the coin having displayed "tails," if she wishes to be correct in her guess, she should say, "One-third." This is reflective of the fact that she is being asked the question about her certainty of the outcome of the original event. She will be asked the question more frequently if the coin comes up "tails," and she should take this into account.

An interesting way of considering the problem can be put in terms of winning. Let's suppose that instead of asking Beauty about her degree of certainty regarding "heads", she is simply asked, "Did the coin come up heads or tails?", and, if she is correct, she is given a gold coin.

Now, again, she rightly determines (one hopes) that she is in one of three possible situations, in two of which the coin came up "tails". She has no other information. If the experiment is done only once, then Beauty, if she says "Tails" at every opportunity, will go home with either no gold coins (if the coin came up "heads") or with two gold coins (it showed "tails"). Conversely, if she says "heads" at every opportunity, she goes home with one gold coin or no gold coins. Though she knows the actual, isolated probability of the coin toss coming up "heads" is 1/2, an observer hopes (for the sake of Beauty's enrichment) that she realizes she will be asked about the coin's outcome twice if it came up "tails", and only once if it came up "heads".

To really bring this home to the analyst, instead of the gold coin going to our (awakened) Beauty, let's have the gold coin go into one of two accounts: that of the "heads" response, or that of the "tails" response. We run many (let's say N) iterations of the sequence. At the conclusion of those N iterations, which bank account should the analyst choose? And should that choice not be reflective of the "certainty" that Beauty has regarding the outcome of the coin toss?

The remainder is no more than a hill of beans.
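The two-accounts bookkeeping above is easy to simulate (a minimal sketch; the seed, trial count and account names are mine):

```python
import random

random.seed(0)
heads_account = tails_account = 0
N = 100_000  # iterations of the experiment
for _ in range(N):
    if random.random() < 0.5:
        heads_account += 1   # heads: one interview, one coin to the "heads" account
    else:
        tails_account += 2   # tails: two interviews, two coins to the "tails" account

print(heads_account / N, tails_account / N)  # roughly 0.5 and 1.0
```

The "tails" account ends up about twice as rich, which is the observable fact the two camps are arguing over.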

Probability paradoxes are designed to be confounding, and the Sleeping Beauty problem is "a beaut." Here is my take.

Let's start with the thirders' argument that there are three possible situations for SB when she is awakened, and they are indistinguishable to SB, therefore each has the same probability. This is not correct reasoning. Consider the following game: a fair coin is flipped; if it comes up tails, a second coin is flipped. SB sees nothing, so the three possible outcomes are "indistinguishable to her"; but their probabilities are nonetheless H 1/2, TH 1/4, TT 1/4.

Better arguments (but still not proofs) for either the thirders or the halfers can be fashioned by turning the problem into a precise game and determining SB's strategy in that game. Reader eJ has done so very elegantly; it is straightforward to prove that SB's correct strategy in eJ's Game 1 is bag H and in Game 2 bag T. As arguments for SB's original problem, these rely on mechanisms that are designed to motivate a probability estimator to choose her "correct probability."

Such mechanisms usually work, but fail here because of the weird circumstance that the number of times SB is queried depends on the answer.

Here is an example of such a mechanism, which I call the weatherman's game. You want to rate a weatherman who gives a daily prediction of the probability of rain; how do you do it? It seems at first reasonable to check whether (say) it rained 20% of the time that he said "20%," but knowing that it rains 20% of the days over the span of a year, the weatherman could just predict "20%" every day and come out looking like a genius.

One (not the only) way to combat this is to award 1 – (1 – p)^2 "points" to the weatherman when he predicts that it will rain with probability p and it does rain, and 1 – p^2 points when it doesn't. It's not hard to show he is now motivated to predict what he thinks is the "actual" probability.

We can try the same thing on SB, but wait: does she get rewarded twice if she's awakened twice, or is she awarded just once per experiment? If the former, she should say "I predict heads with probability 1/3"; if the latter, "1/2." In short, game arguments are not persuasive.

Another way to say it: Suppose the experiment is repeated 1000 times. Then the SB who guesses "heads" will be right with only 1/3 of her guesses, but she'll be right in half the experiments. Do we count guesses, or experiments?

What is persuasive is the halfers' argument that the "a priori" probability that the coin came up heads was 1/2, and since SB knew that she would be awakened, finding herself awake adds no information and cannot change the probability. This tells us that we should be counting experiments, not guesses; the two guesses that SB gets to make in the tails case are a snare and a delusion. There's only one coin!
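The truth-telling property of a quadratic scoring rule of this flavor can be checked numerically. The sketch below is mine (a Brier-style rule with particular constants I chose): with true rain probability q, the announced p that maximizes the expected score is p = q.

```python
from fractions import Fraction

def expected_score(p, q):
    """Expected score when the true rain probability is q and the weatherman
    announces p: award 1 - (1 - p)^2 on rain, 1 - p^2 on no rain."""
    return q * (1 - (1 - p) ** 2) + (1 - q) * (1 - p ** 2)

q = Fraction(1, 5)                            # it really rains 20% of days
grid = [Fraction(i, 100) for i in range(101)]
best = max(grid, key=lambda p: expected_score(p, q))
print(best)  # 1/5
```

The expected score is concave in p with derivative 2(q - p), so honest reporting is the unique maximum.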

Hello all – suppose we change the original experiment from

"tails means wakeup on Monday & Tuesday"

to

"tails means wakeup on Monday & Tuesday & Wednesday & the next million days"

Does that affect anyone's reasoning here?

(…my first reaction was also that this is similar to the Monty Hall problem.)

I'll attempt to answer my own question… my maths are rubbish, so I didn't want to clutter the question with them, but intuitively I'm a "thirder".

When I awake, I think the chance it is a particular day is:

( the chance the flip was heads or tails )

multiplied by

( the chance it is a *particular* day given the outcome of the flip )

So:

Monday, heads: 1/2 * 1/1

Monday, tails: 1/2 * 1/million

Tuesday, tails: 1/2 * 1/million

Wednesday, tails: 1/2 * 1/million

…

Millionth day, tails: 1/2 * 1/million

Even though it SEEMS like there are so many more opportunities to be awake following 'tails', the chance it is any particular day out of those million days is just 0.0000005.

The total likelihood of waking up on all those days following a tails outcome still just sums to 50%.

That surprised me – after working through it, I've become a "halfer"!

@John

If you want to become a thirder again, ask yourself, if you play the experiment out multiple times, how many times you'd be wrong in answering.

I am neither a mathematician, nor a statistician.

Therefore, to my relatively untrained mind, I see this as follows –

If SB tosses a head with an unbiased coin, this will lead her down one "timeline", 50% of the time.

If she tosses a tail, then similarly, this will take her down the only other possible "timeline", the other 50% of the time.

The "HEADS timeline" and the "TAILS timeline", should occur with equal probability.

If SB tosses a tail, and then is subsequently woken up, this act doesn't actually alter the 50% probability of the "TAILS timeline".

It doesn't matter, either, how many times SB is awakened; the "TAILS timeline" will still occur only 50% of the time.

Whether she is woken twice, or indeed a million times matters not a jot.

If SB is woken for the 648th time, then the probability of that happening is 1/648 x 0.5 (and NOT 1/649).

In general, the probability that she is awoken for the Nth time on the "TAILS timeline" is 1/N x 0.5 (and NOT 1/(N+1), as the thirders would have us believe).

I think.

So it's halfers, for me.

It's great to see more comments by both halfers and thirders. It's my hope to get to the bottom of the thought processes of both groups, because after all, that’s the only thing that’s at stake here. When it comes to the actual business of determining probabilities and making a bet, sophisticated* thirders and halfers agree on what the right strategy is. The laws of probability are not in question here – they are rock solid, and have a clear-cut answer for every specific question. What halfers and thirders are effectively arguing about are their intermediate mental states – it’s an argument to the effect “My way of thinking is better than yours!”

Nevertheless, like political debates, it does generate a lot of passion, so, as a committed thirder, let me fan the flames. The halfers here – eJ, Pace, Peter Winkler and others – insist that there is no "new" information when they are woken up. This question was eloquently answered by thirder Perry Clark, using standard Bayesian arguments. The prior probability of heads is one-half. When you are awakened, you have to update that with the fact that the frequency of being woken up on the tails time-line is 2/3. So the posterior probability of heads is 1/3. The proof of the pudding in probability problems lies in how you would bet on a standard, simple, even-money bet, and what your expectation would be. If SB is offered an even-money dollar bet at each waking on whether the coin came up heads or tails, she will make money only if she accepts the thirder position and goes with tails – the expectation is that she will come out fifty cents ahead per experiment if she bets on tails (reflecting a thirder's subjective probability), and fifty cents behind if she bets on heads (reflecting a halfer's).

I am sure most of the halfers here will agree with the above course of action, and bet on tails too. So my question is, what new information did you receive when the bet was offered? I can think of two possibilities. The first one is that the halfer is waiting for a fully-specified situation where you can calculate exact, objective, probabilities. In this case, my question is: why have an intermediate position at all? After all, every thirder was also once a halfer, based on objective probabilities on Sunday (the "fair coin" proviso guarantees that), but a thirder updates his or her subjective probability based on the additional information that the probability of waking is twice as much in the tails time-line, in the standard Bayesian way. This puts SB in the correct position to address a standard simple bet about the probability of heads based on the fact she finds herself awake. A halfer does not want to do the updating till objective probabilities are calculable, which means, in this case, that the halfer has no intermediate "subjective probability" at all.

The second reason why the halfers may not want to update their subjective probability is that the information about the wakings was given beforehand, and therefore was not "new" at all. As a thirder, if I were asked on Sunday after the rules were explained, I would say even then that my subjective certainty of heads on waking would be 1/3rd – it can be calculated beforehand with the information given. This situation is realized on waking, so the update takes effect right away. For me, that is the new information, but it seems that some halfers disagree. If these halfers will update their subjective probabilities only when information is given after waking, we can arrange the experiment so that the instructions about the waking and the Monday tails amnesia are given anew every time SB awakes. It makes no difference to the experiment – the situation is identical. Now what will these halfers' subjective probability be? If they still don't want to update it until an explicit bet is made, then as above, they show they have no use for intermediate subjective probabilities. If they do update it, and become thirders, then they are being inconsistent, because they have exactly as much information as in the original SB problem.

So it seems to me that halfers are essentially timid or cautious (pick your adjective!) about updating subjective probabilities, essentially having no use for them, but going back to first principles and calculating everything from scratch only when offered situations where the odds are objectively calculable. On the other hand, thirders are pragmatic or bumptious (again pick your adjective!): they update their subjective probabilities more easily and keep them in readiness for the obvious simple bet. If they are faced with more complex bets that are based on earlier states of credence (such as in eJ’s variation 1), they have to go back to first principles and recalculate (as the halfers have to do every time). Thus, whether you are a halfer or a thirder seems to be a reflection of your personality more than your intelligence.

Another way to look at the halfer-thirder argument is to recognize (as Peter Winkler and others have done) that the halfer counts experiments, whereas the thirder counts wakings. This is certainly true. But remember that in the original SB problem, there is only one experiment, and the question is asked of SB on each waking. So I disagree with Peter that the two guesses on the tails time-line are "a snare and a delusion." In fact they are very pertinent to the question asked, which pertains to wakings. What is a snare and a delusion is not recognizing on waking that you now have new information, just because you were given the instructions beforehand.

I must add that it is an honor to have my friend Peter Winkler comment on this column. Peter is an eminent puzzle guru, author of several books of very interesting and challenging puzzles. A few years ago, I paid tribute to Peter, calling him "The Puzzle Gourmet" in my Numberplay puzzle column, which featured a remarkable puzzle that he had made famous:

http://wordplay.blogs.nytimes.com/2010/10/18/numberplay-the-puzzle-gourmet/

Thank you for your comment, Peter. I loved your weatherman story. But sorry, I disagree with your last paragraph.

@John McLaughlin,

You had better hope that the aliens run their experiment for all eternity, because for Sleeping Beauty to realize any advantage from her choice it would take something like a billion billion times the age of the universe.

@John Mowat,

As Jesse pointed out, in the middle of your musings you suddenly switched to answering a different question from the one you started with. To give a cricketing analogy which you as a Brit will hopefully appreciate: "The bat turned in your hand when you were making the stroke.” You have to figure out the odds of tails on being woken up, not the total likelihood of being woken up along each of two time-lines (which is as you rightly calculated, one-half). The probability of the coin having come up heads, given that you find yourself awake in your example, is 1 in a million and one.
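That 1-in-a-million-and-one figure follows from weighting each timeline by its number of awakenings (a minimal check; the variable names are mine):

```python
from fractions import Fraction

# Heads yields 1 awakening, tails yields 1,000,000; weight each timeline
# by its 1/2 chance, then condition on "this is an awakening."
heads_weight = Fraction(1, 2) * 1
tails_weight = Fraction(1, 2) * 1_000_000
p_heads_given_awake = heads_weight / (heads_weight + tails_weight)
print(p_heads_given_awake)  # 1/1000001
```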

*By sophisticated, I mean those who do not fall into the trap of answering the wrong question. The question is about your subjective probability of heads, given all the knowledge you have. As I mentioned in the article, the subjective probability can be anywhere from 0 to 1 and does not have to be half just because the coin is fair. You are assigning odds on what you think actually happened (whether the coin actually came up heads or tails – it was either one or the other), not on what the possibility was before the coin was tossed (which is trivially half).

The original question (see http://www.maproom.co.uk/sb.html#sleeping), asks upon each wakeup: "what is your credence now for the proposition that our coin landed Heads?". This is emphatically a question to Beauty about the *experiment*. Half of experiments are Heads, and you have nothing during your one, two or a million wakeups that gives you any clue which half of the state space you're in. End of story.

The simple even-money bet argument proves nothing. Halfers will bet Tails (a bet, not a statement of allegiance) simply because the gain from a correct Tails bet is higher; we can show quantitatively how our Pr(Heads)=1/2 yields the correct choice. However, Thirders *cannot* show how their Pr(Heads)=1/3 yields the correct choice in the beans experiment.

btw, Pradeep's "second reason" for Halfer non-updating makes no sense at all. Halfers never update their probability in the standard SB experiment. The only relevant information about wakings is that they happen in either case, H or T, and are indistinguishable. There's no "intermediate subjective probability" here.

@eJ,

I realize now that your definition of credence is different than mine. I think you treat it more as an objective probability, which validates my point that halfers have no use for subjective probabilities.

For me, credence is simply which side I would pick in an even money bet in the situation I am in. So I don't have to make a calculation to show that Pr(Heads) = 1/3. I've already made it, and that's what my credence reflects.

I think we can agree that given a specific situation both of us get correct answers. As to what happens in our heads, to each his own.

I'm no expert, so I apologise in advance!

Question: looking on as a distant observer, we can imagine throwing a screen over the experiment, so that the outcome is hidden as in the description in the video. The result of the toss can be seen, and the experiment is repeated. We find that half of the throws are H, as expected. Now remove the screen and repeat. Surely no one can say that the probabilities will change.

The answer must be in the assumption that the waking events are of equal probability. If one side initiates a sequence of 100 awakenings, the cumulative probability of all those awakenings is still only a half.

It seems to me that the subjective probability "this is Monday" should be determined by the known probability of heads rather than the other way round.

@Pradeep, @eJ: Everyone can be right!

Following my earlier comment, I found this paper, which I think explains things better than I ever could: http://arxiv.org/ftp/arxiv/papers/0806/0806.1316.pdf

A hand-wavy argument for accepting that thirders and halfers are answering slightly different questions (and so are really just arguing about how the tricksy words translate to precise mathematics):

1. Statistics (Bayesian or otherwise) is probably not contradictory or wrong

2. Both the thirder and halfer camps have enough smart people that they probably aren't completely wrong

3. So the argument is probably one of defining the problem, and given the intractable split, two interpretations of the written problem are valid.

An attempt to summarize the kernel argument in the paper I linked:

1. The experiment setup gives SB all available information apart from the result of the coin-toss – there is no additional information gained by SB throughout the experiment – SB (like us) could calculate her optimal answer in advance and skip the unpleasant drugs.

2. The SB problem asks of SB "what is your belief the coin landed Heads" – but this question is not specific enough to define the event we are asking about (hence the apparent paradox)

3a. An answer of 1/2 is referring to the coin-toss event itself, without reference to the rest of the experiment.

3b. An answer of 1/3 is referring to the state of the coin in the awakening event.

4. The statistical games with jelly-beans or bets make the problem concrete – and put us firmly in either the coin-toss event or the awakening event, depending on the game. The simplest I can think of:

4a. SB makes a bet on the result of the coin on Monday. Should the coin be Tails, SB makes a 'dummy' bet on Tuesday (so she cannot infer any new information). SB has no preference between Heads or Tails – her "belief" is Pr(Heads)=1/2

4b. SB makes a bet on the result of the coin each day, and is paid each time she is correct. Knowing she will have more opportunity to maximize her winnings under a Tails timeline, she prefers to choose Tails, giving Pr(Heads)=1/3

In my mind, the above illustrates _why_ "given a specific situation both of us get correct answers". There can be no ambiguity when there is a clearly defined utility function to optimize – it is this clarity the original question is missing.
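The two games can be sketched as a short Monte Carlo (the payoff of 1 unit per correct counted bet, and the helper `simulate`, are my own assumptions for illustration):

```python
import random

def simulate(n_trials=100_000, seed=1):
    random.seed(seed)
    pay_a = {"H": 0, "T": 0}  # game 4a: one counted bet per experiment
    pay_b = {"H": 0, "T": 0}  # game 4b: one paid bet per awakening
    for _ in range(n_trials):
        coin = random.choice("HT")
        awakenings = 1 if coin == "H" else 2
        for guess in "HT":
            pay_a[guess] += (guess == coin)               # Tuesday bet is a dummy
            pay_b[guess] += awakenings * (guess == coin)  # paid per correct awakening
    per_trial = lambda pay: {g: pay[g] / n_trials for g in "HT"}
    return per_trial(pay_a), per_trial(pay_b)

ev_a, ev_b = simulate()
print(ev_a)  # roughly {'H': 0.5, 'T': 0.5} -> indifferent, matching Pr(Heads) = 1/2
print(ev_b)  # roughly {'H': 0.5, 'T': 1.0} -> prefer Tails, matching Pr(Heads) = 1/3
```

The same coin flips drive both games; only the utility function differs, and with it the "right" credence.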

@Josh, @Pradeep, @eJ

Thanks for the link Josh, I will complement it by a new one, from the same author, it's ongoing research: http://www.qi.damtp.cam.ac.uk/sites/default/files/QSBP_pitts.pdf

May I also present my 'research' in the same field, in the form of a computed image, http://i.stack.imgur.com/dugFZ.png, which I hope makes us think about how many ways we can try to describe the same things, and yet fail to convince one another that we could all be correct.

I also have a PUZZLE SUGGESTION, a Toy Model, a nice one – Thanks

Some of you may be interested in the literature in philosophy on this intriguing puzzle. A detailed bibliography may be found at

http://philpapers.org/browse/sleeping-beauty/

@Josh

Of course you are right – thirders and halfers are answering different questions, as Paul Smaldino and others here have pointed out, and I also acknowledged. But it seems to me that halfers are addressing the less interesting, almost trivial question in the context of the SB problem. The entire paraphernalia of the problem – the number of wakings (whether one, two or one million), the selective amnesia, the asking of the question on each waking – all these things mean nothing to them: their answer will reflect the original fair coin probability no matter what. I'd like to explore the halfer mindset a little deeper, to see what kind of information would cause them to update their subjective probability for heads.

Let’s abolish the Tuesday waking. SB is told that she will be awakened on Monday on the basis of a fair three-sided die. If the original coin landed heads she is woken up on Monday with probability 1/3. If the original coin landed tails she is woken up on Monday with probability 2/3. Again the question is put to SB on waking as in the original problem – what is your credence (I prefer "subjective probability") that the coin landed heads. Now do halfers still say one-half or do they update their subjective probability of heads to one-third?

It seems to me that this problem is very similar to the standard textbook example of Bayesian updating. If your prior probability of having a disease is 1/2, and you are tested using a procedure that is correct 2/3rds of the time, your chance of having the disease is 2/3 if the test comes up positive and 1/3 if the test comes up negative. To me as a thirder, this situation is the same as in the SB problem: in both cases, you are twice as likely to be on the tails timeline on waking – SB’s Monday amnesia in the original problem ensures that. I’d be interested to know why halfers consider this different (assuming they agree that the Bayesian calculation is right).
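Both the die variant and the disease test reduce to the same one-step Bayes update; a sketch in Python with exact fractions (the helper `posterior` is my own, with the likelihoods as given above):

```python
from fractions import Fraction

def posterior(prior_h, p_obs_given_h, p_obs_given_not_h):
    """P(H | observation) by one Bayes update."""
    num = prior_h * p_obs_given_h
    return num / (num + (1 - prior_h) * p_obs_given_not_h)

half = Fraction(1, 2)

# Die variant: woken with probability 1/3 after heads, 2/3 after tails;
# condition on the observation "I am awake on Monday".
p_heads_given_awake = posterior(half, Fraction(1, 3), Fraction(2, 3))

# Disease test: prior 1/2, test correct 2/3 of the time.
p_disease_given_pos = posterior(half, Fraction(2, 3), Fraction(1, 3))
p_disease_given_neg = posterior(half, Fraction(1, 3), Fraction(2, 3))

print(p_heads_given_awake, p_disease_given_pos, p_disease_given_neg)  # 1/3 2/3 1/3
```

Structurally the two problems are identical: a fair prior, an observation twice as likely under one hypothesis as the other.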

I get the probability calculations and see how they work out. My question is: do we know whether future events influence past events? Quantum mechanics suggests that a future event can influence a past event through information. In that case the wakings influence the coin flip, and I would think the thirders are right. But if the future and past are really distinct and the events are truly unconnected by information, then the halfers are right.

I have an argument related to that of my fellow halfer @Seth above. I think the key concept in the original problem is distinguishability. (This is a similar situation to quantum vs. classical statistics, where the statistical mechanics equations are different for distinguishable and indistinguishable particles.) If, as in the scenario that @JohnBal posits above, there is some way for SB upon awakening to gain additional information about the original coin flip (analogous to distinguishing between classical particles), then this new information can have a Bayesian influence on the coin probabilities. However, as @Pace implies in his comment above, in the original problem there is no way for SB to gain any new information upon awakening (analogous to the fact that there is no way to distinguish between identical quantum particles). This being the case, there is no applicable Bayesian argument, and the original "fair" coin toss probabilities must hold true.

I would like to analyze the original problem.

I'm going to do that by using the definition of conditional probability

(see https://en.wikipedia.org/wiki/Conditional_probability#Conditioning_on_an_event).

P(B|A)=P(A&B)/P(A)

but apply it in this equivalent form:

P(A&B)=P(A)*P(B|A)

SB is awakened and interviewed. There are three mutually exclusive cases:

Case 1. The coin toss was heads & the day is Monday

Case 2. The coin toss was tails & the day is Monday

Case 3. The coin toss was tails & the day is Tuesday

Case 1.

P(The coin toss was heads & the day is Monday) =

P(The coin toss was heads)*P(the day is Monday | The coin toss was heads)

P(The coin toss was heads) = .5

— When the coin toss was heads and SB is awakened and interviewed it must be Monday.

P(the day is Monday | The coin toss was heads) =1.0

So the probability of Case 1. is .5*1.0 = .5

Case 2. (Case 3. is similar)

P(The coin toss was tails & the day is Monday) =

P(The coin toss was tails)*P(the day is Monday | The coin toss was tails)

P(The coin toss was tails) = .5

— When the coin toss was tails and SB is awakened and interviewed she has no way of knowing whether it is Monday or Tuesday. Also, when the toss is tails, both days occur with the same frequency, so I suppose she would have to consider each day equally likely.

P(the day is Monday | The coin toss was tails) = .5

Hence the probability of Case 2. is .5*.5 = .25

Is it possible that some people believe that all three cases are equally likely? They are not.

Not to beat a dead horse, but in my previous comment I didn't explicitly finish the analysis of the original problem using conditional probability.

The original problem stated:

Whenever Sleeping Beauty is awakened and interviewed, she won’t know which day it is or whether she has been awakened before.

During each awakening, she is asked: “What is your degree of certainty that the coin landed heads?” What should her answer be?

In my previous comment I showed there are three possible cases for each awakening, with probabilities:

Case 1. P(The coin toss was heads & the day is Monday) = .5

Case 2. P(The coin toss was tails & the day is Monday) = .25

Case 3. P(The coin toss was tails & the day is Tuesday) = .25

Since SB has no way of knowing which of the three cases she has awakened in and Case 1. is the only one with heads, her answer should be .5.
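A quick simulation of the model exactly as set up in these two comments – one awakening drawn per experiment, with the day chosen uniformly when the toss is tails (the sample size and seed are arbitrary) – reproduces the stated case probabilities:

```python
import random

def sample_cases(n=200_000, seed=2):
    random.seed(seed)
    counts = {("H", "Mon"): 0, ("T", "Mon"): 0, ("T", "Tue"): 0}
    for _ in range(n):
        coin = random.choice("HT")
        # Per this model: given tails, the interview day is taken to be
        # Monday or Tuesday with equal probability.
        day = "Mon" if coin == "H" else random.choice(["Mon", "Tue"])
        counts[(coin, day)] += 1
    return {k: v / n for k, v in counts.items()}

freq = sample_cases()
print(freq)  # roughly {('H','Mon'): .5, ('T','Mon'): .25, ('T','Tue'): .25}
```

Note that if every awakening were tallied instead of one per experiment, heads would account for only a third of the rows – which is exactly the sampling choice thirders and halfers disagree about.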

To me there is a very simple viewpoint that proves that the halfers are correct. SB knows beforehand that she is going to wake up for sure (100% probability). Also she cannot know if it's the first or second time she is awoken. So upon finding herself awake there is no new information that she didn't have beforehand. There is nothing to Bayesian update – you only have the prior – which is 0.5 heads.

This is basically just agreeing with what Patrick O'Keefe argued very clearly on January 29 at 3:15p.

The "problem" with the Sleeping Beauty Problem, is trying to define the probability distribution of a random variable when the number of samples depends on the random variable. But there is a simple way to change the variables and eliminate this problem. Use four volunteers and only one coin.

Each volunteer will go through a similar set of experiences on Monday and/or Tuesday, but the specifics will vary. On Monday after Heads, all but SB1 will be wakened. On Monday after Tails, all but SB2 will be wakened. On Tuesday after Heads, all but SB3 will be wakened. On Tuesday after Tails, all but SB4 will be wakened. And instead of "the probability of Heads" or "the probability of Tails," each is asked for the probability that she will be wakened only once during the experiment.

A simple comparison shows that SB3 is following the exact same schedule as the original Sleeping Beauty, and that the questions they are asked are functionally equivalent. Further, while the other three are following different schedules, all of the issues involved are fundamentally the same, so they all should give the same answer.

But that answer is now trivial to discern. Any of the volunteers who is awake knows (A) that she is one of three volunteers who are awake, (B) that exactly one of the three who are awake will satisfy the condition in the question she has been asked, and (C) that each of the three who are awake is equally likely to be the one.

The answer is 1/3.
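Claim (B) can be checked by brute-force enumeration (the dictionary layout and names below are my own encoding of the schedule described):

```python
from itertools import product

# Who stays asleep on each (coin, day), per the schedule: 1..4 = SB1..SB4.
sleeper = {("H", "Mon"): 1, ("T", "Mon"): 2, ("H", "Tue"): 3, ("T", "Tue"): 4}
volunteers = {1, 2, 3, 4}

def days_awake(v, coin):
    return [d for d in ("Mon", "Tue") if sleeper[(coin, d)] != v]

# For each (coin, day): of the three volunteers awake, who is wakened
# exactly once during the whole experiment?
once_awake = {}
for coin, day in product("HT", ("Mon", "Tue")):
    awake = volunteers - {sleeper[(coin, day)]}
    once_awake[(coin, day)] = [v for v in awake if len(days_awake(v, coin)) == 1]

print(once_awake)  # exactly one qualifying volunteer among the three, in every case
```

In every (coin, day) combination exactly one of the three awake volunteers satisfies the condition, which is what makes the 1/3 answer immediate under assumption (C).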

The third day is irrelevant in the sense that there is no determining factor in flipping the coin for Beauty. The experiment will be over, and she will awaken on Wednesday regardless of the result of the coin toss. For Beauty, the probability simply becomes 1/2 because of the two sides of the coin, and one toss that she is aware of on Wednesday. It's like asking someone the probability of heads or tails, alone, regardless of the history of the two tails tosses that kept her asleep. Or is she supposed to consider a history she is unaware of?