The sleeping beauty paradox is a classic among philosophers, decision theorists and statisticians. Here’s my take on it.
Sleeping Beauty is put to sleep on Sunday and awakened either on the following day, Monday, or on the following two days, Monday and Tuesday. The story goes that Sleeping Beauty never remembers being put back to sleep. The official paradox ascribes that effect to an amnesia-inducing drug, but it could well be that Sleeping Beauty is sleeping figuratively as well. Whether she is awakened once or twice depends on a fair coin toss performed on Sunday. If the coin comes up heads, Sleeping Beauty is awakened on Monday only. If the coin comes up tails, Sleeping Beauty is awakened on Monday and on Tuesday. At the end of the experiment, on either Monday or Tuesday, she is asked to assign probabilities to the scenario where the coin landed heads and the scenario where the coin landed tails.
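To make the set-up concrete, here is a minimal simulation sketch of the protocol (the function and variable names are mine, not part of the original story). It tallies two quantities, runs and awakenings, which the two answers below divide by in different ways:

```python
import random

def simulate(n_runs: int = 100_000) -> None:
    """Run the Sleeping Beauty protocol n_runs times and tally outcomes."""
    heads_runs = 0        # runs in which the coin landed heads
    heads_awakenings = 0  # awakenings occurring in heads runs (1 per run)
    tails_awakenings = 0  # awakenings occurring in tails runs (2 per run)
    for _ in range(n_runs):
        if random.random() < 0.5:   # fair coin: heads
            heads_runs += 1
            heads_awakenings += 1   # awakened on Monday only
        else:                       # tails
            tails_awakenings += 2   # awakened on Monday and Tuesday
    total_awakenings = heads_awakenings + tails_awakenings
    print(f"heads frequency per run:       {heads_runs / n_runs:.3f}")                 # ~0.5
    print(f"heads frequency per awakening: {heads_awakenings / total_awakenings:.3f}") # ~0.333

simulate()
```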
One way Sleeping Beauty could answer the question is as follows:
[Half] I don’t remember whether I woke up once or twice. It’s a fair coin. So, it’s equally likely that the coin landed heads as that it landed tails, and the probability of each scenario is 1/2.
Another way Sleeping Beauty could answer the question is as follows:
[Third] I woke up. Given how the experiment is set up, if the experiment were run many times, I would wake up in tails runs twice as often as in heads runs. So, the probability that the coin landed tails is 2/3 while the probability that the coin landed heads is 1/3.
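To spell out the counting behind [Third] with round numbers: imagine 1,000 runs of the experiment. About 500 land heads, producing 500 awakenings; about 500 land tails, producing 1,000 awakenings. Of the 1,500 awakenings in total, 1,000 / 1,500 = 2/3 occur in tails runs and 500 / 1,500 = 1/3 in heads runs.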
How should Sleeping Beauty answer?
Let’s formalise the problem.
The coin is fair. Hence, the worldly probabilities of heads and tails are equal:
P_World(H) = P_World(T) = 1/2
The question asked: what probability should Sleeping Beauty assign to the scenario that the coin landed heads and to the scenario that it landed tails:
P_SleepingBeauty(H) = ???
The first answer available to Sleeping Beauty appeals to the coin’s intrinsic probabilities. She knows that a fair coin has been tossed and that the day(s) on which she wakes up depend on the outcome of that toss. So, Sleeping Beauty relies on her knowledge of the worldly probabilities to hold that the probability that the coin toss came up heads is 1/2.
The second answer available to Sleeping Beauty appeals to the chain of events. This line of reasoning exploits the asymmetry between the two scenarios.
P(Heads scenario) = P(H, awakened on Mon only) = 1/3
P(Tails scenario) = P(T, awakened on Mon and Tue) = 2/3
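The asymmetry is easiest to see by enumerating the possible awakening events, each equally likely under [Third]’s per-awakening counting:

P(Mon, H) = 1/3
P(Mon, T) = 1/3
P(Tue, T) = 1/3

The heads scenario covers one of the three awakening events; the tails scenario covers the other two.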
Waking up happens twice as often in tails worlds as in heads worlds. Sleeping Beauty, being agnostic about the type of world she’s in, could reason as follows: since I woke up, there is a 2/3 probability that I am in a tails world and a 1/3 probability that I am in a heads world.
Bayesian reasoning justifying this ‘move’:
P(H | Waking up) = 1 - P(T | Waking up) = P(Waking up | H)*P(H) / P(Waking up)
P(T | Waking up) = 1 - P(H | Waking up) = P(Waking up | T)*P(T) / P(Waking up)
Looking only at the ratio, the denominator can be dropped since it acts as a normalisation constant:
P(H | Waking up) / P(T | Waking up)
= [P(Waking up | H)*P(H) / P(Waking up)] / [P(Waking up | T)*P(T) / P(Waking up)]
= P(Waking up | H) / P(Waking up | T)
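Plugging in numbers shows where the two camps diverge (the likelihoods below are my reading of each position, not part of the original puzzle). The halfer notes that Sleeping Beauty wakes up at least once in every run, whatever the coin does, so P(Waking up | H) = P(Waking up | T) = 1, the ratio is 1:1, and P(H | Waking up) stays at 1/2. The thirder instead counts awakening-moments: of the two candidate days, a heads run fills one and a tails run fills both, so P(Waking up | H) / P(Waking up | T) = (1/2) / 1 = 1/2, the odds become 1:2, and P(H | Waking up) = 1/3.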
Explanation [Half] does not consider the chain of events leading up to the awakening of Sleeping Beauty. She considers the coin toss as an isolated, independent event, and that’s how she evaluates its probability.
Explanation [Third] starts from the conditional probabilities of heads and tails given the fact that she woke up. Hence, there is a tacit assumption of probabilistic dependency between the coin toss and waking up.
The experimenter knows about the probabilistic dependency between the coin toss and waking up, since they introduced it. Sleeping Beauty, however, from her perspective, does not and should not know about these links. It’s part of the set-up of the thought experiment that she does not remember the number of times she has gone back to sleep. The correct probabilistic assessment of the coin toss for her is [Half].
The scientist is playing a demon: having a higher perspective than Sleeping Beauty, they sneak inaccessible knowledge into Sleeping Beauty’s probabilistic estimation space. The probability considered in [Third] is something akin to
P_SleepingBeauty(H | Experimental Setup) = P_SleepingBeautyAsSeenByExperimenter(H)
When judging other people’s operating model of the world, we should be careful not to intermingle our own knowledge, perspective and foresight into our evaluation of their perspective. A (near) perfectly rational agent might look extremely stupid, or extremely ingenious, when viewed from a significantly different knowledge background.