This article is from the Puzzles FAQ,
by Chris Cole firstname.lastname@example.org and Matthew Daly
email@example.com with numerous contributions by others.
Monty and Waldo play a game with N closed boxes. Monty hides a
dollar in one box; the others are empty. Monty opens the empty boxes
one by one. When there are only two boxes left Waldo opens either box;
he wins if it contains the dollar. Prior to each of the N-2 box
openings Waldo chooses one box and locks it, preventing Monty from
opening it next. After that opening the box is unlocked again; no box
may be locked twice in a row.
What are the optimal strategies for Monty and Waldo and what is the
fair price for Waldo to pay to play the game?
For large N the fair price approaches (1 - 1/e) of a dollar,
$0.6321205588285576784...; I will offer a proof along with optimal
strategies.
Denote the game with N boxes as G_N(). After (N-M) rounds of play the
game has the same form as G_M(). Depending on the strategies, each of
the M boxes will have a probability p_i of containing the dollar, with
p_1 + p_2 + ... + p_M = 1
Let Waldo lock the M'th box (renumbering the boxes if necessary), and
denote the game, and Waldo's expected winnings in it, by
G_M(p_1, p_2, ..., p_M)
When p_2 = p_3 = ... = p_M we adopt the abbreviation
G_M(b) = G_M(1 + b - Mb, b, b, ..., b)
and note that since probabilities are never negative:
1 + b - Mb >= 0, or
0 <= b <= 1 / (M-1)
Various G_M(p_1, p_2, ..., p_M) have difficult solutions, but we are
asked only to solve G_M(1/M), and it turns out we can accomplish this
by considering only the games
G_M(b) where 1/M <= b <= 1/(M-1)        (*)
Games of this form will be said to satisfy constraint (*).
Induction is used for one of the theorems, so we'd better solve G_2 and G_3
for our basis.
G_2(p_1, p_2) = max (p_1, p_2)
G_3(p_1, p_2, p_3) = max (p_1 + p_2, p_3)
since after Monty opens box #1, box #2 will have probability (p_1 + p_2)
and vice versa. When the probabilities satisfy constraint (*):
G_2(b) = G_2(1-b, b) = b
G_3(b) = G_3(1-2b, b, b) = 1 - b
The proof of Theorem 1 is based on the probability p_i that box #i
contains the dollar. (Of course this is Waldo's perceived probability:
Monty's probability would be 0 or 1.) It may seem wrong for Waldo to
"forget" the game history and remember only the computed p_i. For
example he may have previously locked some but not all of the boxes
and this fact is ignored except in the calculation of p_i. Or Monty may
have some higher-level "plan" which might not seem to translate directly
into probabilities. But probability algebra obeys some simplifying
linearity rules and, as long as Monty keeps a "poker face", the
probability model is the only thing Waldo has to act on.
Especially paradoxical is the derivation of Waldo's p_i in his trivial
strategy below: he can adopt inferior but "correct" p_i to simplify the
analysis.
Theorem 1: If b >= 1/M then
G_M(b) = G_[M-1]( (1-b) / (M-2) )
We will show that Monty and Waldo each have a strategy in G_M(b) to
reduce the game to G_[M-1](b, q, q, ..., q) where q = (1-b) / (M-2)
and where the boxes have been renumbered so that box #1 was box #M
(the one Waldo locked) from the prior round and the new box #(M-1)
is the one Waldo locks next. Note that if Monty indeed arranges
the probability mixture G_[M-1](b, q, q, q, q, ..., q) it won't
matter which box Waldo locks (Box #1 has the only non-equal
probability but Waldo cannot lock the same box twice in a row);
this is a typical property of "saddlepoint" strategy.
We will derive the necessary and sufficient condition for Monty to
reduce any game G_M(p_1, p_2, p_3, ..., p_M) to a single game with
the form G_[M-1](b, q, q, ..., q). Using the numbering of G_M()
let R(i,j) be the joint probability that Box #i contains the dollar
and Box #j is opened by Monty in G_M(). We need consider only M >= 3.
The R(i,j) must satisfy
R(i, j) >= 0
R(i, i) = 0
R(i, M) = 0, i < M
sum_over_j R(i,j) = p_i
and to achieve q_2 = q_3 = ... = q_[M-1] in G_[M-1],
R(1, j) = R(k, j)
for 1 < j,k < M and j != k
R(2, 1) = R(k, 1)
for 2 < k < M
and to make G_[M-1] be independent of Monty's play
R(M, j)/R(1, j) = R(M, 2)/R(1, 2)
for 2 < j < M
R(M, 2)/R(1, 2) = R(M, 1)/R(2, 1)
The above have a simple unique solution:
R(i, j) = (1 - p_M) / (M - 2) - p_j
for i,j < M and i != j
R(M, j) = p_M - (M-2) p_j p_M / (1 - p_M)
for j < M
and every R is non-negative exactly when, for all j < M,
p_j * (M-2) + p_M <= 1          (**)
For the theorem we are given that G_M(b) satisfies constraint (*),
1 / M <= b <= 1 / (M - 1)
which implies the weaker inequality
(M - 3) / (M^2 - 3M + 1) <= b <= 1 / (M - 1)
and since for a G_M() satisfying constraint (*)
p_j = b or p_j = (1 + b - Mb) for all j
the inequality (**) follows directly.
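The claimed solution can be checked numerically. The sketch below
(the function name and the explicit row-M scaling c = p_M * (M-2) /
(1-p_M) are mine, not from the article) builds R for a
constraint-compliant game in exact arithmetic, then verifies the row
sums and the conditional probabilities left after Monty opens a box:

```python
from fractions import Fraction

def monty_joint(p):
    """R[i][j] = P(dollar in box i+1 AND Monty opens box j+1) for
    G_M(p_1,...,p_M) with Waldo's lock on box M (no column for box M).
    Every row i < M shares a common column value a_j; row M is a
    scaled copy of that common row."""
    M = len(p)
    S = (1 - p[-1]) / (M - 2)          # S = a_1 + ... + a_[M-1]
    a = [S - pj for pj in p[:-1]]      # R(i,j) = a_j for i < M, i != j
    c = p[-1] / S                      # row M: R(M,j) = c * a_j
    R = [[Fraction(0)] * (M - 1) for _ in range(M)]
    for i in range(M - 1):
        for j in range(M - 1):
            if i != j:
                R[i][j] = a[j]
    for j in range(M - 1):
        R[-1][j] = c * a[j]
    return R

# G_5(b) with b = 11/50, so 1/5 <= b <= 1/4 and constraint (*) holds
M, b = 5, Fraction(11, 50)
p = [1 + b - M * b] + [b] * (M - 1)
R = monty_joint(p)
assert all(x >= 0 for row in R for x in row)   # genuine probabilities
assert [sum(row) for row in R] == p            # marginals recovered
for j in range(M - 1):                         # after Monty opens box j+1:
    col = sum(R[i][j] for i in range(M))
    assert R[-1][j] / col == b                 # locked box keeps prob b
    for i in range(M - 1):
        if i != j:
            assert R[i][j] / col == (1 - b) / (M - 2)  # rest equalized
```

The asserts confirm that whichever box Monty opens, Waldo's updated
probabilities are exactly the G_[M-1](b, q, ..., q) mixture claimed in
the proof.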
Hence Monty can transpose G_M(b) into G_[M-1]( (1-b) / (M-2) )
whenever b >= 1/M and M >= 3. (This G_[M-1] also happens to
satisfy constraint (*), as needed for the next theorem.)
It should be easy to argue that this strategy is optimal for Monty,
but we want to derive Waldo's best strategy anyway and if it
guarantees the same value we know we're at the "saddlepoint".
If Waldo knows Monty has a non-optimal strategy he can take
advantage of it, but we will just derive a strategy good enough
to achieve the saddle-point value.
Monty must transform G_M(b) into some
G_[M-1](b, q_2, q_3, ..., q_[M-1])
where Waldo has the choice of locking any of boxes #2 through #(M-1).
If Waldo locks each of the (M-2) available boxes with probability
1/(M-2) it is easily seen that the average probability that he
locks the box with the dollar is (1-b) / (M-2), and the probabilities
q_2, q_3, ..., q_[M-1] will also have the average value (1-b)/(M-2).
If Waldo pretends to "forget" which box he picked before, he can
take q_2 = q_3 = ... = (1-b)/(M-2) thereby constructing the same
game Monty constructed with his saddlepoint strategy!
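The averaging step is elementary but worth making concrete: whatever
split q_2, ..., q_[M-1] Monty arranges (summing to 1-b), Waldo's
uniform random lock protects the dollar with average probability
(1-b)/(M-2). A sketch in exact arithmetic (the example split is mine):

```python
from fractions import Fraction

def avg_lock_prob(qs):
    """Waldo locks one of the len(qs) available boxes uniformly at
    random; return the average probability the locked box holds the
    dollar."""
    return sum(qs, Fraction(0)) / len(qs)

# M = 5, b = 1/5: Monty splits the mass 1 - b = 4/5 among M-2 = 3
# available boxes, as unevenly as he likes.
M, b = 5, Fraction(1, 5)
qs = [Fraction(1, 2), Fraction(1, 5), Fraction(1, 10)]
assert sum(qs) == 1 - b
assert avg_lock_prob(qs) == (1 - b) / (M - 2)  # independent of split
```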
In the above Waldo in effect "degraded" the accuracy of his
probability estimates with the substitutions
q_j' = (q_2 + q_3 + ... + q_[M-1]) / (M - 2)    for j = 2, ..., M-1
If Waldo "knows" more than this, he can pretend he doesn't!
For example he can ask Monty to secretly shuffle the boxes.
Thus either player can force
G_M(b), b >= 1/M
down to G_[M-1]( (1-b)/(M-2) ), so this must be the saddlepoint.
Theorem 2: If b >= 1/M then
G_M(b) = 1 - 1/2! + 1/3! - ... - (1-b)(-1)^M/(M-2)!
= - sum (-1)^i/i! - (1-b)(-1)^M/(M-2)!
where the sum is over i = 1, 2, 3, ..., M-3
The proof is by induction. We know the theorem holds for M = 3
and we will assume it holds for (M-1). Set
c = (1-b) / (M-2)
We noted earlier that b <= 1/(M-1): otherwise p_1 = (1 + b - Mb)
is negative; hence we obtain
c = (1-b)/(M-2) >= (1 - 1/(M-1)) / (M-2)
c >= 1/(M-1)
Thus the condition of the inductive hypothesis is satisfied and
G_[M-1](c) = 1 - 1/2! + 1/3! - ... + (1-c)(-1)^M/(M-3)!
But from Theorem 1
G_M(b) = G_[M-1](c)
and from the definition of c,
c/(M-3)! = (1-b)/(M-2)!
so the final term (1-c)(-1)^M/(M-3)! of G_[M-1](c) equals
(-1)^M/(M-3)! - (1-b)(-1)^M/(M-2)!
i.e. exactly the i = M-3 term plus the final term of the series for
G_M(b), which establishes the theorem.
Corollary: G_M(1/M) = G_M(1/M, ..., 1/M) = 1 - 1/2! + 1/3! - ... -(-1)^M/M!
This follows directly from Theorem 2 and the observation that
(1/M)/(M-2)! = 1/(M-1)! - 1/M!
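Both results are easy to machine-check in exact arithmetic. The
sketch below (function names are mine) evaluates G_M(1/M) by iterating
Theorem 1's reduction from the base case G_3(b) = 1 - b, and compares
the result with the Corollary's alternating series:

```python
from fractions import Fraction
from math import factorial

def G(M, b):
    """G_M(b) via Theorem 1's reduction G_M(b) = G_[M-1]((1-b)/(M-2)),
    valid while b >= 1/M; base case G_3(b) = 1 - b."""
    if M == 3:
        return 1 - b
    return G(M - 1, (1 - b) / (M - 2))

def series(M):
    """Corollary: G_M(1/M) = 1 - 1/2! + 1/3! - ... - (-1)^M/M!"""
    return -sum(Fraction((-1) ** i, factorial(i)) for i in range(1, M + 1))

# The recursion and the closed-form series agree exactly
for M in range(3, 20):
    assert G(M, Fraction(1, M)) == series(M)
```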
For large M, G_M(1/M) approaches (1 - 1/e). It will be a little bigger
when M is odd and a little smaller when M is even. I've appended the
numeric values below.
Solution for M = 2 : 0.5000000000
Solution for M = 3 : 0.6666666667
Solution for M = 4 : 0.6250000000
Solution for M = 5 : 0.6333333333
Solution for M = 6 : 0.6319444444
Solution for M = 7 : 0.6321428571
Solution for M = 8 : 0.6321180556
Solution for M = 9 : 0.6321208113
Solution for M = 10 : 0.6321205357
. . .
Solution for M = 52 : 0.6321205588
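The dc script that originally generated this table did not survive the
archive; an equivalent Python fragment (a reconstruction, not the
original code) is:

```python
from math import exp, factorial

# G_M(1/M) = 1 - 1/2! + 1/3! - ... - (-1)^M/M!, which tends to 1 - 1/e
for M in list(range(2, 11)) + [52]:
    g = -sum((-1) ** i / factorial(i) for i in range(1, M + 1))
    print(f"Solution for M = {M} : {g:.10f}")
```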
P. S. There are related unsolved problems:
(a) what about G_M(p_1, p_2, ..., p_M) that do not fit the pattern used
in the above solution?
(b) what if two boxes contain dollars? (first, what should the rules be?)
-- firstname.lastname@example.org (James Allen)