This article is from the Puzzles FAQ, by Chris Cole chris@questrel.questrel.com and Matthew Daly mwdaly@pobox.com with numerous contributions by others.

Newcomb's Problem

A being has put one thousand dollars in box A and either zero or one million
dollars in box B, and presents you with two choices:

(1) Open box B only.

(2) Open both box A and box B.

The being put money in box B only if it predicted you would choose option (1).
The being put nothing in box B if it predicted you would do anything other than
choose option (1) (including choosing option (2), flipping a coin, etc.).

Assuming that you have never known the being to be wrong in predicting your
actions, which option should you choose to maximize the amount of money you
get?

decision/newcomb.s

This is "Newcomb's Paradox".

You are presented with two boxes: one certainly contains $1000 and the
other might contain $1 million. You can either take one box or both.
You cannot change what is in the boxes. Therefore, to maximize your
gain you should take both boxes.

However, it might be argued that you can change the probability that
the $1 million is there. Since there is no way to change whether the
million is in the box or not, what does it mean that you can change
the probability that the million is in the box? It means that your
choice is correlated with the state of the box.

Events which proceed from a common cause are correlated. Your mental
states lead to your choice and, very probably, to the state of the box.
Therefore your choice and the state of the box are highly correlated.
In this sense, your choice changes the "probability" that the money is
in the box. However, since your choice cannot change the state of the
box, this correlation is irrelevant.
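The common-cause correlation can be sketched numerically. The following simulation is not part of the original FAQ; it assumes a hypothetical predictor that reads the same mental disposition that later produces your choice, and gets it right 95% of the time (an arbitrary figure). Conditioning on the choice then shows the claimed correlation with the box's contents, even though the choice never causes them.

```python
import random

random.seed(0)
ACC = 0.95        # assumed predictor accuracy (not given in the puzzle)
TRIALS = 100_000

million_given_one = n_one = 0
million_given_both = n_both = 0
for _ in range(TRIALS):
    # A common cause: your disposition drives both the prediction and the choice.
    disposition = random.choice(["one", "both"])
    # The predictor reads the disposition, with accuracy ACC.
    if random.random() < ACC:
        predicted = disposition
    else:
        predicted = "both" if disposition == "one" else "one"
    box_b = 1_000_000 if predicted == "one" else 0
    choice = disposition          # your choice proceeds from the same cause
    if choice == "one":
        n_one += 1
        million_given_one += (box_b > 0)
    else:
        n_both += 1
        million_given_both += (box_b > 0)

# P(million in box B | take one) is close to ACC; P(... | take both) close to 1 - ACC.
print(million_given_one / n_one)
print(million_given_both / n_both)
```

The correlation appears purely because disposition is a common cause of both variables; no run of the loop ever lets the choice alter a box that has already been filled.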

The following argument might be made: your expected gain if you take
both boxes is (nearly) $1000, whereas your expected gain if you take
one box is (nearly) $1 million; therefore you should take one box.
However, this argument is fallacious. In order to compute the
expected gain, one would use the formulas:

E(take one)  = $0 * P(predict take both | take one)
             + $1,000,000 * P(predict take one | take one)

E(take both) = $1,000 * P(predict take both | take both)
             + $1,001,000 * P(predict take one | take both)

While you are given that P(do X | predict X) is high, it is not given
that P(predict X | do X) is high. Indeed, specifying that P(predict X
| do X) is high would be equivalent to specifying that the being could
use magic (or reverse causality) to fill the boxes. Therefore, the
expected gain from either action cannot be determined from the
information given.
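To make the dependence on the unspecified probabilities concrete, here is a small sketch (my own illustration, with assumed numbers) that evaluates the two formulas above as functions of q1 = P(predict take one | take one) and q2 = P(predict take one | take both). One-boxing only comes out ahead for particular values of these posteriors, which the puzzle never supplies.

```python
def expected_gains(q1, q2):
    """Return (E[take one], E[take both]) for the given posteriors
    q1 = P(predict take one | take one), q2 = P(predict take one | take both)."""
    e_one = 0 * (1 - q1) + 1_000_000 * q1
    e_both = 1_000 * (1 - q2) + 1_001_000 * q2
    return e_one, e_both

# The fallacious step assumes the posteriors mirror the predictor's
# accuracy, e.g. q1 = 0.99 and q2 = 0.01: then one-boxing looks better.
print(expected_gains(0.99, 0.01))

# But if the prediction is independent of the actual choice (q1 = q2),
# taking both boxes always wins by exactly $1,000.
print(expected_gains(0.5, 0.5))
```

Whatever common value q1 = q2 = q takes, E(take both) - E(take one) = $1,000, which is the two-boxer's dominance argument; the one-boxer's answer requires q1 > q2, i.e. that your choice is evidence about a box already filled.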
