2015-01-31

Newcomb’s Paradox in the Light of the Superposition Imperative

First formulated by the theoretical physicist William Newcomb in 1960, Newcomb’s paradox is the prime and most acute paradox of decision theory. Ever since its first publication by the philosopher Robert Nozick in 1969 it’s been under constant debate and has sown fruitful division amongst philosophers. More importantly, it still poses serious problems for decision theory. Since it’s still considered unsolved, progress in this part of philosophy has been slow in coming. Let us demonstrate what we believe arises from the ashes in which Newcomb’s paradox has left parts of decision theory.

There are already quite a few formulations of the paradox, but we’d nevertheless like to provide our own formulation. This is because we believe that the situation described in the paradox is neither logically impossible nor physically impossible to create in reality. We formulate the paradox as follows:
You’re participating in a game in which your objective is to maximize your income, i.e. the amount of money which you win. After you’ve been informed of the rules of the game, you’ll enter a closed passageway in which there, at the other end, is a table with two different boxes on it. One of the boxes, let’s call it A, is transparent and contains £1,000. The other box, let’s call it B, is opaque and contains either £0 or £1,000,000. At the table you have the peculiar choice of either taking only the opaque box B or, in fact, both boxes A and B. What’s most interesting, though, is that before you were let into the passageway an entity, let’s call it the Predictor, made a prediction of which choice you’d make. The Predictor has not supernaturally seen into the future; rather its prediction is independent of your choice and therefore fallible. If the entity predicted that you’ll take both boxes, then the opaque box B will contain £0. If, on the other hand, the entity predicted that you’ll only take the opaque box B, then the opaque box B will contain £1,000,000. You also know that this game has been played many thousands of times before and that the entity’s prediction has been correct in 99 percent of the cases. Furthermore you know that the cause of the predictions’ correctness is not that 99 percent of the persons have made the same choice, but rather that the Predictor has had individual knowledge of each person. This entails that the vast majority of the persons who have taken both boxes have left the game with an income of only £1,000, whilst the vast majority of the persons who have only taken the opaque box B have left the game with an income of £1,000,000. Which choice do you make? Are you a one-boxer or a two-boxer?
Amongst philosophers and others interested in Newcomb’s paradox, far from 99 percent seem to favour the same choice. On the contrary, the two choices seem to divide both professional and amateur philosophers roughly in half, which makes the paradox all the more realistic and acute.

There’s a seemingly conclusive argument for taking both boxes: When you make your choice, the contents of the opaque box B are already decided, wherefore you’ll necessarily earn £1,000 more by taking both boxes, regardless of the prediction. If the opaque box B contains £0, you’ll earn £1,000 more, i.e. a total of £1,000, by taking both boxes. If the opaque box B, on the other hand, contains £1,000,000, you’ll likewise earn £1,000 more, i.e. a total of £1,001,000, by taking both boxes. For your choice it therefore doesn’t matter what the opaque box B contains.

If you wish to study the paradox in closer detail, without being influenced by our reasoning and proposed solution below, we’d suggest that you take your time and temporarily stop reading here.

There’s another similar and again seemingly conclusive argument for taking both boxes: If you’d been allowed the help of a Loyal Friend, who, as soon as you entered the passageway, was allowed forward to peek inside the opaque box B and tell you its contents, it wouldn’t make any difference! If the Loyal Friend would yell that the opaque box B is empty, you’d increase your income by £1,000 by taking both boxes. If the Loyal Friend, on the other hand, would yell that the opaque box B is full of money, you’d likewise increase your income by £1,000 by taking both boxes. For your choice it therefore neither matters what the opaque box B contains nor if you’d at all been allowed the help of a Loyal Friend.

Despite these two similar arguments for taking both boxes, it seems that taking only the opaque box B entails a higher income, in the sense that the vast majority of the one-boxers have earned quite a lot more money than the vast majority of the two-boxers. Since the game is all about maximizing your income, Newcomb’s paradox is to us largely a question of how, in the light of these two arguments, the choice of taking only the opaque box B can be motivated. This is to us the choice which we’re intuitively content with. What we’re in search of is a general theory for making decisions which will explain why these two seemingly conclusive arguments not only are incapable of settling the matter, but, in fact, are fundamentally misguided.

Our proposed solution of Newcomb’s paradox is derived from a general imperative of decision making which we call ‘the superposition imperative’ and which reads:
Always act as if everything which you don’t currently observe were in a superposition of all possible physical states, wherein each of these states is assigned a Bayesian probability based on the information which you have.
In quantum physics it’s common to describe a particle as if it’s in a superposition of all possible states until one of these states is actually observed, at which time the superposition of the particle collapses into the observed state. This means that as long as none of the possible states is observed, we act as if the particle were in all possible states at the same time, with an assigned probability for each possible state. To act as if the particle were in all possible states simultaneously is to act as if it’s undecided which of the possible states the particle is in.

Let’s discuss the classical thought experiment Schrödinger’s cat, in which we conduct a physical experiment with a live cat placed inside an opaque chamber together with a lethal poison, which would be triggered by the decay of an atom of a radioactive source, which is also present. We can’t observe the cat whilst it’s inside the chamber. Nor can we, according to quantum physics, predict when the first decay of an atom will actually occur, but we do know the appropriate probability distribution. If we’d apply the superposition imperative to Schrödinger’s cat, we’d act as if the cat were in a superposition of the two states ‘alive’ and ‘dead’. If we’d recently commenced the physical experiment, we’d still have reason to believe that the cat is alive and reason to assign a high probability to the state ‘alive’. The longer we wait, though, the more the Bayesian probability that the cat is alive decreases. If other persons, on the other side of the chamber, were in fact able to observe the cat, it wouldn’t be in a superposition to them, but it would still be to us. If we were asked whether the cat is alive or dead, we’d answer that we don’t know and that the cat is in a superposition of these two states, and we’d provide the Bayesian probabilities of these two states based on our own personal beliefs.
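How this Bayesian probability decreases over time can be made concrete with a small calculation. The following is a minimal Python sketch of our own, under the simplifying assumption that the poison is released by the first decay of a single atom whose half-life we know; the one-hour half-life is purely hypothetical.

```python
def p_alive(t_hours, half_life_hours=1.0):
    """Bayesian probability we assign to the state 'alive' after t hours,
    i.e. the probability that no decay has yet occurred (exponential
    survival law for a single atom; the half-life is hypothetical)."""
    return 0.5 ** (t_hours / half_life_hours)

# The longer we wait, the lower the probability we assign to 'alive'.
for t in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"after {t:.1f} h: P(alive) = {p_alive(t):.3f}")
```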

Returning to Newcomb’s paradox in the light of the superposition imperative, you’ll find that any decision anxiety is over. When you’re in the passageway you simply act as if the content of the opaque box B is in a superposition of the states ‘£0’ and ‘£1,000,000’. The first seemingly conclusive argument, that you’ll necessarily earn £1,000 more by taking both boxes, is based on the correct assumption that the content of the opaque box B is already decided, which it certainly is to the Predictor. With your incomplete information, on the other hand, and utilizing the superposition imperative, you act as if it’s undecided whether the opaque box B contains £0 or £1,000,000. If you choose to take both boxes, instead of only the opaque box B, your choice will immediately come at a price, namely that the Bayesian probability that the opaque box B contains £1,000,000 decreases from approximately 99 percent to approximately 1 percent. Therefore you maximize your income by taking only the opaque box B, which, with a probability of approximately 99 percent, will earn you £1,000,000.

The second seemingly conclusive argument, that if you’d been allowed the help of a Loyal Friend it wouldn’t make any difference, is, interestingly enough, illuminated in a different way by the superposition imperative, and in a beautiful way at that. Whilst roughly half of both professional and amateur philosophers seem to be one-boxers without the help of a Loyal Friend, nobody seems to be a one-boxer with the help of a Loyal Friend! In what way is only taking a box which contains £1,000,000 actually maximizing your income, when you can also take a box containing £1,000? In no way. If you’d been allowed the help of a Loyal Friend, who, as soon as you entered the passageway, was allowed forward to peek inside the opaque box B and tell you its contents, what you’d hear would be an observation increasing your information in such a way that the superposition of the opaque box B would collapse. With the help of a Loyal Friend you therefore take both boxes regardless of what the Loyal Friend has to reveal. Without the help of a Loyal Friend, on the other hand, the superposition of the opaque box B doesn’t collapse until you’ve made your choice, wherefore you only take the opaque box B.

Whilst this solution proposal explains the intuitively decisive difference between the two seemingly conclusive arguments, it also prescribes the choice which we’re intuitively content with.

Below we give a mathematical account of our proposed solution of Newcomb’s paradox.

We denote the entity’s prediction by X and your choice by Y. These are the only two random variables in the formulation of the paradox. We denote an outcome of X by x and an outcome of Y by y. x assumes either the state that you’ll take both boxes A and B, which we denote ab, or the state that you’ll only take the opaque box B, which we denote b. y assumes either the state that you take both boxes A and B, which we denote AB, or the state that you only take the opaque box B, which we denote B. This is expressed in the following way:
x ∈ {ab, b}
and
y ∈ {AB, B}.
The probability that the prediction is that you’ll take both boxes is, for example, expressed like this:
P(x = ab) = P(ab).
The probability that the prediction was that you’ll take both boxes, given that your choice was to take both boxes, is, for example, expressed in the following way:
P(x = ab | y = AB) = P(ab | AB).
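To make the notation concrete, here is a minimal Python sketch of our own (not part of the formulation itself) which encodes the conditional probabilities of the prediction given your choice, taking the approximately 99 percent accuracy stated above as an exact figure.

```python
# Conditional probabilities P(x | y) of the prediction given your choice,
# based on the roughly 99 percent accuracy stated in the formulation.
p_prediction_given_choice = {
    ("ab", "AB"): 0.99,  # P(ab | AB): both boxes predicted, both boxes taken
    ("b",  "AB"): 0.01,  # P(b  | AB): only box B predicted, both boxes taken
    ("ab", "B"):  0.01,  # P(ab | B):  both boxes predicted, only box B taken
    ("b",  "B"):  0.99,  # P(b  | B):  only box B predicted, only box B taken
}

print(p_prediction_given_choice[("ab", "AB")])  # P(ab | AB) = 0.99
```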

The expected income E_AB, in pounds sterling, of taking both boxes A and B amounts to:
E_AB = 1,000×0.99 + 1,001,000×0.01 = 11,000
which is the average income of a two-boxer. The expected income E_B, in pounds sterling, of taking only the opaque box B amounts to:
E_B = 0×0.01 + 1,000,000×0.99 = 990,000
which is the average income of a one-boxer and incidentally 90 times the average income of a two-boxer.
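These two averages are easy to verify. The Python sketch below simply repeats the arithmetic, again taking the approximately 99 percent accuracy as an exact figure.

```python
# Expected incomes in pounds sterling, with the prediction accuracy
# taken to be exactly 0.99.
p_correct = 0.99

e_two_boxes = 1_000 * p_correct + 1_001_000 * (1 - p_correct)  # E_AB
e_one_box = 0 * (1 - p_correct) + 1_000_000 * p_correct        # E_B

print(e_two_boxes)              # approximately 11,000
print(e_one_box)                # approximately 990,000
print(e_one_box / e_two_boxes)  # approximately 90
```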

To maximize your income is to maximize your expected income E in accordance with the von Neumann–Morgenstern expected utility theory. E, in pounds sterling, is calculated by the following formula:
E = 1,000P(ab, AB) + 1,001,000P(b, AB)
+ 0P(ab, B) + 1,000,000P(b, B)
in which the joint probability distribution of the random variables X and Y is used. The joint probability P(ab, AB) is, for example, the probability that the entity predicts that you’ll take both boxes and that you actually take both boxes.

The definition of conditional probability implies the following equation:
P(X, Y) = P(X | Y)P(Y)
which, when substituted into the first formula for E, yields the following second formula for E:
E = 1,000P(ab | AB)P(AB) + 1,001,000P(b | AB)P(AB)
+ 0P(ab | B)P(B) + 1,000,000P(b | B)P(B).
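The second formula translates directly into a small function. The Python sketch below is only our own illustration of it, with P(B) = 1 − P(AB) and the complementary conditional probabilities filled in; the 0.99 and 0.01 in the example calls are the Bayesian estimates discussed further down.

```python
def expected_income(p_AB, p_ab_given_AB, p_ab_given_B):
    """Second formula for E, in pounds sterling.

    p_AB          -- P(AB), the probability that you take both boxes
    p_ab_given_AB -- P(ab | AB)
    p_ab_given_B  -- P(ab | B)
    """
    p_B = 1 - p_AB                    # P(B)
    p_b_given_AB = 1 - p_ab_given_AB  # P(b | AB)
    p_b_given_B = 1 - p_ab_given_B    # P(b | B)
    return (1_000 * p_ab_given_AB * p_AB
            + 1_001_000 * p_b_given_AB * p_AB
            + 0 * p_ab_given_B * p_B
            + 1_000_000 * p_b_given_B * p_B)

# With the Bayesian conditional probabilities discussed below:
print(expected_income(1.0, 0.99, 0.01))  # approximately 11,000 (both boxes)
print(expected_income(0.0, 0.99, 0.01))  # approximately 990,000 (only box B)
```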

The first seemingly conclusive argument, that you’ll necessarily earn £1,000 more by taking both boxes, presupposes that your choice doesn’t affect the entity’s prediction, since it’s already been made. This would imply the relationship:
P(X | Y) = P(X)
which, when substituted into the second formula for E, yields the following third formula for E:
E = 1,000P(ab)P(AB) + 1,001,000P(b)P(AB)
+ 0P(ab)P(B) + 1,000,000P(b)P(B).
As can be seen, you’d maximize E by maximizing P(AB) or, in other words, by maximizing the probability that you take both boxes. (To realize this, note the coefficients; for taking both boxes A and B the coefficient is £1,000 more for each of the two possible outcomes of the prediction.) To maximize E would therefore be to take both boxes, but according to the superposition imperative you should act as if the content of the box were in a superposition. This means that the relationship:
P(X | Y) = P(X)
is false and that your choice affects the probability distribution of the prediction. In the second formula for E:
E = 1,000P(ab | AB)P(AB) + 1,001,000P(b | AB)P(AB)
+ 0P(ab | B)P(B) + 1,000,000P(b | B)P(B)
you should instead assign Bayesian values to these unknown conditional probabilities, based on the information available to you. If your choice is to take both boxes A and B, i.e. if P(AB) = 1, then you’ve a strong reason to believe that the entity correctly predicted this choice. Your probability estimate of the entity’s prediction is therefore that the entity most likely did predict your choice to take both boxes A and B. In this case P(ab | AB) is close to one (1), whilst P(b | AB) is close to nought (0) and E is approximately £11,000. If your choice, on the other hand, is to only take the opaque box B, i.e. if P(B) = 1, then your probability estimate is that the Predictor most likely predicted this choice. In that case P(b | B) is close to one (1), whilst P(ab | B) is close to nought (0) and your expected income is approximately £990,000, which is considerably more than approximately £11,000. Our conclusion is therefore that you should only take the opaque box B.
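To round off the mathematical account, here is one last Python sketch of our own, illustrating what the first argument relies on: under the assumed independence P(X | Y) = P(X) of the third formula, the advantage of taking both boxes is exactly £1,000 whatever value P(ab) takes, an advantage which evaporates once the conditional probabilities are treated as above.

```python
# Third formula (independence assumed): the advantage of taking both boxes
# over taking only box B is £1,000 for every value of P(ab).
for p_ab in (0.01, 0.5, 0.99):  # hypothetical values of P(ab)
    e_both_boxes = 1_000 * p_ab + 1_001_000 * (1 - p_ab)
    e_box_b_only = 0 * p_ab + 1_000_000 * (1 - p_ab)
    print(p_ab, e_both_boxes - e_box_b_only)  # difference is 1,000 (up to rounding)
```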


Överlevnadsmaskinen and Filosofimaskinen




Note: We’ve reworked the mathematical account in this blog post thanks to the cleverness of, and communication from, one of our readers.