Newcomb’s Paradox
In this essay, I will give my version of Newcomb’s paradox, and then explain it.
Suppose that there is a brilliant but eccentric scientist. We’ll just call him “the professor”. The professor has invented a device that scans brain waves and generates a model of the way the brain makes decisions. The models generated by this device are highly accurate. The professor invites you to take part in an experiment for a payment of $100. Let’s assume that you fully trust the professor. His reputation is impeccable. You agree to the experiment.
It goes like this. You sit at a table in his laboratory, with a computer screen in front of you. The professor puts the brain wave scanner on your head. He assures you that it has no negative effects. It doesn’t use radiation or anything harmful. It just picks up brain waves. He tells you that the computer will present you with multiple choice problems. You are supposed to solve each problem to the best of your ability. While you are solving each problem, the computer will be reading your brain waves, and it will correlate them with the answer you choose. The whole process will take about an hour. After that time, the computer will have constructed a model of how your brain makes decisions.
The professor leaves the room, and the experiment begins. The computer presents you with a series of problems of various kinds: math problems, simple choices (vanilla or chocolate?), ethical dilemmas (trolley problems), and so on. For each one, you make a decision, click on the box on the screen, and then the computer moves on to the next problem. As the professor predicted, it takes roughly an hour.
When you are finished, the professor returns, sipping a cup of tea. He takes off the scanning device and says that he has one more problem for you to solve, to test the accuracy of the model. He puts two envelopes on the table in front of you. One is labeled “A”. The other is labeled “B”. The professor tells you that envelope B contains the $100 he promised as payment for the experiment. Envelope A contains either $1000 or nothing.
- A: $1000 or $0
- B: $100
You can take both envelopes if you want, or you can take either one by itself, or you can leave both and go home empty-handed. You are free to make any of those choices. But there is a wrinkle, he tells you.
A few minutes ago, he gave this problem to the computer model of your brain, and he decided whether to put $1000 in envelope A based on the model’s choice. If the model of your brain chose both A and B (if it was “greedy”), then the professor put money only in B. On the other hand, if the model chose only one envelope (A or B), then he put $1000 in A.
So, whether envelope A contains $1000 or $0 depends on what the model of your brain chose when presented with the same options you have now. The professor also tells you that the model has always predicted the person’s choice correctly, on thousands of past trials.
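To keep the setup straight, here is a small sketch of the professor’s rule and the payoffs it produces. The function names are mine, and I’ve left out the “leave both envelopes” option, since the professor’s rule doesn’t say anything about a model that chooses nothing.

```python
# A minimal sketch of the professor's rule. Function names are my own labels.

def contents_of_A(model_choice):
    """The professor fills envelope A based on what the model chose."""
    return 0 if model_choice == "both" else 1000  # a "greedy" model leaves A empty

def payout(your_choice, model_choice):
    """What you walk away with, given your choice and the model's choice."""
    a = contents_of_A(model_choice)
    b = 100  # envelope B always holds the promised $100
    return {"A only": a, "B only": b, "both": a + b}[your_choice]

# When the model predicts you correctly (your choice equals its choice),
# as it has on every past trial:
for choice in ("A only", "B only", "both"):
    print(choice, "->", payout(choice, choice))
# A only -> 1000
# B only -> 100
# both -> 100
```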
What should you do?
On the one hand, it seems obvious that you should take both envelopes. After all, the money is either already in A or it isn’t. Your choice now has no effect on whether A contains money. Why pass up $100? Besides, there is a chance that envelope A is empty. If you take only A, you might walk away with nothing at all.
On the other hand, you know that, in the past, everyone who chose only A received $1000, and everyone who chose both A and B received only $100. The model is very accurate. Thus, you can be reasonably certain that you will receive $1000 if you choose only A, and you will receive $100 if you choose both A and B.
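To put rough numbers on that second line of reasoning, suppose the model predicts your choice with probability p. The sketch below is my own illustration, comparing the two options the argument is about:

```python
# Expected payoffs as a function of the model's accuracy p (illustration only).

def expected_value(choice, p):
    if choice == "A only":
        # Right prediction: A holds $1000. Wrong prediction (model said "both"): A is empty.
        return p * 1000 + (1 - p) * 0
    if choice == "both":
        # Right prediction: A is empty, you keep only B's $100.
        # Wrong prediction (model said one envelope): A holds $1000, so $1100 total.
        return p * 100 + (1 - p) * 1100

for p in (0.5, 0.9, 0.99):
    print(p, expected_value("A only", p), expected_value("both", p))
# p=0.5:  A only = 500.0,  both = 600.0
# p=0.9:  A only = 900.0,  both = 200.0
# p=0.99: A only = 990.0,  both = 110.0
```

On this way of counting, taking only A pulls ahead as soon as the model is right more than 55% of the time. That is the force of the second argument.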
Think about your choice.
Newcomb’s paradox involves self-reference and undecidability. Like all self-referential paradoxes, it has a referential loop and a negation. The standard format of a self-referential paradox is something like “This sentence is false”. If it is true, then it is false, but if it is false, then it is true. The sentence defines an infinite cycle of oscillating truth values. We cannot say that it is true or false. It is undecidable.
In Newcomb’s paradox, something similar is going on, but with choices instead of truth values: you should choose both envelopes if you will not choose both envelopes. The undecidable element here is a choice rather than a truth value, but it has the same structure: a self-referential loop and a negation.
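The oscillation can be made concrete with a toy loop. This is only an illustration of the structure, not of any real decision procedure: the rule “take both exactly when you will not take both” has no stable answer, so repeatedly revising the answer just flips it forever.

```python
# Toy illustration: searching for a stable answer to "take both envelopes?"
# under the self-negating rule described above. There is no fixed point.

def revise(take_both):
    return not take_both  # the negation in the loop: do the opposite of the current answer

answer, history = True, []
for _ in range(6):
    history.append(answer)
    answer = revise(answer)

print(history)  # [True, False, True, False, True, False] -- it never settles
```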
Newcomb’s paradox is more than just a self-referential paradox. It also involves a conflict between free will and determinism. From your subjective perspective, you are free to choose one or both envelopes. From the perspective of the professor and the model, however, your choice is already determined, because it is predictable with a very high level of accuracy. In my version of Newcomb’s paradox, not only is your choice predetermined, it is sitting on the table in front of you.
Although Newcomb’s paradox depends on the rather implausible premise of a computer model that can accurately predict your choices, it isn’t entirely divorced from reality. Predicting other people’s choices is something we often do. It is part of many games, such as chess. It is an important part of war. It occurs in social interactions such as asking your boss for a raise or asking a girl on a date (you should only ask if you expect the answer to be “yes”). We all have models in our heads that predict how other people make choices.
When my youngest daughter was 7 years old, she became obsessed with the game “rock, paper, scissors”. We played it together a lot. After a while, I got quite good at guessing her moves. I was never perfect at it, but I could predict her moves with something like 60% accuracy, so I would win about 60% of the time. That’s much better than luck: if both of us had chosen randomly, I would have won a third of the games, lost a third, and tied a third.
My daughter was both annoyed and fascinated by my skill at this game. She wanted to learn my “trick”, which I claimed was just magic. It puzzled me too. I don’t know how I was predicting her moves, but I could do it. Somehow, my brain had developed a pretty accurate model of her choices. I can’t do it any more, maybe because she uses a different method to select her moves, or maybe because she is better at concealing whatever cues I was using to make my guesses.
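For what it’s worth, the arithmetic behind that 60% figure checks out in a quick simulation. The mechanics below are my own guess at how such a predictor would work: it guesses the opponent’s move correctly 60% of the time, guesses one of the other two moves otherwise, and always plays the counter to its guess.

```python
# Rock-paper-scissors against a 60%-accurate predictor (assumptions mine).
import random

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # winner -> loser
COUNTER = {loser: winner for winner, loser in BEATS.items()}        # what beats each move

def play_round(accuracy=0.6):
    opponent = random.choice(MOVES)
    if random.random() < accuracy:
        guess = opponent
    else:
        guess = random.choice([m for m in MOVES if m != opponent])
    me = COUNTER[guess]  # play whatever beats the guessed move
    if me == opponent:
        return "tie"
    return "win" if BEATS[me] == opponent else "loss"

results = [play_round() for _ in range(100_000)]
for outcome in ("win", "loss", "tie"):
    print(outcome, round(results.count(outcome) / len(results), 3))
# Roughly: win 0.6, loss 0.2, tie 0.2 -- versus one third each under random play.
```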
Back to the thought experiment. What is the solution to Newcomb’s paradox? What should you do?
The rational choice is to take both envelopes. Your choice can’t retroactively change what is in the envelopes.
However, I would make the choice in a different way. I would smile at the professor, take a coin out of my pocket, and say:
“I’ll let fate decide. I’m going to flip this coin. If it lands heads, I will take both A and B. If it lands tails, I will only take A.”