The Yankees Decision Problem
This scenario is supposed to counter the Why ain’cha rich? argument for EDT by showing that causalists can predictably out-earn evidentialists when faced with the right sort of game.
In Arntzenius’s original statement, the Red Sox and the Yankees are going to play a long series of games against each other, and it’s known that the Yankees have a 90% chance of winning each game. The (apparently ill-informed) oddsmakers are offering two bets on each game: a bet that pays out $1 if the Yankees win and loses $2 if the Red Sox win, and a bet that pays out $2 if the Red Sox win and loses $1 if the Yankees win. You must choose one of these bets. The catch is that before you place each bet, the Sibyl (a perfectly reliable predictor) will tell you whether or not you will win money on it.
CDT bets on the Yankees every game because the Sibyl’s prediction has no causal bearing on the outcome of the game, and the Yankees bet is clearly underpriced. EDT bets on the Red Sox instead, no matter what the Sibyl says. This is because if you’re guaranteed to win, you’ll win twice as much by betting on the Sox, and if you’re guaranteed to lose, you’ll lose only half as much by betting on the Sox. One hundred games later, CDT is sitting on a $70 expected gain while EDT is stinging from a $70 expected loss. Then the causalist gets to throw the evidentialist’s old taunt back at them: If you’re so smart, why ain’cha rich?
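For the record, here’s the arithmetic behind those numbers as a quick Python sketch (the payoffs and the 90% figure are just the ones from the setup above; nothing else is assumed):

```python
# Expected value per game of each bet, given P(Yankees win) = 0.9.
p_yankees = 0.9

ev_bet_yankees = p_yankees * 1 + (1 - p_yankees) * (-2)  # win $1 or lose $2
ev_bet_red_sox = p_yankees * (-1) + (1 - p_yankees) * 2  # lose $1 or win $2

print(ev_bet_yankees * 100)  # ≈ +70 over a hundred games
print(ev_bet_red_sox * 100)  # ≈ -70 over a hundred games
```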
I’m uncertain whether this is a valid decision problem. Ahmed and Price object that the bettor in the Yankees scenario doesn’t face a free choice under the Deliberation Crowds out Prediction (DCOP) thesis:
The authority that an agent takes herself to have—qua agent—over her own future actions, seems inevitably to ‘trump’ whatever considerations might otherwise have formed the basis for a justified prediction (probabilistic or otherwise) about what she will choose to do.
In other words, if you’re actually making a free choice, you can’t have reliable evidence about how you will choose, but the gambler in Yankees does have such evidence. If he knows that he will win his bet and he knows that the Yankees have a 90% chance of winning, he must know something about his own probability of betting on the Yankees.
As a reality check, we should ask why the DCOP thesis doesn’t also render Newcomb’s problem illegitimate. The answer is that even though Newcomb’s Demon has information about what the chooser will do, that information is totally hidden from them (inside an opaque box) until after they’ve made their free choice. This defense doesn’t seem to apply to the clear-box Newcomb problem, though. If I know that the predictor is 99% accurate, and I walk into the room and see $1M in the variable box, I now have very strong evidence that I will one-box. Ahmed and Price say in a footnote (p. 22) that they therefore consider the clear-box Newcomb problem to be just as invalid as Yankees.
I agree with Ahmed and Price that there’s something fishy about the way the Sibyl is supposed to work in Yankees, but I don’t think the problem is that her knowledge somehow “crowds out” your free choice. Rather, the problem is that the Sibyl can’t give non-contradictory predictions to all agents. Suppose, for example, that you follow the policy of betting on the Red Sox if you are predicted to win and betting on the Yankees if you are predicted to lose. Meacham points out that there’s no way for the Sibyl to make her predictions consistently: whichever prediction she gives you, it comes true only if the Red Sox win that game, so the Red Sox would have to win every game of an indefinitely long series to make her predictions come true. But this violates the assumption that the Red Sox have only a 10% chance of winning each game.
Now, it turns out that causalists and evidentialists alike follow decision rules that admit of non-contradictory predictions, because their betting behavior doesn’t change in response to the content of the Sibyl’s predictions. Still, it seems like a serious defect of Yankees that the problem simply breaks when you pose it to some agents. Newcomb’s problem does not suffer from any analogous defect: you can follow any decision procedure you like, no matter how bizarre or trollish, and the problem will still be logically consistent. Compare this to the situation you face in Yankees: either you’re the sort of agent who gambles the same way no matter what the Sibyl tells you, or else you’ve been misled about the basic setup of the game you’re playing. Seems highly suspicious. (Thank you to Alex Kastner for talking through this with me.)
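To make this concrete, here’s a small Python sketch (the policy labels are mine) that enumerates the four deterministic betting policies, i.e., the four ways of mapping the Sibyl’s prediction onto a bet, and checks which predictions can come true under each policy for each possible game outcome:

```python
def wins_bet(bet, winner):
    """You win your bet iff you bet on the team that wins the game."""
    return bet == winner

# A policy maps the Sibyl's prediction ("win" or "lose") to a bet.
policies = {
    "always Yankees": {"win": "Yankees", "lose": "Yankees"},
    "always Red Sox": {"win": "Red Sox", "lose": "Red Sox"},
    "follow Sibyl":   {"win": "Yankees", "lose": "Red Sox"},
    "defy Sibyl":     {"win": "Red Sox", "lose": "Yankees"},
}

for name, policy in policies.items():
    for winner in ("Yankees", "Red Sox"):
        # A prediction is consistent if, given the bet it induces,
        # it turns out to be true.
        consistent = [
            pred for pred in ("win", "lose")
            if (pred == "win") == wins_bet(policy[pred], winner)
        ]
        print(f"{name:<15} | {winner:<8} win | consistent predictions: {consistent}")
```

The two constant policies always leave the Sibyl at least one consistent prediction, which is why pure causalists and evidentialists can play the game at all. The prediction-sensitive policies leave her no consistent prediction whenever the “wrong” team wins, so posing the problem to such an agent forces an outcome for every game of the series.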
And even supposing that Yankees is a coherent problem, I still don’t think it’s an adequate rebuttal to Why ain’cha rich? We care about Newcomb’s problem because it’s analogous to decision scenarios that arise in real life non-trivially often: we often have to decide whether to cooperate under (weak) decision-entanglement, whether to have integrity, and so on. The Why ain’cha rich? argument has bite because causalists actually will end up slightly poorer than evidentialists, even when they aren’t playing contrived games against superhuman opponents. Do we face decision problems analogous to Yankees often enough to cancel this effect out? I rather doubt that we do.
What does CDT actually recommend? #
Arntzenius, Ahmed, Price, and Meacham all agree that causalists bet on the Yankees for the reason I gave above: the Sibyl’s prediction is causally irrelevant to the outcome of the game, so CDT should ignore it. Caspar Oesterheld pointed out to me that there’s something a bit funny about this argument. Suppose Mary the causal decision theorist reads Arntzenius’s article and resolves to bet on the Yankees. Then the Sibyl tells her that she’s going to lose her bet. If Mary has high confidence that she’s going to bet on the Yankees and high confidence in the Sibyl’s accuracy, she must have high confidence that the Red Sox are going to win. But then the action that will cause the best outcome is surely to bet on the Red Sox. And symmetrically, if Mary has high confidence that she’s going to bet on the Red Sox, and the Sibyl predicts that she’ll lose her bet, CDT recommends that she bet on the Yankees instead.
The upshot is that when the bettor is predicted to lose, Yankees becomes a version of Death in Damascus where Death’s victim has some small preference for one city over the other whether he lives or dies (see box 9.1 in Peterson’s Intro to DT). Evidentialists accept that they’re equally likely to die no matter where they go and stay in Damascus to avoid the minor discomfort of journeying to Aleppo. Causalists succumb to decision instability and can’t settle on any act unless randomization is available.
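Here’s a minimal sketch of that instability, on two assumptions that go beyond anything in the papers: the Sibyl is treated as perfectly accurate, and Mary’s credence that she will end up betting on the Yankees is a free parameter p. Conditional on a correct “you will lose” prediction, the Red Sox win exactly when Mary bets on the Yankees, so her credence that the Red Sox win is also p; and since the bet has no causal influence on the game, CDT scores both acts against that fixed credence.

```python
def cdt_expected_utilities(p_bet_yankees):
    """Causal expected utilities after a 'you will lose' prediction,
    assuming (for this sketch) a perfectly accurate Sibyl. Given a
    correct prediction, the Red Sox win iff Mary bets on the Yankees,
    so her credence that the Red Sox win equals her credence p that
    she will bet on the Yankees."""
    p_red_sox = p_bet_yankees
    eu_yankees = 1 * (1 - p_red_sox) - 2 * p_red_sox  # +$1 or -$2
    eu_red_sox = 2 * p_red_sox - 1 * (1 - p_red_sox)  # +$2 or -$1
    return eu_yankees, eu_red_sox

for p in (0.05, 1 / 3, 0.95):
    eu_y, eu_r = cdt_expected_utilities(p)
    best = "Yankees" if eu_y > eu_r else "Red Sox" if eu_r > eu_y else "either"
    print(f"credence in betting Yankees = {p:.2f} -> "
          f"EU(Yankees) = {eu_y:+.2f}, EU(Red Sox) = {eu_r:+.2f}, CDT says bet: {best}")
```

Whichever bet Mary is confident she’ll make, CDT tells her to make the other one; on these assumptions the only stable credence is p = 1/3, which is exactly the kind of randomization mentioned above.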
Questions #
- Are there any common real life scenarios with the same structure as Yankees?
- If an evidentialist is given the chance to self-modify before facing Yankees, should they bind themselves to bet on the Yankees just as causalists should bind themselves to one-box?
- Can Yankees be salvaged if we declare by fiat that only pure EDT and CDT agents are allowed to play? Analogy: Newcomb’s problem breaks if the agent’s decision rule outputs an illegal move (eg, take only the clear box), but this isn’t a problem because such agents are banned from playing.
Reading list #
- Arntzenius, “No Regrets” ✔
- Ahmed and Price criticizing Arntzenius ✔
- Wells, “Equal Opportunity and Newcomb’s Problem”, presenting another decision problem where causalists supposedly end up richer than evidentialists ✔
- Ahmed, “Equal Opportunities in Newcomb’s Problem and Elsewhere”, responding to Wells
Last updated 29 August 2024