Summary: A Paradox for Tiny Probabilities and Enormous Values
This is a summary of the GPI Working Paper "A Paradox for Tiny Probabilities and Enormous Values" by Nick Beckstead and Teruji Thomas. The summary was written by Tomi Francis.
Many decisions in life involve balancing risks with their potential payoffs. Sometimes, the risks are small: you might be killed by a car while walking to the shops, but it would be unreasonably timid to sit at home and run out of toilet paper in order to avoid this risk. Other times, the risks are overwhelmingly large: your lottery ticket might win tomorrow, but it would be reckless to borrow £20,000 from a loan shark named “Killer Clive” in order to pay the 50% upfront cost of tomorrow’s massive celebratory party (even if you really, really, really enjoy massive parties). The correct decision theory should tell us to be neither too timid nor too reckless, right? Wrong.
In “A Paradox for Tiny Probabilities and Enormous Values”, Nick Beckstead and Teruji Thomas argue that every decision theory is either “timid”, in the sense that it sometimes tells us to avoid small-probability risks no matter the size of the potential payoff, or “reckless”, in the sense that it sometimes tells us to accept almost-certain disaster in return for a small probability of getting a large enough payoff.
Beckstead and Thomas’s central case goes like this. Imagine that you have one year left to live, but you can swap your year of life for a ticket that will, with probability 0.999, give you ten years of life, but which will otherwise kill you immediately with probability 0.001. You can also take multiple tickets: two tickets will give a probability 0.999^2 of getting 100 years of life, otherwise death; three tickets will give a probability 0.999^3 of getting 1000 years of life, otherwise death; and so on. It seems objectionably timid to say that taking n+1 tickets is not better than taking n tickets. Given that we don’t want to be timid, we should say that taking one ticket is better than taking none; two is better than one; three is better than two; and so on.
We now need to introduce a crucial assumption. According to transitivity, if X is better than Y and Y is better than Z, X must be better than Z. So, if one ticket is better than none, and two tickets is better than one, then two tickets is better than none. Since three tickets is better than two, and two tickets is better than none, three tickets is also better than none. Carrying on in this way, it can be shown that for any n, taking n tickets is better than taking none. But that seems objectionably reckless: taking, say, 50,000 tickets will result in almost certain death.* We thus have a paradox: it seems unreasonable to be timid and it also seems unreasonable to be reckless, but given transitivity, if we’re not timid, we have to be reckless.
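To get a feel for the numbers, here is a minimal sketch in Python of the arithmetic behind the ticket example. It uses only the probabilities and payoffs given above; nothing in it comes from the paper beyond that setup.

```python
import math

# A minimal sketch of the ticket arithmetic: with n tickets you have a
# 0.999^n chance of getting 10^n years of life, otherwise immediate death.

def survival_probability(tickets: int) -> float:
    """Chance of not being killed outright if this many tickets are taken."""
    return 0.999 ** tickets

def log10_expected_life_years(tickets: int) -> float:
    """Log10 of the expected payoff: a 0.999^n chance of 10^n years,
    i.e. 9.99^n expected years (logged to avoid overflow for large n)."""
    return tickets * math.log10(9.99)

for n in (0, 1, 2, 3, 10, 50_000):
    print(f"{n:>6} tickets: "
          f"P(survive) = {survival_probability(n):.3g}, "
          f"expected years = 10^{log10_expected_life_years(n):.1f}")
```

The expected payoff grows by a factor of 9.99 with every ticket, yet at 50,000 tickets the chance of surviving at all has collapsed to less than one in 10^21.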
It’s worth flagging that there are some philosophers who think we should reject transitivity in light of puzzles like these. But that’s a pretty unattractive option, at least on the face of it: if the deals keep getting better, how could the last one fail to be better than the first one? Beckstead and Thomas don’t much discuss the rejection of transitivity, and I shall follow them in setting it aside. Having done so, we have to accept either timidity or recklessness: Beckstead and Thomas’s argument leaves us no choice.
How bad would it be to be timid? You might think that it’s not so obvious that it’s always better to live ten times as long, with a very slightly smaller probability, rather than to live a tenth as long with a slightly larger probability. If we only had to accept something like this in order to avoid recklessness, things wouldn’t be too bad. But unfortunately, Beckstead and Thomas’s paradox doesn’t just apply to cases where we’re deciding what’s best for ourselves. It also applies to cases where we’re choosing on behalf of others: that is, it applies to moral decision-making. And timidity for moral decision-making is really hard to accept.
To see why, let’s modify the central example a little. Imagine that each successive ticket, rather than allowing us to live for ten times as long, allows us to save ten times as many lives. Taking an additional ticket then results in a very slightly smaller probability of saving many more lives. Indeed, each time you take another ticket, the expected number of lives saved becomes almost ten times greater. So it seems pretty clearly morally better to take another ticket, no matter how many tickets you’ve taken already.
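To make the arithmetic explicit (assuming, as a natural reading of the example, that n tickets give a 0.999^n chance of saving 10^n lives), each additional ticket multiplies the expected number of lives saved by just under ten:

$$
\frac{\mathbb{E}[\text{lives saved} \mid n+1 \text{ tickets}]}{\mathbb{E}[\text{lives saved} \mid n \text{ tickets}]}
= \frac{0.999^{n+1}\cdot 10^{n+1}}{0.999^{n}\cdot 10^{n}}
= 0.999 \times 10 = 9.99.
$$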
Brute intuitions like these aside, Beckstead and Thomas show that timid theories face a number of other objections. For example, if you’re “timid” about saving lives, then your moral decision-making is going to depend in strange ways on events in distant regions of the universe, over which you have no control.
Let’s see how that works. Suppose that you have two choices, A and B. A gives a slightly higher probability of saving n lives, while B gives a slightly lower probability of saving N + n lives, where N is much larger than n. These two choices are summarised in the table below:
|   | Probability p     | Probability q | Probability 1 - p - q |
|---|-------------------|---------------|-----------------------|
| A | n lives saved     | n lives saved | 0 lives saved         |
| B | N + n lives saved | 0 lives saved | 0 lives saved         |
If you’re timid about saving lives, you think that it’s better to save some large number of lives n with some probability p + q, rather than saving any larger number of lives N + n with a slightly smaller probability p. So you’re going to have to say that A is better than B in some version of this case.
Now, here’s an interesting fact about this case. No matter what you do, at least n lives will be saved with probability p. If we imagine that these n lives correspond to the same people either way, then whether or not these people are saved has nothing to do with your choice between A and B. It could be that these people are located in some distant galaxy, and that the reason there is a probability p that they will be saved is that somebody else in that distant galaxy chose to attempt to save them. Now consider the following case, which is just like the preceding one, but where the distant agent chose not to attempt to save these lives:
|    | Probability p | Probability q | Probability 1 - p - q |
|----|---------------|---------------|-----------------------|
| A’ | 0 lives saved | n lives saved | 0 lives saved         |
| B’ | N lives saved | 0 lives saved | 0 lives saved         |
If N is greater than n and p is greater than q, B’ is obviously better than A’ from a moral perspective: it gives a greater chance of saving a greater number of lives. But hold on: timidity told us that A is better than B. So we are now saying both that A is better than B and that B’ is better than A’, even though the two pairs of options differ only in whether some other agent in a distant galaxy decided to attempt to save n lives. That’s very strange, and very hard to believe!
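Here is a minimal numerical sketch of how this can happen. It uses a bounded utility function for lives saved (one standard way of being “timid”), together with illustrative probabilities and payoffs of my own choosing; neither the particular utility function nor the specific numbers come from the paper.

```python
# A minimal sketch (not from the paper): a bounded utility function over
# lives saved is one standard way of being "timid". The function and the
# numbers below are illustrative assumptions.

def bounded_utility(lives_saved, scale=100):
    """Bounded utility: approaches 1 as lives_saved grows, so enormous
    payoffs are heavily discounted relative to their size."""
    return lives_saved / (lives_saved + scale)

def expected_utility(prospect):
    """prospect: list of (probability, lives_saved) pairs."""
    return sum(prob * bounded_utility(lives) for prob, lives in prospect)

p, q = 0.10, 0.02                # probabilities of the first two columns
n, N = 1_000, 1_000_000_000      # "some" vs. "vastly more" lives

# The distant agent attempts a rescue that succeeds with probability p,
# saving the same n people in that branch no matter what you choose.
A = [(p, n), (q, n), (1 - p - q, 0)]
B = [(p, N + n), (q, 0), (1 - p - q, 0)]

# The same choice when the distant agent does NOT attempt the rescue.
A_prime = [(p, 0), (q, n), (1 - p - q, 0)]
B_prime = [(p, N), (q, 0), (1 - p - q, 0)]

print(expected_utility(A) > expected_utility(B))              # True: A ranked above B
print(expected_utility(B_prime) > expected_utility(A_prime))  # True: B' ranked above A'
```

With these numbers the timid agent ranks A above B but B’ above A’, even though A and B are just A’ and B’ with the distant agent’s independent rescue attempt added on.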
Is accepting recklessness more palatable than accepting timidity? It might seem so at first. While recklessness is somewhat counter-intuitive, at least there seems to be a consistent way of being reckless: just maximise the expectation of whatever you think is good, such as the number of lives saved or the number of years of life you’re going to enjoy. But even if we can swallow almost-certain death, Beckstead and Thomas point out that reckless decision theories face further challenges in infinite cases: they obsess over infinities in a troubling way, and they have trouble dealing with certain infinite gambles. To see how the two challenges arise, we’ll consider a reckless agent who, in finite cases, wants to maximise the expected number of years of good life they will have. (This makes the problems easier to see, but they generalise to all reckless decision-makers.)
First, infinity obsession. For any small probability p, a reckless agent will think that rather than having n years of good life for certain, it would be better to get probability p of N years of good life, otherwise certain death, provided N is sufficiently large. Infinitely many years of good life is presumably better than any finite number of years, and so you should likewise prefer any probability, no matter how small, of getting infinitely many years of good life, to the certainty of any finite number of years of good life. In other words, it seems that reckless decision makers will obsessively pursue infinite amounts of value, no matter how unlikely it is that their pursuit will yield anything at all.
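To spell out the arithmetic for the expected-years maximiser just described: a probability-p shot at N years beats n years for certain whenever pN > n, that is, whenever N > n/p; and no finite guarantee can beat a positive-probability shot at an infinite payoff:

$$
p \cdot N > n \;\text{ whenever }\; N > \tfrac{n}{p},
\qquad\text{and}\qquad
p \cdot \infty = \infty > n \;\text{ for any } p > 0.
$$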
Reckless decision-makers face another kind of problem when it comes to the “St. Petersburg” gamble. In this gamble, a fair coin is flipped until it lands tails, no matter how long it takes. The player then gets a payoff of 2^n life years, where n is the number of times the coin landed heads. The payoffs are illustrated by the table below.
| Number of heads     | 0 | 1 | 2 | 3 | 4  | … | n   | … |
|---------------------|---|---|---|---|----|---|-----|---|
| Life years received | 1 | 2 | 4 | 8 | 16 | … | 2^n | … |
Compared to getting n years of good life for certain, it would be better to instead get a truncated version of the St. Petersburg gamble which ends prematurely if the first 2n flips all land heads (paying 2^(2n) years in that case), since this gamble yields an expected n + 1 years of good life. Clearly, any truncated St. Petersburg gamble is worse than the un-truncated gamble: it’s better to have the small chance of continuing after 2n flips than it is to not have this chance. So, for reckless decision-makers, it must be better to get the St. Petersburg gamble than to get any finite number of years of good life for certain. This is especially odd in that it means that it must be better to get the St. Petersburg gamble than to get any of its possible outcomes. That’s another kind of paradox.
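Here is a small sketch to check that arithmetic, on the assumptions stated above: the truncated game stops after a fixed number of flips, and the all-heads outcome after m flips pays 2^m years.

```python
from fractions import Fraction

# A minimal sketch of the truncated St. Petersburg gamble: the game stops
# after at most `flips` coin flips; k heads followed by a tail pays 2**k
# years, and if every flip lands heads the player gets 2**flips years.

def truncated_st_petersburg_ev(flips: int) -> Fraction:
    """Exact expected life-years of the truncated gamble."""
    ev = sum(Fraction(1, 2 ** (k + 1)) * 2 ** k for k in range(flips))
    ev += Fraction(1, 2 ** flips) * 2 ** flips   # the all-heads outcome
    return ev

for n in (1, 5, 10, 100):
    # Truncating after 2n flips gives an expected n + 1 years, which already
    # beats getting n years for certain.
    print(n, truncated_st_petersburg_ev(2 * n))   # prints n + 1
```

Since some truncation beats any guaranteed n years, and the full gamble beats every truncation, the reckless agent must rank the full St. Petersburg gamble above every certain payoff.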
To summarise, Beckstead and Thomas show that we’ve got some hard choices to make when it comes to decision-making under risk. Perhaps we’ve sometimes got to be very timid, in which case we also need to think that what we should do sometimes depends in strange ways on parts of the world we can’t affect at all. Or, we’ve sometimes got to be very reckless, and obsess over infinities. Or we’ve got to deny transitivity: we’ve got to believe that A can be better than B, and B can be better than C, without A being better than C.
None of these options look good. But then nobody said decision theory was going to be easy.
References
Beckstead, N. and Thomas, T. (2021). A Paradox for Tiny Probabilities and Enormous Values. GPI Working Paper No. 7–2021.
* The probability of survival if 50,000 tickets are taken is 0.999^50,000, which is less than 10^-21.