Summary: Doomsday rings twice

This is a summary of the GPI Working Paper “Doomsday rings twice” by Andreas Mogensen. The summary was written by Riley Harris.

We live at a time of rapid technological development, and how we handle the most powerful emerging technologies could determine the fate of humanity. Indeed, our ability to prevent such technologies from causing extinction could put us amongst the most important people in history. In “Doomsday rings twice”, Andreas Mogensen illustrates how our potential importance could be evidence for humanity’s near-term extinction. This evidence suggests that either near-term extinction is very likely or we cannot actually make a difference to extinction risk.

Responding to evidence

Consider how we respond to new evidence. When you see two people holding hands, you should consider them more likely to be in a relationship than you did before. Although people sometimes hold hands when they are not in a relationship, you are more likely to observe hand-holding if they are. In general, you should update towards hypotheses that make your observations more likely. Equivalently, seeing two random people holding hands is surprising, but seeing a couple holding hands is mundane. We should update towards hypotheses under which our evidence would be less surprising.
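To make the updating rule concrete, here is a minimal sketch of Bayes’ rule applied to the hand-holding example. The specific probabilities are purely illustrative assumptions, not figures from the paper or the summary.

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# All numbers below are hypothetical, chosen only to illustrate the updating rule.

prior_relationship = 0.30   # assumed prior chance the two people are a couple
p_hands_if_couple = 0.50    # assumed chance a couple holds hands
p_hands_if_not = 0.02       # assumed chance non-couples hold hands

# Total probability of observing hand-holding.
p_hands = (p_hands_if_couple * prior_relationship
           + p_hands_if_not * (1 - prior_relationship))

# Posterior belief in "they are a couple" after seeing them hold hands.
posterior = p_hands_if_couple * prior_relationship / p_hands
print(f"prior: {prior_relationship:.0%}, posterior: {posterior:.0%}")
# The posterior (~91%) exceeds the prior (30%): we update towards the
# hypothesis that made the observation more likely.
```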

Importance of current people as evidence for near-term extinction

Many have argued that we live in a time of perils.1 Humanity has only just come into immense power––we could destroy ourselves with nuclear bombs or bioweapons. The current generation is incredibly important because we are uniquely placed to prevent human extinction. And it turns out our importance would be evidence for extinction over survival. To see why, consider how surprising our importance would be if we survived. Humanity has enormous potential––the future could contain trillions of people if we survive. If we were amongst the most important of these trillions of potential people, that would be incredibly surprising. It would be less surprising if we were the most important members of a smaller group––and we would be if humanity went extinct in the near term. Evidence favours the hypothesis which would make our observations less surprising, so our importance is evidence for near-term extinction.

Mogensen’s argument is closely related to the ‘original’ doomsday argument, which uses the fact that we are early rather than the fact that we seem unusually influential (see Leslie, 1998, and Carter and McCrea, 1983). However, Mogensen considers the original argument to have a flaw that his own version does not inherit. The problem is that the evidence the original argument appeals to is usually balanced out by another piece of evidence. The force of the “surprisingly early” observation comes from the principle that you should reason as if you were randomly selected from the observers in your reference class (see Bostrom, 2002): if humanity survives for eons, it is particularly surprising that you find yourself this early. However, this principle is always exactly counterbalanced by another, which says that you ought to assign initially higher credence to worlds containing many observers. The intuition is that we would be incredibly lucky to be amongst the only observers in the universe, so you must assign higher probability to hypotheses on which your reference class is bigger (see Dieks, 1992). Mogensen notes that another possible objection is that the argument depends on a particular choice of reference class, but he thinks this is a more general problem rather than a specific objection to this argument (see Goodman, 1983, and Hájek, 2007). Our apparent importance, however, is an additional piece of evidence that really can change our estimate of the chance of extinction.

How strong is this evidence?

Mogensen argues that the evidence is strong enough to command a dramatic shift in our beliefs: whatever you thought before, our importance should make extinction loom larger than previously suspected. We don’t have enough information to know exactly how strong this evidence is, but we can approximate its strength with some reasonable estimates.

From the dawn of time till the end of the world, how many people will there be? There have been about 60 billion so far.2  For simplicity, suppose that if we go extinct soon, there will be 100 billion in total. If we survive, there might be trillions more––suppose 100 trillion, for concreteness. Finally, suppose that regardless of whether or not we prevent extinction, we are amongst the most important 10 billion people to ever live.3

How much more surprising would our importance be if we survived, given these estimates? Being amongst the 10 billion most important of 100 billion people would be only a little surprising: we would occupy the top 10%. But it would be extraordinarily surprising if humanity survived, since a total of 100 trillion people would place us in the most important 0.01%.4 This means our observed importance is about 1,000 times more likely if we go extinct soon (top 10%) than if we survive (top 0.01%).
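The 1,000-to-1 likelihood ratio can be checked directly from the stipulated figures above; the short sketch below simply restates the summary’s arithmetic.

```python
# Stipulated population figures from the summary (approximations).
top_group = 10e9              # we are assumed to be among the 10 billion most important people
total_if_extinction = 100e9   # total people ever, if extinction comes soon
total_if_survival = 100e12    # total people ever, if humanity survives

# Probability of landing in the top group under each hypothesis,
# treating our position as a random draw from everyone who ever lives.
p_obs_given_extinction = top_group / total_if_extinction  # 0.10   -> top 10%
p_obs_given_survival = top_group / total_if_survival      # 0.0001 -> top 0.01%

likelihood_ratio = p_obs_given_extinction / p_obs_given_survival
print(round(likelihood_ratio))  # 1000: the observation is ~1,000x likelier given extinction
```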

This is strong evidence: strong enough to warrant a dramatic shift from a 5% chance to a 98% chance of extinction. We can see this using odds, which are used in betting and statistics and give an easy way to update on new evidence. A 5% chance is equivalent to odds of 1 to 19: of 20 total possibilities (1 + 19), only one involves extinction (a 1/20, or 5%, chance), and 19 involve us surviving the next few centuries (a 19/20 chance). Since our position amongst the 10 billion most important people would be 1,000 times as surprising if we survived the next few hundred years, the evidence moves the odds from 1 to 19 all the way up to 1,000 to 19. That is, of 1,019 total possibilities, 1,000 involve extinction. In other words, a 98% chance of extinction. The broader point is that, whatever your initial beliefs, our importance provides very strong evidence for extinction.
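Here is a minimal sketch of that odds update, starting from the stipulated 5% prior and the likelihood ratio of 1,000 computed above.

```python
# Convert a 5% prior on extinction into odds, apply the likelihood ratio,
# and convert back to a probability. Numbers are the summary's stipulations.
prior = 0.05
prior_odds = prior / (1 - prior)                 # 1 to 19, i.e. ~0.053

likelihood_ratio = 1000                          # from the population estimates above
posterior_odds = prior_odds * likelihood_ratio   # 1,000 to 19, i.e. ~52.6

posterior = posterior_odds / (1 + posterior_odds)
print(f"{posterior:.1%}")                        # ~98.1% chance of near-term extinction
```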

Conclusion

Mogensen’s argument presents our position of relative importance as evidence that we are likely to go extinct in the short term. This could be because the risk of extinction is higher than previously thought, or because we cannot affect the current extinction risk. If the risk is high simply because we cannot affect it, then we are not uniquely placed to prevent extinction––and we are not particularly important. Mogensen thinks that you could take the evidence either way––though he slightly favours the conclusion that we are not so important after all.

Footnotes

1 See Sagan (1994), Leslie (1998) and Parfit (2011).

2 Other sources give different estimates. It is likely that 60 billion is lower than the true number, but this doesn’t affect the end result. If we expected there to be around 200 billion people in total conditional on near-term extinction, then the argument would imply a ~96% chance of extinction rather than a ~98% chance. These numbers illustrate that the argument powerfully pushes up the chance of near-term extinction, though the exact figure should be treated as approximate.

3 It makes no difference to the argument whether you think we’re in the most important 1 billion, 100 million, or something else compatible with our seemingly incredible importance.

4 Surviving puts us in the most important 10 billion / 100 trillion = 10^10 / 10^14 = 0.01% of people.

References

Nick Bostrom (2002). Anthropic bias: observation selection effects in science and philosophy. Routledge. 

Brandon Carter and William H. McCrea (1983). The Anthropic Principle and its implications for biological evolution. Philosophical Transactions of the Royal Society A 310.

Dennis Dieks (1992). Doomsday – or: the danger of statistics. Philosophical Quarterly 42/166.

Nelson Goodman (1983). Fact, fiction, and forecast. Harvard University Press.

Alan Hájek (2007). The reference class problem is your problem too. Synthese 156. 

John Leslie (1998). The end of the world: the science and ethics of human extinction. Routledge. 

Derek Parfit (2011). On What Matters. Oxford University Press.

Carl Sagan (1994). Pale Blue Dot: A Vision of the Human Future in Space. Random House.
