Beliefs about the end of humanity: How bad, likely, and important is human extinction?
Matthew Coleman (Northeastern University), Lucius Caviola (Global Priorities Institute, University of Oxford), Joshua Lewis (New York University) and Geoffrey Goodwin (University of Pennsylvania)
GPI Working Paper No. 1-2024
Human extinction would mean the end of humanity’s achievements, culture, and future potential. According to some ethical views, this would be a terrible outcome. But how do people think about human extinction? And how much do they prioritize preventing extinction over other societal issues? Across five empirical studies (N = 2,147; U.S. and China), we find that people consider extinction prevention a societal priority that deserves greatly increased societal resources. However, despite estimating the likelihood of human extinction to be 5% this century (U.S. median), people believe the chance would need to be around 30% for extinction prevention to be the very highest priority. In line with this, people consider extinction prevention to be only one among several important societal issues. People’s judgments about the relative importance of extinction prevention appear relatively fixed and difficult to change through reason-based interventions.
Other working papers
Quadratic Funding with Incomplete Information – Luis M. V. Freitas (Global Priorities Institute, University of Oxford) and Wilfredo L. Maldonado (University of Sao Paulo)
Quadratic funding is a public good provision mechanism that satisfies desirable theoretical properties, such as efficiency under complete information, and has been gaining popularity in practical applications. We evaluate this mechanism in a setting of incomplete information regarding individual preferences and show that efficiency holds only under knife-edge conditions. We also estimate the inefficiency of the mechanism in a variety of settings and show, in particular, that inefficiency increases…
Tiny probabilities and the value of the far future – Petra Kosonen (Population Wellbeing Initiative, University of Texas at Austin)
Morally speaking, what matters the most is the far future – at least according to Longtermism. The reason why the far future is of utmost importance is that our acts’ expected influence on the value of the world is mainly determined by their consequences in the far future. The case for Longtermism is straightforward: Given the enormous number of people who might exist in the far future, even a tiny probability of affecting how the far future goes outweighs the importance of our acts’ consequences…
Aggregating Small Risks of Serious Harms – Tomi Francis (Global Priorities Institute, University of Oxford)
According to Partial Aggregation, a serious harm can be outweighed by a large number of somewhat less serious harms, but can outweigh any number of trivial harms. In this paper, I address the question of how we should extend Partial Aggregation to cases of risk, and especially to cases involving small risks of serious harms. I argue that, contrary to the most popular versions of the ex ante and ex post views, we should sometimes prevent a small risk that a large number of people will suffer serious harms rather than prevent…