Beliefs about the end of humanity: How bad, likely, and important is human extinction?

Matthew Coleman (Northeastern University), Lucius Caviola (Global Priorities Institute, University of Oxford), Joshua Lewis (New York University) and Geoffrey Goodwin (University of Pennsylvania)

GPI Working Paper No. 1-2024

Human extinction would mean the end of humanity’s achievements, culture, and future potential. According to some ethical views, this would be a terrible outcome. But how do people think about human extinction? And how much do they prioritize preventing extinction over other societal issues? Across five empirical studies (N = 2,147; U.S. and China), we find that people consider extinction prevention a societal priority and deserving of greatly increased societal resources. However, despite estimating the likelihood of human extinction at 5% this century (U.S. median), people believe the chance would need to be around 30% for extinction prevention to be the very highest priority. In line with this, people consider extinction prevention to be only one among several important societal issues. People’s judgments about the relative importance of extinction prevention appear largely fixed and hard to change through reason-based interventions.

Other working papers

Aggregating Small Risks of Serious Harms – Tomi Francis (Global Priorities Institute, University of Oxford)

According to Partial Aggregation, a serious harm can be outweighed by a large number of somewhat less serious harms, but can outweigh any number of trivial harms. In this paper, I address the question of how we should extend Partial Aggregation to cases of risk, and especially to cases involving small risks of serious harms. I argue that, contrary to the most popular versions of the ex ante and ex post views, we should sometimes prevent a small risk that a large number of people will suffer serious harms rather than prevent…

What power-seeking theorems do not show – David Thorstad (Vanderbilt University)

Recent years have seen increasing concern that artificial intelligence may soon pose an existential risk to humanity. One leading ground for concern is that artificial agents may be power-seeking, aiming to acquire power and in the process disempowering humanity. A range of power-seeking theorems seek to give formal articulation to the idea that artificial agents are likely to be power-seeking. I argue that leading theorems face five challenges, then draw lessons from this result.

How effective is (more) money? Randomizing unconditional cash transfer amounts in the US – Ania Jaroszewicz (University of California San Diego), Oliver P. Hauser (University of Exeter), Jon M. Jachimowicz (Harvard Business School) and Julian Jamison (University of Oxford and University of Exeter)

We randomized 5,243 Americans in poverty to receive a one-time unconditional cash transfer (UCT) of $2,000 (two months’ worth of total household income for the median participant), $500 (half a month’s income), or nothing. We measured the effects of the UCTs on participants’ financial well-being, psychological well-being, cognitive capacity, and physical health through surveys administered one week, six weeks, and 15 weeks later. While bank data show that both UCTs increased expenditures, we find no evidence that…