Beliefs about the end of humanity: How bad, likely, and important is human extinction?

Matthew Coleman (Northeastern University), Lucius Caviola (Global Priorities Institute, University of Oxford), Joshua Lewis (New York University), and Geoffrey Goodwin (University of Pennsylvania)

GPI Working Paper No. 1-2024

Human extinction would mean the end of humanity’s achievements, culture, and future potential. According to some ethical views, this would be a terrible outcome. But how do people think about human extinction? And how much do they prioritize preventing extinction over other societal issues? Across five empirical studies (N = 2,147; U.S. and China), we find that people consider extinction prevention a societal priority deserving of greatly increased societal resources. However, despite estimating the likelihood of human extinction this century at 5% (U.S. median), people believe that the chances would need to be around 30% for extinction prevention to be the very highest priority. In line with this, people consider extinction prevention to be only one among several important societal issues. People’s judgments about the relative importance of extinction prevention appear relatively fixed and hard to change through reason-based interventions.

Other working papers

Against the singularity hypothesis – David Thorstad (Global Priorities Institute, University of Oxford)

The singularity hypothesis is a radical hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on scientifically implausible growth assumptions. …

What power-seeking theorems do not show – David Thorstad (Vanderbilt University)

Recent years have seen increasing concern that artificial intelligence may soon pose an existential risk to humanity. One leading ground for concern is that artificial agents may be power-seeking, aiming to acquire power and in the process disempowering humanity. A range of power-seeking theorems seek to give formal articulation to the idea that artificial agents are likely to be power-seeking. I argue that leading theorems face five challenges, then draw lessons from this result.

High risk, low reward: A challenge to the astronomical value of existential risk mitigation – David Thorstad (Global Priorities Institute, University of Oxford)

Many philosophers defend two claims: the astronomical value thesis that it is astronomically important to mitigate existential risks to humanity, and existential risk pessimism, the claim that humanity faces high levels of existential risk. It is natural to think that existential risk pessimism supports the astronomical value thesis. In this paper, I argue that precisely the opposite is true. Across a range of assumptions, existential risk pessimism significantly reduces the value of existential risk mitigation…