Beliefs about the end of humanity: How bad, likely, and important is human extinction?

Matthew Coleman (Northeastern University), Lucius Caviola (Global Priorities Institute, University of Oxford), Joshua Lewis (New York University), and Geoffrey Goodwin (University of Pennsylvania)

GPI Working Paper No. 1-2024

Human extinction would mean the end of humanity’s achievements, culture, and future potential. According to some ethical views, this would be a terrible outcome. But how do people think about human extinction? And how much do they prioritize preventing extinction over other societal issues? Across five empirical studies (N = 2,147; U.S. and China), we find that people consider extinction prevention a societal priority deserving of greatly increased societal resources. However, despite estimating the likelihood of human extinction at 5% this century (U.S. median), people believe the chance would need to be around 30% for extinction prevention to become the very highest priority. In line with this, people consider extinction prevention to be only one among several important societal issues. People’s judgments about the relative importance of extinction prevention appear relatively fixed and difficult to change through reason-based interventions.

Other working papers

Towards shutdownable agents via stochastic choice – Elliott Thornley (Global Priorities Institute, University of Oxford), Alexander Roman (New College of Florida), Christos Ziakas (Independent), Leyton Ho (Brown University), and Louis Thomson (University of Oxford)

Some worry that advanced artificial agents may resist being shut down. The Incomplete Preferences Proposal (IPP) is an idea for ensuring that doesn’t happen. A key part of the IPP is using a novel ‘Discounted REward for Same-Length Trajectories (DREST)’ reward function to train agents to (1) pursue goals effectively conditional on each trajectory-length (be ‘USEFUL’), and (2) choose stochastically between different trajectory-lengths (be ‘NEUTRAL’ about trajectory-lengths). In this paper, we propose evaluation metrics…

Maximal cluelessness – Andreas Mogensen (Global Priorities Institute, University of Oxford)

I argue that many of the priority rankings that have been proposed by effective altruists seem to be in tension with apparently reasonable assumptions about the rational pursuit of our aims in the face of uncertainty. The particular issue on which I focus arises from recognition of the overwhelming importance…

Simulation expectation – Teruji Thomas (Global Priorities Institute, University of Oxford)

I present a new argument for the claim that I’m much more likely to be a person living in a computer simulation than a person living at the ground level of reality. I consider whether this argument can be blocked by an externalist view of what my evidence supports, and I urge caution against the easy assumption that actually finding lots of simulations would increase the odds that I myself am in one.