Beliefs about the end of humanity: How bad, likely, and important is human extinction?
Matthew Coleman (Northeastern University), Lucius Caviola (Global Priorities Institute, University of Oxford), Joshua Lewis (New York University) and Geoffrey Goodwin (University of Pennsylvania)
GPI Working Paper No. 1-2024
Human extinction would mean the end of humanity’s achievements, culture, and future potential. According to some ethical views, this would be a terrible outcome. But how do people think about human extinction? And how much do they prioritize preventing extinction over other societal issues? Across five empirical studies (N = 2,147; U.S. and China), we find that people consider extinction prevention a societal priority and deserving of greatly increased societal resources. However, despite estimating the likelihood of human extinction at 5% this century (U.S. median), people believe the chance would need to be around 30% for extinction prevention to become the very highest priority. In line with this, people consider extinction prevention to be only one among several important societal issues. People’s judgments about the relative importance of extinction prevention appear relatively fixed and hard to change through reason-based interventions.
Other working papers
Aggregating Small Risks of Serious Harms – Tomi Francis (Global Priorities Institute, University of Oxford)
According to Partial Aggregation, a serious harm can be outweighed by a large number of somewhat less serious harms, but can outweigh any number of trivial harms. In this paper, I address the question of how we should extend Partial Aggregation to cases of risk, and especially to cases involving small risks of serious harms. I argue that, contrary to the most popular versions of the ex ante and ex post views, we should sometimes prevent a small risk that a large number of people will suffer serious harms rather than prevent…
Existential Risk and Growth – Philip Trammell (Global Priorities Institute and Department of Economics, University of Oxford) and Leopold Aschenbrenner
Technologies may pose existential risks to civilization. Though accelerating technological development may increase the risk of anthropogenic existential catastrophe per period in the short run, two considerations suggest that a sector-neutral acceleration decreases the risk that such a catastrophe ever occurs. First, acceleration decreases the time spent at each technology level. Second, since a richer society is willing to sacrifice more for safety, optimal policy can yield an “existential risk Kuznets curve”; acceleration…
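To make the first consideration concrete, here is a minimal sketch in our own notation, not the paper’s full model: suppose the per-period hazard of anthropogenic catastrophe depends only on the current technology level T, and write δ(T(t)) for that hazard at time t. Then, changing the variable of integration from time to technology level,

\[
\Pr(\text{catastrophe ever occurs}) \;=\; 1 - \exp\!\left(-\int_0^\infty \delta\big(T(t)\big)\,dt\right) \;=\; 1 - \exp\!\left(-\int_{T_0}^{T_\infty} \frac{\delta(T)}{\dot T}\,dT\right),
\]

so a uniform acceleration that raises the growth rate \(\dot T\) along the same technology path scales the cumulative hazard down: less time is spent at each risky level. This is an illustrative simplification under stated assumptions, abstracting from the policy channel behind the Kuznets-curve result.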
Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)
Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.