Is Existential Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models

Gustav Alexandrie (Global Priorities Institute, University of Oxford) and Maya Eden (Brandeis University)

GPI Working Paper No. 5-2023

What socially beneficial causes should philanthropists prioritize if they give equal ethical weight to the welfare of current and future generations? Many have argued that, because human extinction would result in a permanent loss of all future generations, extinction risk mitigation should be the top priority given this impartial stance. Using standard models of population dynamics, we challenge this conclusion. We first introduce a theoretical framework for quantifying undiscounted cost-effectiveness over the long term. We then show that standard population models imply that there are interventions other than extinction risk mitigation that can produce persistent social benefits. In fact, these social benefits are large enough to render the associated interventions at least as cost-effective as extinction risk mitigation.
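The abstract's central comparison, undiscounted benefits per unit cost summed over the long run, can be illustrated with a toy calculation. The sketch below is not the authors' framework or population model; the horizon, extinction probability, population figures, and costs are invented purely to show the form of an undiscounted cost-effectiveness comparison between a one-off reduction in extinction risk and an intervention whose benefits persist.

```python
# Toy comparison of undiscounted long-run cost-effectiveness.
# All parameters are hypothetical illustrations, not values from the paper.

T = 1_000                # horizon in generations (long, undiscounted)
BASE_POP = 10e9          # baseline population per generation (hypothetical)
P_EXT = 1e-3             # assumed per-generation extinction probability


def expected_person_generations(extra_pop=0.0, ext_reduction=0.0):
    """Expected person-generations over the horizon, given a persistent
    population increase and/or a one-off cut to first-generation
    extinction risk."""
    total, survival = 0.0, 1.0
    for t in range(T):
        hazard = P_EXT - (ext_reduction if t == 0 else 0.0)
        survival *= 1.0 - hazard
        total += survival * (BASE_POP + extra_pop)
    return total


baseline = expected_person_generations()

# Intervention A: a one-off reduction in near-term extinction risk.
benefit_a = expected_person_generations(ext_reduction=1e-6) - baseline
# Intervention B: an intervention with a persistent benefit, modeled here
# as a lasting increase in population size rather than a one-off gain.
benefit_b = expected_person_generations(extra_pop=1e4) - baseline

COST = 1e9               # hypothetical equal budgets for both interventions
print(f"A: {benefit_a / COST:.3g} person-generations per dollar")
print(f"B: {benefit_b / COST:.3g} person-generations per dollar")
```

Under these made-up parameters the two interventions come out comparably cost-effective; the point of the sketch is only the structure of the calculation (undiscounted summation of benefits divided by cost), not the paper's quantitative results.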

Other working papers

The scope of longtermism – David Thorstad (Global Priorities Institute, University of Oxford)

Longtermism holds roughly that in many decision situations, the best thing we can do is what is best for the long-term future. The scope question for longtermism asks: how large is the class of decision situations for which longtermism holds? Although longtermism was initially developed to describe the situation of…

Existential Risk and Growth – Philip Trammell (Global Priorities Institute and Department of Economics, University of Oxford) and Leopold Aschenbrenner

Technologies may pose existential risks to civilization. Though accelerating technological development may increase the risk of anthropogenic existential catastrophe per period in the short run, two considerations suggest that a sector-neutral acceleration decreases the risk that such a catastrophe ever occurs. First, acceleration decreases the time spent at each technology level. Second, since a richer society is willing to sacrifice more for safety, optimal policy can yield an “existential risk Kuznets curve”; acceleration…

Doomsday and objective chance – Teruji Thomas (Global Priorities Institute, Oxford University)

Lewis’s Principal Principle says that one should usually align one’s credences with the known chances. In this paper I develop a version of the Principal Principle that deals well with some exceptional cases related to the distinction between metaphysical and epistemic modality. I explain how this principle gives a unified account of the Sleeping Beauty problem and chance-based principles of anthropic reasoning…