Three mistakes in the moral mathematics of existential risk

David Thorstad (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 7-2023, forthcoming in Ethics

Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation: focusing on cumulative risk rather than period risk; ignoring background risk; and neglecting population dynamics. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk: the importance of treating existential risk as an intergenerational coordination problem; a surprising dialectical flip in the relevance of background risk levels to the case for existential risk mitigation; renewed importance of population dynamics, including the dynamics of digital minds; and a novel form of the cluelessness challenge to longtermism.
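The first two mistakes are easy to see in a toy model. The sketch below is my own illustration with made-up parameters r and v, not the paper's own estimates: each century humanity survives is worth v and carries a constant period risk r. Expected future value is then v(1-r)/r; a one-off reduction of this century's risk from r to r' adds only (r - r')v/r, at most one century's worth of value, since unmitigated background risk still truncates the future; and a permanent reduction, which a cumulative-risk framing quietly assumes, is worth orders of magnitude more.

```python
# Toy model (my own illustration, not the paper's): each century survived
# is worth v, and every century carries a constant "background" risk r.

def expected_future_value(r, v):
    # Sum over t >= 1 of v * (1 - r)**t = v * (1 - r) / r.
    return v * (1 - r) / r

def gain_one_period(r, r_new, v):
    # Gain from lowering risk in the first century alone from r to r_new,
    # leaving all later (background) risk untouched: (r - r_new) * v / r.
    return (r - r_new) * v / r

def gain_permanent(r, r_new, v):
    # Gain from lowering risk in *every* century -- the much stronger
    # reading that a cumulative-risk framing quietly assumes.
    return expected_future_value(r_new, v) - expected_future_value(r, v)

r, v = 0.01, 1.0                     # 1% risk per century, unit value
print(expected_future_value(r, v))   # 99.0 centuries of value in expectation
print(gain_one_period(r, 0.005, v))  # 0.5 -- and at most v even if r_new = 0
print(gain_permanent(r, 0.005, v))   # 100.0 -- 200x the one-period gain
```

The gap between the last two numbers is the heart of the first two mistakes: most feasible interventions buy a one-period reduction, whose value is capped by the background risk left in place.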

Other working papers

Existential Risk and Growth – Leopold Aschenbrenner and Philip Trammell (Global Priorities Institute and Department of Economics, University of Oxford)

Technology increases consumption but can create or mitigate existential risk to human civilization. Though accelerating technological development may increase the hazard rate (the risk of existential catastrophe per period) in the short run, two considerations suggest that acceleration typically decreases the risk that such a catastrophe ever occurs. First, acceleration decreases the time spent at each technology level. Second, given a policy option to sacrifice consumption for safety, acceleration motivates greater sacrifices…
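A minimal sketch of the first consideration (hazards and durations are invented for illustration; this is not the authors' model): if technology level i carries hazard h_i per period and is occupied for t_i periods, the probability a catastrophe ever occurs is 1 - Π_i (1 - h_i)^t_i, so uniformly shrinking each t_i shrinks the ever-risk.

```python
# Illustrative parameters of my own: acceleration halves the time spent
# at each technology level, lowering the chance of ever-catastrophe.
import math

def ever_risk(hazards, times):
    """Probability an existential catastrophe ever occurs."""
    survive = math.prod((1 - h) ** t for h, t in zip(hazards, times))
    return 1 - survive

hazards = [0.001, 0.01, 0.001]   # hazard rises, then safety catches up
baseline = [50, 50, 50]          # periods spent at each tech level
accelerated = [25, 25, 25]       # acceleration halves each duration

print(ever_risk(hazards, baseline))     # ~0.45
print(ever_risk(hazards, accelerated))  # ~0.26: less time spent at risk
```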

Against Anti-Fanaticism – Christian Tarsney (Population Wellbeing Initiative, University of Texas at Austin)

Should you be willing to forego any sure good for a tiny probability of a vastly greater good? Fanatics say you should, anti-fanatics say you should not. Anti-fanaticism has great intuitive appeal. But, I argue, these intuitions are untenable, because satisfying them in their full generality is incompatible with three very plausible principles: acyclicity, a minimal dominance principle, and the principle that any outcome can be made better or worse. This argument against anti-fanaticism can be…
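For concreteness, the fanatic's trade in bare expected-value terms (numbers are mine, purely illustrative):

```python
# A sure good vs. a tiny chance of a vastly greater good.
sure_good = 1.0
p, jackpot = 1e-9, 1e12          # one-in-a-billion shot at a trillion units
print(p * jackpot > sure_good)   # True: expected value favors the gamble
```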

Economic growth under transformative AI – Philip Trammell (Global Priorities Institute, University of Oxford) and Anton Korinek (University of Virginia)

Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital’s substitutability for labor…
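One standard way to formalize capital's substitutability for labor is a CES production function. The sketch below (parameters are mine, purely illustrative, and not drawn from the paper) shows how, with an elasticity of substitution above one, capital deepening drives the labor share toward zero, breaking the second stylized fact.

```python
# CES production: Y = (a*K**rho + (1-a)*L**rho)**(1/rho), with
# elasticity of substitution 1/(1-rho). Parameters are illustrative.

def ces_output(K, L, a=0.5, rho=0.5):
    return (a * K**rho + (1 - a) * L**rho) ** (1 / rho)

def labor_share(K, L, a=0.5, rho=0.5):
    # Competitive labor share: (dY/dL) * L / Y = (1-a) * (L/Y)**rho.
    Y = ces_output(K, L, a, rho)
    return (1 - a) * (L / Y) ** rho

# With rho > 0 (elasticity 2 here, so capital and labor are gross
# substitutes), AI-driven capital accumulation erodes the labor share.
for K in (1, 10, 100, 1000):
    print(K, round(labor_share(K, L=1.0), 3))   # 0.5, 0.24, 0.091, 0.031
```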