Longtermism in an Infinite World

Christian J. Tarsney (Population Wellbeing Initiative, University of Texas at Austin) and Hayden Wilkinson (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 4-2023, forthcoming in Essays on Longtermism

The case for longtermism depends on the vast potential scale of the future. But that same vastness also threatens to undermine the case for longtermism: If the future contains infinite value, then many theories of value that support longtermism (e.g., risk-neutral total utilitarianism) seem to imply that no available action is better than any other. And some strategies for avoiding this conclusion (e.g., exponential time discounting) yield views that are much less supportive of longtermism. This chapter explores how the potential infinitude of the future affects the case for longtermism. We argue that (i) there are reasonable prospects for extending risk-neutral totalism and similar views to infinite contexts and (ii) many such extension strategies still support standard arguments for longtermism, since they imply that when we can only affect (or only predictably affect) a finite part of an infinite universe, we can reason as if only that finite part existed. On the other hand, (iii) there are improbable but not impossible physical scenarios in which our actions can have infinite predictable effects on the far future, and these scenarios create substantial unresolved problems for both infinite ethics and the case for longtermism.

Other working papers

The unexpected value of the future – Hayden Wilkinson (Global Priorities Institute, University of Oxford)

Various philosophers accept moral views that are impartial, additive, and risk-neutral with respect to betterness. But, if that risk neutrality is spelt out according to expected value theory alone, such views face a dire reductio ad absurdum. If the expected sum of value in humanity’s future is undefined—if, e.g., the probability distribution over possible values of the future resembles the Pasadena game, or a Cauchy distribution—then those views say that no real-world option is ever better than any other. And, as I argue…

Tough enough? Robust satisficing as a decision norm for long-term policy analysis – Andreas Mogensen and David Thorstad (Global Priorities Institute, University of Oxford)

This paper aims to open a dialogue between philosophers working in decision theory and operations researchers and engineers whose research addresses the topic of decision making under deep uncertainty. Specifically, we assess the recommendation to follow a norm of robust satisficing when making decisions under deep uncertainty in the context of decision analyses that rely on the tools of Robust Decision Making developed by Robert Lempert and colleagues at RAND …

Existential risk and growth – Leopold Aschenbrenner (Columbia University)

Human activity can create or mitigate risks of catastrophes, such as nuclear war, climate change, pandemics, or artificial intelligence run amok. These could even imperil the survival of human civilization. What is the relationship between economic growth and such existential risks? In a model of directed technical change, with moderate parameters, existential risk follows a Kuznets-style inverted U-shape. …