Strong longtermism and the challenge from anti-aggregative moral views
Karri Heikkinen (University College London)
GPI Working Paper No. 5-2022
Greaves and MacAskill (2019) argue for strong longtermism, according to which, in a wide class of decision situations, the option that is ex ante best, and the one we ex ante ought to choose, is the option that makes the very long-run future go best. One important aspect of their argument is the claim that strong longtermism is compatible with a wide range of ethical assumptions, including plausible non-consequentialist views. In this essay, I challenge this claim. I argue that strong longtermism is incompatible with a range of non-aggregative and partially aggregative moral views. Furthermore, I argue that the conflict between these views and strong longtermism is so deep that those in favour of strong longtermism are better off arguing against them, rather than trying to modify their own view. The upshot of this discussion is that strong longtermism is not as robust to plausible variations in underlying ethical assumptions as Greaves and MacAskill claim. In particular, the stance we take on interpersonal aggregation has important implications for whether making the future go as well as possible should be a global priority.
Other working papers
The weight of suffering – Andreas Mogensen (Global Priorities Institute, University of Oxford)
How should we weigh suffering against happiness? This paper highlights the existence of an argument from intuitively plausible axiological principles to the striking conclusion that in comparing different populations, there exists some depth of suffering that cannot be compensated for by any measure of well-being. In addition to a number of structural principles, the argument relies on two key premises. The first is the contrary of the so-called Reverse Repugnant Conclusion…
Non-additive axiologies in large worlds – Christian Tarsney and Teruji Thomas (Global Priorities Institute, University of Oxford)
Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say ‘yes’, but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say ‘no’…
Against the singularity hypothesis – David Thorstad (Global Priorities Institute, University of Oxford)
The singularity hypothesis is a radical hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on scientifically implausible growth assumptions. …