Strong longtermism and the challenge from anti-aggregative moral views

Karri Heikkinen (University College London)

GPI Working Paper No. 5 - 2022

Greaves and MacAskill (2019) argue for strong longtermism, according to which, in a wide class of decision situations, the option that is ex ante best, and the one we ex ante ought to choose, is the option that makes the very long-run future go best. One important aspect of their argument is the claim that strong longtermism is compatible with a wide range of ethical assumptions, including plausible non-consequentialist views. In this essay, I challenge this claim. I argue that strong longtermism is incompatible with a range of non-aggregative and partially aggregative moral views. Furthermore, I argue that the conflict between these views and strong longtermism is so deep that those in favour of strong longtermism are better off arguing against them, rather than trying to modify their own view. The upshot of this discussion is that strong longtermism is not as robust to plausible variations in underlying ethical assumptions as Greaves and MacAskill claim. In particular, the stance we take on interpersonal aggregation has important implications for whether making the future go as well as possible should be a global priority.

Other working papers

Critical-set views, biographical identity, and the long term – Elliott Thornley (Global Priorities Institute, University of Oxford)

Critical-set views avoid the Repugnant Conclusion by subtracting some constant from the welfare score of each life in a population. These views are thus sensitive to facts about biographical identity: identity between lives. In this paper, I argue that questions of biographical identity give us reason to reject critical-set views and embrace the total view. I end with a practical implication. If we shift our credences towards the total view, we should also shift our efforts towards ensuring that humanity survives for the long term.

The scope of longtermism – David Thorstad (Global Priorities Institute, University of Oxford)

Longtermism holds roughly that in many decision situations, the best thing we can do is what is best for the long-term future. The scope question for longtermism asks: how large is the class of decision situations for which longtermism holds? Although longtermism was initially developed to describe the situation of…

Three mistakes in the moral mathematics of existential risk – David Thorstad (Global Priorities Institute, University of Oxford)

Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation: focusing on cumulative risk rather than period risk; ignoring background risk; and neglecting population dynamics. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to…