Strong longtermism and the challenge from anti-aggregative moral views
Karri Heikkinen (University College London)
GPI Working Paper No. 5 - 2022
Greaves and MacAskill (2019) argue for strong longtermism, according to which, in a wide class of decision situations, the option that is ex ante best, and the one we ex ante ought to choose, is the option that makes the very long-run future go best. One important aspect of their argument is the claim that strong longtermism is compatible with a wide range of ethical assumptions, including plausible non-consequentialist views. In this essay, I challenge this claim. I argue that strong longtermism is incompatible with a range of non-aggregative and partially aggregative moral views. Furthermore, I argue that the conflict between these views and strong longtermism is so deep that those in favour of strong longtermism are better off arguing against them, rather than trying to modify their own view. The upshot of this discussion is that strong longtermism is not as robust to plausible variations in underlying ethical assumptions as Greaves and MacAskill claim. In particular, the stance we take on interpersonal aggregation has important implications for whether making the future go as well as possible should be a global priority.
Other working papers
How much should governments pay to prevent catastrophes? Longtermism’s limited role – Carl Shulman (Advisor, Open Philanthropy) and Elliott Thornley (Global Priorities Institute, University of Oxford)
Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. …
On two arguments for Fanaticism – Jeffrey Sanford Russell (University of Southern California)
Should we make significant sacrifices to ever-so-slightly lower the chance of extremely bad outcomes, or to ever-so-slightly raise the chance of extremely good outcomes? Fanaticism says yes: for every bad outcome, there is a tiny chance of extreme disaster that is even worse, and for every good outcome, there is a tiny chance of an enormous good that is even better.
Moral demands and the far future – Andreas Mogensen (Global Priorities Institute, University of Oxford)
I argue that moral philosophers have either misunderstood the problem of moral demandingness or at least failed to recognize important dimensions of the problem that undermine many standard assumptions. It has been assumed that utilitarianism concretely directs us to maximize welfare within a generation by transferring resources to people currently living in extreme poverty. In fact, utilitarianism seems to imply that any obligation to help people who are currently badly off is trumped by obligations to undertake actions targeted at improving the value…