Staking our future: deontic long-termism and the non-identity problem

Andreas Mogensen (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 9-2019

Greaves and MacAskill argue for axiological longtermism, according to which, in a wide class of decision contexts, the option that is ex ante best is the option that corresponds to the best lottery over histories from t onwards, where t is some date far in the future. They suggest that a stakes-sensitivity argument may be used to derive deontic longtermism from axiological longtermism, where deontic longtermism holds that in a wide class of decision contexts, the option one ought to choose is the option that corresponds to the best lottery over histories from t onwards, where t is some date far in the future. This argument appeals to the Stakes Principle: when the axiological stakes are high, non-consequentialist constraints and prerogatives tend to be insignificant in comparison, so that what one ought to do is simply whichever option is best. I argue that there are strong grounds on which to reject the Stakes Principle. Furthermore, by reflecting on the Non-Identity Problem, I argue that there are plausible grounds for denying the existence of a sound argument from axiological longtermism to deontic longtermism insofar as we are concerned with ways of improving the value of the future of the kind that are focal in Greaves and MacAskill's presentation.
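To fix ideas, the two theses can be put side by side in simple decision-theoretic notation. This is a hedged sketch rather than Greaves and MacAskill's own formalism: take O to be the set of available options, p_o the lottery over histories from t onwards that option o induces, and \succeq the axiological betterness ordering over such lotteries.

% A sketch, not the authors' formalism: O, p_o, and \succeq are illustrative.
\[
  \text{Axiological longtermism:}\quad
  o \text{ is \emph{ex ante} best} \;\iff\; p_o \succeq p_{o'} \ \text{for all } o' \in O.
\]
\[
  \text{Deontic longtermism:}\quad
  o \text{ is what one ought to choose} \;\iff\; p_o \succeq p_{o'} \ \text{for all } o' \in O.
\]

On this rendering, the Stakes Principle is the bridge premise meant to carry us from the first biconditional to the second whenever the axiological stakes are high; the paper's argument targets exactly that bridge.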

Other papers

What power-seeking theorems do not show – David Thorstad (Vanderbilt University)

Recent years have seen increasing concern that artificial intelligence may soon pose an existential risk to humanity. One leading ground for concern is that artificial agents may be power-seeking, aiming to acquire power and in the process disempowering humanity. A range of power-seeking theorems seek to give formal articulation to the idea that artificial agents are likely to be power-seeking. I argue that leading theorems face five challenges, then draw lessons from this result.

Egyptology and Fanaticism – Hayden Wilkinson (Global Priorities Institute, University of Oxford)

Various decision theories share a troubling implication. They imply that, for any finite amount of value, it would be better to wager it all for a vanishingly small probability of some greater value. Counterintuitive as it might be, this fanaticism has seemingly compelling independent arguments in its favour. In this paper, I consider perhaps the most prima facie compelling such argument: an Egyptology argument (an analogue of the Egyptology argument from population ethics). …
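The troubling implication can be stated more precisely. The following is a hedged formalization of fanaticism, not necessarily Wilkinson's exact statement: \succ is a betterness ordering over lotteries, and outcomes are measured in units of value.

% An illustrative statement of the fanatical implication; notation is mine.
\[
  \forall v \in \mathbb{R}\ \ \forall p \in (0,1)\ \ \exists V:\quad
  \big\langle V \text{ with probability } p,\ 0 \text{ otherwise} \big\rangle
  \;\succ\;
  \big\langle v \text{ with certainty} \big\rangle.
\]

That is: however small the probability p and however large the sure thing v, some finite payoff V makes the long-shot gamble better.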

The epistemic challenge to longtermism – Christian Tarsney (Global Priorities Institute, University of Oxford)

Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict, perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism…
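The structure of the objection can be sketched as follows (a hedged illustration; the decomposition and symbols are mine, not Tarsney's): write the expected value of an action a as a near-term and a long-term component, split at some far-future date t.

% Illustrative decomposition; not the paper's own notation.
\[
  \mathbb{E}V(a) \;=\; \mathbb{E}V_{<t}(a) \;+\; \mathbb{E}V_{\ge t}(a).
\]

The epistemic objection grants that realized long-term value may be astronomically large, but holds that our evidence ties \mathbb{E}V_{\ge t}(a) so loosely to our choices that it is nearly constant across available actions, leaving the ranking of actions fixed by the near-term term.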