Staking our future: deontic long-termism and the non-identity problem
Andreas Mogensen (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 9-2019
Greaves and MacAskill argue for axiological longtermism, according to which, in a wide class of decision contexts, the option that is ex ante best is the option that corresponds to the best lottery over histories from t onwards, where t is some date far in the future. They suggest that a stakes-sensitivity argument may be used to derive deontic longtermism from axiological longtermism, where deontic longtermism holds that, in a wide class of decision contexts, the option one ought to choose is the option that corresponds to the best lottery over histories from t onwards, where t is some date far in the future. This argument appeals to the Stakes Principle: when the axiological stakes are high, non-consequentialist constraints and prerogatives tend to be insignificant in comparison, so that what one ought to do is simply whichever option is best. I argue that there are strong grounds on which to reject the Stakes Principle. Furthermore, by reflecting on the Non-Identity Problem, I argue that there are plausible grounds for denying the existence of a sound argument from axiological longtermism to deontic longtermism insofar as we are concerned with ways of improving the value of the future of the kind that are focal in Greaves and MacAskill’s presentation.
Other papers
On the desire to make a difference – Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas (Global Priorities Institute, University of Oxford)
True benevolence is, most fundamentally, a desire that the world be better. It is natural and common, however, to frame thinking about benevolence indirectly, in terms of a desire to make a difference to how good the world is. This would be an innocuous shift if desires to make a difference were extensionally equivalent to desires that the world be better. This paper shows that, at least on some common ways of making a “desire to make a difference” precise, this extensional equivalence fails.
Is Existential Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models – Gustav Alexandrie (Global Priorities Institute, University of Oxford) and Maya Eden (Brandeis University)
What socially beneficial causes should philanthropists prioritize if they give equal ethical weight to the welfare of current and future generations? Many have argued that, because human extinction would result in a permanent loss of all future generations, extinction risk mitigation should be the top priority given this impartial stance. Using standard models of population dynamics, we challenge this conclusion. We first introduce a theoretical framework for quantifying undiscounted cost-effectiveness over…
In search of a biological crux for AI consciousness – Bradford Saad (Global Priorities Institute, University of Oxford)
Whether AI systems could be conscious is often thought to turn on whether consciousness is closely linked to biology. The rough thought is that if consciousness is closely linked to biology, then AI consciousness is impossible, and if consciousness is not closely linked to biology, then AI consciousness is possible—or, at any rate, it’s more likely to be possible. A clearer specification of the kind of link between consciousness and biology that is crucial for the possibility of AI consciousness would help organize inquiry into…