The Asymmetry, Uncertainty, and the Long Term
Teruji Thomas (Global Priorities Institute, Oxford University)
GPI Working Paper No. 11-2019, published in Philosophy and Phenomenological Research
The Asymmetry is the view in population ethics that, while we ought to avoid creating additional bad lives, there is no requirement to create additional good ones. The question is how to embed this view in a complete normative theory, and in particular one that treats uncertainty in a plausible way. After reviewing the many difficulties that arise in this area, I present general ‘supervenience principles’ that reduce arbitrary choices to uncertainty-free ones. In that sense they provide a method for aggregating across states of nature. But they also reduce arbitrary choices to one-person cases, and in that sense provide a method for aggregating across people. The principles are general in that they are compatible with total utilitarianism and ex post prioritarianism in fixed-population cases, and with a wide range of ways of extending these views to variable-population cases. I then illustrate these principles by writing down a complete theory of the Asymmetry, or rather several such theories to reflect some of the main substantive choice-points. In doing so I suggest a new way to deal with the intransitivity of the relation ‘ought to choose A over B’. Finally, I consider what these views have to say about the importance of extinction risk and the long-run future.
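As a rough formal gloss of the Asymmetry itself (an illustration only, not the paper's own formalism; the welfare function w and the zero threshold for a 'good' or 'bad' life are assumptions for exposition), the view can be stated as follows, for a choice of whether to create an additional person p with lifetime welfare w(p):

\[
w(p) < 0 \;\Rightarrow\; \text{creating } p \text{ is impermissible, other things equal;}
\]
\[
w(p) > 0 \;\Rightarrow\; \text{creating } p \text{ is permissible but not required.}
\]

The difficulty the paper addresses is how to extend a principle of this shape to choices under uncertainty and to many-person cases without generating implausible verdicts.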
Please note that this working paper contains some additional material about cyclic choice, and about ‘hard’ versions of the Asymmetry, according to which harms to independently existing people cannot be justified by the creation of good lives. For all other material, please refer to and cite the published version in Philosophy and Phenomenological Research.
Other working papers
Staking our future: deontic long-termism and the non-identity problem – Andreas Mogensen (Global Priorities Institute, Oxford University)
Greaves and MacAskill argue for axiological longtermism, according to which, in a wide class of decision contexts, the option that is ex ante best is the option that corresponds to the best lottery over histories from t onwards, where t is some date far in the future. They suggest that a stakes-sensitivity argument…
The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists – Elliott Thornley (Global Priorities Institute, University of Oxford)
I explain and motivate the shutdown problem: the problem of designing artificial agents that (1) shut down when a shutdown button is pressed, (2) don’t try to prevent or cause the pressing of the shutdown button, and (3) otherwise pursue goals competently. I prove three theorems that make the difficulty precise. These theorems suggest that agents satisfying some innocuous-seeming conditions will often try to prevent or cause the pressing of the shutdown button, even in cases where it’s costly to do so. I end by noting that…
Doomsday rings twice – Andreas Mogensen (Global Priorities Institute, Oxford University)
This paper considers the argument according to which, because we should regard it as a priori very unlikely that we are among the most important people who will ever exist, we should increase our confidence that the human species will not persist beyond the current historical era, which seems to represent…