Calibration dilemmas in the ethics of distribution

Jacob M. Nebel (University of Southern California) and H. Orri Stefánsson (Stockholm University and Swedish Collegium for Advanced Study)

GPI Working Paper No. 10-2021, published in Economics & Philosophy

This paper was the basis for the Parfit Memorial Lecture 2021.
This paper presents a new kind of problem in the ethics of distribution. The problem takes the form of several “calibration dilemmas,” in which intuitively reasonable aversion to small-stakes inequalities requires leading theories of distribution to recommend intuitively unreasonable aversion to large-stakes inequalities—e.g., inequalities in which half the population would gain an arbitrarily large quantity of well-being or resources. We first lay out a series of such dilemmas for a family of broadly prioritarian theories. We then consider a widely endorsed family of egalitarian views and show that, despite avoiding the dilemmas for prioritarianism, they are subject to even more forceful calibration dilemmas. We also show how our results challenge common utilitarian accounts of the badness of inequalities in resources (e.g., wealth inequality). These dilemmas leave us with a few options, all of which we find unpalatable. We conclude by laying out these options and suggesting avenues for further research.
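
To make the shape of such a dilemma concrete, here is a minimal sketch in the style of Rabin’s (2000) calibration theorem for risk aversion, which the paper’s dilemmas structurally resemble. The notation (well-being levels w_i, concave transform f, stakes g, l, G, L) is the textbook formalization of prioritarianism, not the paper’s own statement, and the details of the paper’s results differ.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Textbook prioritarian form (assumed notation, not the paper's own):
% overall value is the sum of a strictly concave transform f of each
% person's well-being w_i.
\[
  V(w_1,\dots,w_n) = \sum_{i=1}^{n} f(w_i), \qquad f' > 0,\ f'' < 0.
\]
% Schematic small-stakes premise: at every level w, the view prefers
% equality to an inequality in which half gain g and half lose a
% slightly smaller amount l, even though total well-being would rise:
\[
  2f(w) > f(w+g) + f(w-l) \qquad \text{for all } w, \text{ where } g > l.
\]
% Rabin-style large-stakes conclusion: the concavity required to secure
% the premise at every level then forces the view to prefer equality to
% inequalities in which half gain an arbitrarily large amount G while
% half lose only some modest fixed amount L:
\[
  2f(w) > f(w+G) + f(w-L) \qquad \text{for every } G > 0.
\]
\end{document}

On this schematic reading, the premise is the “intuitively reasonable” small-stakes aversion and the conclusion is the “intuitively unreasonable” large-stakes aversion the abstract describes; the dilemma is that the one calibrates to the other.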

Other working papers

High risk, low reward: A challenge to the astronomical value of existential risk mitigation – David Thorstad (Global Priorities Institute, University of Oxford)

Many philosophers defend two claims: the astronomical value thesis that it is astronomically important to mitigate existential risks to humanity, and existential risk pessimism, the claim that humanity faces high levels of existential risk. It is natural to think that existential risk pessimism supports the astronomical value thesis. In this paper, I argue that precisely the opposite is true. Across a range of assumptions, existential risk pessimism significantly reduces the value of existential risk mitigation…

AI alignment vs AI ethical treatment: Ten challenges – Adam Bradley (Lingnan University) and Bradford Saad (Global Priorities Institute, University of Oxford)

A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that, if we create AI systems that merit moral consideration, simultaneously avoiding both dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching…

AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)

Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …