Calibration dilemmas in the ethics of distribution

Jacob M. Nebel (University of Southern California) and H. Orri Stefánsson (Stockholm University and Swedish Collegium for Advanced Study)

GPI Working Paper No. 10-2021, published in Economics & Philosophy

This paper was the basis for the 2021 Parfit Memorial Lecture. A recording of the lecture is available to view online.

This paper presents a new kind of problem in the ethics of distribution. The problem takes the form of several “calibration dilemmas,” in which intuitively reasonable aversion to small-stakes inequalities requires leading theories of distribution to recommend intuitively unreasonable aversion to large-stakes inequalities—e.g., inequalities in which half the population would gain an arbitrarily large quantity of well-being or resources. We first lay out a series of such dilemmas for a family of broadly prioritarian theories. We then consider a widely endorsed family of egalitarian views and show that, despite avoiding the dilemmas for prioritarianism, they are subject to even more forceful calibration dilemmas. We then show how our results challenge common utilitarian accounts of the badness of inequalities in resources (e.g., wealth inequality). These dilemmas leave us with a few options, all of which we find unpalatable. We conclude by laying out these options and suggesting avenues for further research.

Other working papers

Critical-set views, biographical identity, and the long term – Elliott Thornley (Global Priorities Institute, University of Oxford)

Critical-set views avoid the Repugnant Conclusion by subtracting some constant from the welfare score of each life in a population. These views are thus sensitive to facts about biographical identity: identity between lives. In this paper, I argue that questions of biographical identity give us reason to reject critical-set views and embrace the total view. I end with a practical implication. If we shift our credences towards the total view, we should also shift our efforts towards ensuring that humanity survives for the long term.

Economic growth under transformative AI – Philip Trammell (Global Priorities Institute, University of Oxford) and Anton Korinek (University of Virginia)

Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital’s substitutability for labor…

Tiny probabilities and the value of the far future – Petra Kosonen (Population Wellbeing Initiative, University of Texas at Austin)

Morally speaking, what matters the most is the far future – at least according to Longtermism. The reason why the far future is of utmost importance is that our acts’ expected influence on the value of the world is mainly determined by their consequences in the far future. The case for Longtermism is straightforward: Given the enormous number of people who might exist in the far future, even a tiny probability of affecting how the far future goes outweighs the importance of our acts’ consequences…