Population ethics with thresholds

Walter Bossert (University of Montreal), Susumu Cato (University of Tokyo) and Kohei Kamaga (Sophia University)

GPI Working Paper No. 3-2025

We propose a new class of social quasi-orderings in a variable-population setting. In order to declare one utility distribution at least as good as another, the critical-level utilitarian value of the former must reach or surpass the value of the latter. For each possible absolute value of the difference between the population sizes of two distributions to be compared, we specify a non-negative threshold level and a threshold inequality. This inequality indicates whether the corresponding threshold level must be reached or surpassed in the requisite comparison. All of these threshold critical-level utilitarian quasi-orderings perform same-number comparisons by means of the utilitarian criterion. In addition to this entire class of quasi-orderings, we axiomatize two important subclasses. The members of the first subclass are associated with proportional threshold functions, and the well-known critical-band utilitarian quasi-orderings are included in this subclass. The quasi-orderings in the second subclass employ constant threshold functions; the members of this second subclass have, to the best of our knowledge, not been examined so far. Furthermore, we characterize the members of our class that (i) avoid the repugnant conclusion; (ii) avoid the sadistic conclusions; and (iii) respect the mere-addition principle.
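In symbols (a minimal sketch in our own notation; the symbols $V_c$, $t$, and $R$ are illustrative, not the paper's): for a utility distribution $u = (u_1, \dots, u_n)$ and critical level $c$, the critical-level utilitarian value is
\[
V_c(u) = \sum_{i=1}^{n} (u_i - c),
\]
and a threshold critical-level utilitarian quasi-ordering declares $u$ at least as good as $v = (v_1, \dots, v_m)$ if and only if
\[
V_c(u) \mathrel{R_{|n-m|}} V_c(v) + t(|n - m|),
\]
where $t(\cdot) \geq 0$ is the threshold function and $R_{|n-m|}$ is the threshold inequality ($\geq$ or $>$) specified for the population-size difference $|n - m|$. Consistent with the abstract's statement that same-number comparisons are utilitarian, this sketch takes $t(0) = 0$ together with the weak inequality for $|n - m| = 0$.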

Other working papers

Economic growth under transformative AI – Philip Trammell (Global Priorities Institute, University of Oxford) and Anton Korinek (University of Virginia)

Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital’s substitutability for labor…
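To illustrate one such channel (our example, not necessarily the paper's specification): in a CES production function
\[
Y = \left( \alpha K^{\rho} + (1 - \alpha) L^{\rho} \right)^{1/\rho}, \qquad \rho \leq 1,
\]
the elasticity of substitution between capital $K$ and labor $L$ is $\sigma = 1/(1 - \rho)$, so AI that raises $\rho$ (and hence $\sigma$) makes capital a closer substitute for labor in producing output $Y$.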

How to neglect the long term – Hayden Wilkinson (Global Priorities Institute, University of Oxford)

Consider longtermism: the view that, at least in some of the most important decisions facing agents today, which options are morally best is determined by which are best for the long-term future. Various critics have argued that longtermism is false—indeed, that it is obviously false, and that we can reject it on normative grounds without close consideration of certain descriptive facts. In effect, it is argued, longtermism would be false even if real-world agents had promising means…

Existential risks from a Thomist Christian perspective – Stefan Riedener (University of Zurich)

Let’s say with Nick Bostrom that an ‘existential risk’ (or ‘x-risk’) is a risk that ‘threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development’ (2013, 15). There are a number of such risks: nuclear wars, developments in biotechnology or artificial intelligence, climate change, pandemics, supervolcanoes, asteroids, and so on (see e.g. Bostrom and Ćirković 2008). …