Population ethics with thresholds
Walter Bossert (University of Montreal), Susumu Cato (University of Tokyo) and Kohei Kamaga (Sophia University)
GPI Working Paper No. 3-2025
We propose a new class of social quasi-orderings in a variable-population setting. In order to declare one utility distribution at least as good as another, the critical-level utilitarian value of the former must reach or surpass the value of the latter. For each possible absolute value of the difference between the population sizes of two distributions to be compared, we specify a non-negative threshold level and a threshold inequality. This inequality indicates whether the corresponding threshold level must be reached or surpassed in the requisite comparison. All of these threshold critical-level utilitarian quasi-orderings perform same-number comparisons by means of the utilitarian criterion. In addition to this entire class of quasi-orderings, we axiomatize two important subclasses. The members of the first subclass are associated with proportional threshold functions, and the well-known critical-band utilitarian quasi-orderings are included in this subclass. The quasi-orderings in the second subclass employ constant threshold functions; the members of this second subclass have, to the best of our knowledge, not been examined so far. Furthermore, we characterize the members of our class that (i) avoid the repugnant conclusion; (ii) avoid the sadistic conclusions; and (iii) respect the mere-addition principle.
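The comparison rule described in the abstract can be sketched computationally. The following is a minimal illustration, not the paper's formal definition: it assumes a critical level `c`, a non-negative threshold function `threshold(d)` of the absolute population-size difference `d`, and a predicate `strict(d)` encoding the threshold inequality (whether the threshold must be surpassed or merely reached). The particular numbers and function names are hypothetical choices for the example.

```python
def clu(u, c):
    """Critical-level utilitarian value: sum of (utility minus critical level c)."""
    return sum(ui - c for ui in u)

def at_least_as_good(x, y, c, threshold, strict):
    """Threshold critical-level utilitarian comparison (illustrative sketch).

    x is ranked at least as good as y iff the CLU value of x reaches
    (weak inequality) or surpasses (strict inequality) the CLU value of y
    plus a non-negative threshold depending on d = |size(x) - size(y)|.
    With threshold(0) = 0 and a weak inequality at d = 0, same-number
    comparisons reduce to the utilitarian criterion, as in the paper.
    """
    d = abs(len(x) - len(y))
    gap = clu(x, c) - clu(y, c)
    if strict(d):
        return gap > threshold(d)
    return gap >= threshold(d)

# Hypothetical example: critical level 2, proportional threshold t(d) = 1.5*d,
# strict inequality only for different-number comparisons.
c = 2.0
t = lambda d: 1.5 * d
s = lambda d: d > 0

same_size = at_least_as_good([3, 3], [2, 2], c, t, s)       # gap 2 >= 0
diff_size = at_least_as_good([3, 3, 3], [5, 4], c, t, s)    # gap -2, not > 1.5
print(same_size, diff_size)
```

A proportional threshold function, as in this example, corresponds to the first axiomatized subclass; replacing `t` with a constant function would give a member of the second subclass.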
Other working papers
Economic growth under transformative AI – Philip Trammell (Global Priorities Institute, Oxford University) and Anton Korinek (University of Virginia)
Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital’s substitutability for labor…
Imperfect Recall and AI Delegation – Eric Olav Chen (Global Priorities Institute, University of Oxford), Alexis Ghersengorin (Global Priorities Institute, University of Oxford) and Sami Petersen (Department of Economics, University of Oxford)
A principal wants to deploy an artificial intelligence (AI) system to perform some task. But the AI may be misaligned and aim to pursue a conflicting objective. The principal cannot restrict its options or deliver punishments. Instead, the principal is endowed with the ability to impose imperfect recall on the agent. The principal can then simulate the task and obscure whether it is real or part of a test. This allows the principal to screen misaligned AIs during testing and discipline their behaviour in deployment. By increasing the…
Tough enough? Robust satisficing as a decision norm for long-term policy analysis – Andreas Mogensen and David Thorstad (Global Priorities Institute, Oxford University)
This paper aims to open a dialogue between philosophers working in decision theory and operations researchers and engineers whose research addresses the topic of decision making under deep uncertainty. Specifically, we assess the recommendation to follow a norm of robust satisficing when making decisions under deep uncertainty in the context of decision analyses that rely on the tools of Robust Decision Making developed by Robert Lempert and colleagues at RAND …