The structure of critical sets

Walter Bossert (University of Montreal), Susumu Cato (University of Tokyo) and Kohei Kamaga (Sophia University)

GPI Working Paper No. 4-2025

The purpose of this paper is to address some ambiguities and misunderstandings that appear in previous studies of population ethics. In particular, we examine the structure of the intervals that are employed in assessing the value of adding people to an existing population. Our focus is on critical-band utilitarianism and critical-range utilitarianism, two commonly used population theories that employ intervals, and we show that some previously assumed equivalences are not true in general. The possible discrepancies can be attributed to the observation that critical bands need not coincide with critical sets. The critical set for a moral quasi-ordering consists of all utility numbers such that adding someone with a utility level in this set leads to a distribution that is not comparable to the original (non-augmented) distribution. Critical bands and critical sets coincide only when the critical band is an open interval. In this respect, there is a stark contrast between critical-band utilitarianism and critical-range utilitarianism: the critical set that corresponds to a critical-range quasi-ordering always coincides with the interval used to define the requisite quasi-ordering. As a consequence, an often-presumed equivalence of critical-band utilitarianism and critical-range utilitarianism is not valid unless, again, the critical band and the critical range (and, consequently, the requisite critical sets) are given by the same open interval.
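The band/range contrast can be made concrete with a small numerical sketch. The script below assumes the standard quantifier-based definitions from this literature (our reconstruction for illustration, not necessarily the paper's exact formalism): under critical-band utilitarianism, the augmented distribution is weakly at least as good as the original if and only if it is weakly better at every critical level in the band, whereas critical-range utilitarianism declares the addition strictly better (worse) if and only if it is strictly better (worse) at every level in the range, and incomparable otherwise. The function names band_verdict and range_verdict are ours.

# A minimal sketch, assuming quantifier-based definitions of the two
# quasi-orderings (an illustrative reconstruction, not the paper's exact
# formalism). Adding a person at utility c to a distribution with total
# utility T yields critical-level value T + (c - k) at critical level k,
# so comparing the augmented and original distributions reduces to
# comparing c with k.

def band_verdict(c, lo, hi):
    """Critical-band utilitarianism with band endpoints lo < hi.

    The augmented distribution is weakly at least as good iff c >= k for
    EVERY k in the band, i.e. iff c >= hi; the original is weakly at least
    as good iff c <= lo. (These thresholds are the same whether the band
    is open or closed, which is why the resulting critical set is always
    the open interval (lo, hi).)
    """
    aug_weakly_better = c >= hi
    orig_weakly_better = c <= lo
    if aug_weakly_better and not orig_weakly_better:
        return "better"
    if orig_weakly_better and not aug_weakly_better:
        return "worse"
    return "incomparable"  # neither weak relation holds: c lies in (lo, hi)

def range_verdict(c, lo, hi, closed=True):
    """Critical-range utilitarianism with range endpoints lo < hi.

    The augmented distribution is strictly better iff c > k for EVERY k in
    the range, strictly worse iff c < k for every such k, and incomparable
    otherwise -- so the incomparable levels c are exactly the range itself.
    """
    if closed:
        if c > hi:
            return "better"
        if c < lo:
            return "worse"
        return "incomparable"  # lo <= c <= hi: the closed range
    if c >= hi:
        return "better"
    if c <= lo:
        return "worse"
    return "incomparable"  # lo < c < hi: the open range

if __name__ == "__main__":
    lo, hi = 0.0, 10.0
    for c in (lo, (lo + hi) / 2, hi):
        print(f"c = {c:4.1f}: band -> {band_verdict(c, lo, hi):12}"
              f" range -> {range_verdict(c, lo, hi)}")

Running the sketch with the closed interval from 0 to 10 displays the asymmetry described above: the closed band yields strict verdicts ("worse" and "better") at the two endpoints, so its critical set is the open interval (0, 10), while the closed range remains incomparable at both endpoints, so its critical set is [0, 10], the range itself.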

Other working papers

How much should governments pay to prevent catastrophes? Longtermism’s limited role – Carl Shulman (Advisor, Open Philanthropy) and Elliott Thornley (Global Priorities Institute, University of Oxford)

Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. …

Is Existential Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models – Gustav Alexandrie (Global Priorities Institute, University of Oxford) and Maya Eden (Brandeis University)

What socially beneficial causes should philanthropists prioritize if they give equal ethical weight to the welfare of current and future generations? Many have argued that, because human extinction would result in a permanent loss of all future generations, extinction risk mitigation should be the top priority given this impartial stance. Using standard models of population dynamics, we challenge this conclusion. We first introduce a theoretical framework for quantifying undiscounted cost-effectiveness over…

Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)

A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…
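To make the notion of exploitation concrete, here is a generic money-pump sketch of the sort standardly used in this literature (our illustration, not an example drawn from the paper): an agent whose preferences cycle will pay a small fee for each swap to a more-preferred option, and so can be led around the cycle until its wealth is gone.

# A generic money-pump sketch (an illustrative textbook example, not taken
# from the paper): an agent with cyclic strict preferences A > B > C > A
# pays a small fee for every swap to an option it prefers, so a trader who
# offers the swaps in a cycle can drain the agent's wealth.

prefers = {"C": "B", "B": "A", "A": "C"}  # cyclic: A > B > C > A
fee = 1.0                                  # fee the agent pays per swap

holding, wealth = "C", 10.0
while wealth >= fee:
    holding = prefers[holding]  # the agent accepts each "upgrade"...
    wealth -= fee               # ...and pays the fee every time
print(holding, wealth)  # the agent ends broke, still holding one of the
                        # same three options: every three swaps return it
                        # to "C", strictly poorer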