The structure of critical sets
Walter Bossert (University of Montreal), Susumu Cato (University of Tokyo) and Kohei Kamaga (Sophia University)
GPI Working Paper No. 4-2025
The purpose of this paper is to address some ambiguities and misunderstandings that appear in previous studies of population ethics. In particular, we examine the structure of the intervals employed in assessing the value of adding people to an existing population. Our focus is on critical-band utilitarianism and critical-range utilitarianism, two commonly used population theories that employ intervals, and we show that some previously assumed equivalences fail in general. The possible discrepancies can be traced to the observation that critical bands need not coincide with critical sets. The critical set for a moral quasi-ordering consists of all utility numbers such that adding someone with a utility level in this set yields a distribution that is not comparable to the original (non-augmented) distribution. Critical bands and critical sets coincide only when the critical band is an open interval. In this respect, there is a stark contrast between critical-band utilitarianism and critical-range utilitarianism: the critical set corresponding to a critical-range quasi-ordering always coincides with the interval used to define the requisite quasi-ordering. As a consequence, an often presumed equivalence of critical-band utilitarianism and critical-range utilitarianism is valid only when, again, the critical band and the critical range (and, consequently, the requisite critical sets) are given by the same open interval.
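The contrast described in the abstract can be illustrated with a small numerical sketch. The definitions below are assumed textbook-style forms of the two theories, not quoted from the paper itself: critical-band utilitarianism ranks one distribution weakly above another just in case its critical-level utilitarian value is weakly larger for every critical level c in the band, while critical-range utilitarianism requires a strict (or exact) comparison to hold for every c in the range and declares the distributions incomparable otherwise. The interval [2, 4], the one-person population, and the utility values are purely hypothetical choices for illustration.

```python
# Sketch: why a CLOSED critical band yields an OPEN critical set, while a
# critical range reproduces its own interval. Definitions are assumed
# textbook-style forms, not taken from the paper itself.

def total(dist, c):
    """Critical-level utilitarian value of a utility distribution at level c."""
    return sum(u - c for u in dist)

def cbu_comparable(X, Y, lo, hi):
    """Critical-band utilitarianism: X and Y are comparable iff one is weakly
    at least as good as the other for EVERY critical level c in [lo, hi].
    The value difference is affine in c, so checking the endpoints suffices."""
    diffs = [total(X, c) - total(Y, c) for c in (lo, hi)]
    weakly_better = all(d >= 0 for d in diffs)
    weakly_worse = all(d <= 0 for d in diffs)
    return weakly_better or weakly_worse

def cru_comparable(X, Y, lo, hi):
    """Critical-range utilitarianism: X is better (worse) than Y iff it is
    STRICTLY better (worse) for every c in [lo, hi], and equally good iff
    the values are equal for every c; otherwise the two are incomparable."""
    diffs = [total(X, c) - total(Y, c) for c in (lo, hi)]
    strictly_better = all(d > 0 for d in diffs)
    strictly_worse = all(d < 0 for d in diffs)
    equal = all(d == 0 for d in diffs)
    return strictly_better or strictly_worse or equal

# One existing person at utility 5; add a newcomer at utility u. The critical
# set collects the values of u for which the augmented distribution is
# incomparable to the original one.
lo, hi = 2.0, 4.0          # hypothetical interval [2, 4] for band and range
original = [5.0]
for u in (2.0, 3.0, 4.0):
    augmented = original + [u]
    print(u,
          "CBU incomparable:", not cbu_comparable(augmented, original, lo, hi),
          "CRU incomparable:", not cru_comparable(augmented, original, lo, hi))
```

Under these assumed definitions, the band endpoints u = 2 and u = 4 remain comparable under critical-band utilitarianism (at u = 2 the augmented distribution is weakly worse for every c, at u = 4 weakly better), so the critical set is the open interval (2, 4) even though the band is closed. Under critical-range utilitarianism the endpoints are incomparable, so the critical set is [2, 4] itself, matching the abstract's claim that the critical set always coincides with the defining range.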