The structure of critical sets

Walter Bossert (University of Montreal), Susumu Cato (University of Tokyo) and Kohei Kamaga (Sophia University)

GPI Working Paper No. 4-2025

The purpose of this paper is to address some ambiguities and misunderstandings that appear in previous studies of population ethics. In particular, we examine the structure of the intervals employed in assessing the value of adding people to an existing population. Our focus is on critical-band utilitarianism and critical-range utilitarianism, two commonly used population theories that employ intervals, and we show that some previously assumed equivalences do not hold in general. The possible discrepancies can be attributed to the observation that critical bands need not coincide with critical sets. The critical set for a moral quasi-ordering consists of all utility numbers such that adding someone with a utility level in this set leads to a distribution that is not comparable to the original (non-augmented) distribution. Critical bands and critical sets coincide only when the critical band is an open interval. In this respect, there is a stark contrast between critical-band utilitarianism and critical-range utilitarianism: the critical set that corresponds to a critical-range quasi-ordering always coincides with the interval used to define the requisite quasi-ordering. As a consequence, an often-presumed equivalence of critical-band utilitarianism and critical-range utilitarianism is valid only if, again, the critical band and the critical range (and, consequently, the requisite critical sets) are given by the same open interval.

Other working papers

Shutdownable Agents through POST-Agency – Elliott Thornley (Global Priorities Institute, University of Oxford)

Many fear that future artificial agents will resist shutdown. I present an idea – the POST-Agents Proposal – for ensuring that doesn’t happen. I propose that we train agents to satisfy Preferences Only Between Same-Length Trajectories (POST). I then prove that POST – together with other conditions – implies Neutrality+: the agent maximizes expected utility, ignoring the probability distribution over trajectory-lengths. I argue that Neutrality+ keeps agents shutdownable and allows them to be useful.

Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)

Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.

A Fission Problem for Person-Affecting Views – Elliott Thornley (Global Priorities Institute, University of Oxford)

On person-affecting views in population ethics, the moral import of a person’s welfare depends on that person’s temporal or modal status. These views typically imply that – all else equal – we’re never required to create extra people, or to act in ways that increase the probability of extra people coming into existence. In this paper, I use Parfit-style fission cases to construct a dilemma for person-affecting views: either they forfeit their seeming-advantages and face fission analogues…