The structure of critical sets

Walter Bossert (University of Montreal), Susumu Cato (University of Tokyo) and Kohei Kamaga (Sophia University)

GPI Working Paper No. 4-2025

The purpose of this paper is to address some ambiguities and misunderstandings that appear in previous studies of population ethics. In particular, we examine the structure of the intervals that are employed in assessing the value of adding people to an existing population. Our focus is on critical-band utilitarianism and critical-range utilitarianism, two commonly used population theories that employ intervals, and we show that some previously assumed equivalences are not true in general. The possible discrepancies can be attributed to the observation that critical bands need not be equal to critical sets. The critical set for a moral quasi-ordering consists of all utility levels such that adding someone at a level in this set leads to a distribution that is not comparable to the original (non-augmented) distribution. Critical bands and critical sets coincide only when the critical band is an open interval. In this respect, there is a stark contrast between critical-band utilitarianism and critical-range utilitarianism: the critical set that corresponds to a critical-range quasi-ordering always coincides with the interval used to define the requisite quasi-ordering. As a consequence, an often-presumed equivalence of critical-band utilitarianism and critical-range utilitarianism is not valid unless, again, the critical band and the critical range (and, consequently, the requisite critical sets) are given by the same open interval.
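
To illustrate the distinction the abstract turns on, the following is a minimal formal sketch in LaTeX. It is not drawn from the paper itself: the definitions are paraphrased from the standard formulations of these theories in the population-ethics literature, and the notation (a utility distribution u, an added person at utility level c, and a band or range B = [a, b], taken here to be closed so that the discrepancy is visible) is assumed for illustration.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Sketch (assumed notation): one person at utility level c is added to
% an existing distribution u; the band/range is B = [a,b], closed here
% so that the band/set discrepancy is visible.

\paragraph{Critical-band utilitarianism} is standardly obtained as the
intersection of the critical-level utilitarian orderings with critical
levels in $B$, so that
\begin{align*}
(u,c) \succsim u &\iff c \ge \alpha \text{ for all } \alpha \in B
  \iff c \ge b,\\
u \succsim (u,c) &\iff c \le \alpha \text{ for all } \alpha \in B
  \iff c \le a.
\end{align*}
The critical set (the levels $c$ at which $(u,c)$ and $u$ are
incomparable) is therefore the \emph{open} interval $(a,b)$, even
though the band $B = [a,b]$ is closed.

\paragraph{Critical-range utilitarianism} instead imposes
incomparability on the range directly:
\begin{align*}
(u,c) \succ u &\iff c > b, &
u \succ (u,c) &\iff c < a,
\end{align*}
with $(u,c)$ and $u$ incomparable for every $c \in [a,b]$. Here the
critical set equals $B$ itself, whether or not $B$ is open, so the two
theories agree only when $B$ is an open interval.

\end{document}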

Other working papers

Simulation expectation – Teruji Thomas (Global Priorities Institute, University of Oxford)

I present a new argument for the claim that I’m much more likely to be a person living in a computer simulation than a person living at the ground level of reality. I consider whether this argument can be blocked by an externalist view of what my evidence supports, and I urge caution against the easy assumption that actually finding lots of simulations would increase the odds that I myself am in one.

In search of a biological crux for AI consciousness – Bradford Saad (Global Priorities Institute, University of Oxford)

Whether AI systems could be conscious is often thought to turn on whether consciousness is closely linked to biology. The rough thought is that if consciousness is closely linked to biology, then AI consciousness is impossible, and if consciousness is not closely linked to biology, then AI consciousness is possible—or, at any rate, it’s more likely to be possible. A clearer specification of the kind of link between consciousness and biology that is crucial for the possibility of AI consciousness would help organize inquiry into…

The epistemic challenge to longtermism – Christian Tarsney (Global Priorities Institute, University of Oxford)

Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism…