Population ethical intuitions

Lucius Caviola (Harvard University), David Althaus (Center on Long-Term Risk), Andreas Mogensen (Global Priorities Institute, University of Oxford), and Geoffrey Goodwin (University of Pennsylvania)

GPI Working Paper No. 3-2024, published in Cognition

Is humanity's existence worthwhile? If so, where should the human species be headed in the future? In part, answering these questions requires us to morally evaluate the (potential) human population in terms of its size and aggregate welfare. This assessment lies at the heart of population ethics. Across nine experiments (N = 5776), we investigated three questions about how people aggregate welfare across individuals: (1) Do they weigh happiness and suffering symmetrically? (2) Do they focus more on the average or the total welfare of a given population? (3) Do they account only for currently existing lives, or also for lives that could yet exist? We found, first, that participants believed more happy than unhappy people were needed for a population to be net positive (Studies 1a-c). Second, participants preferred both populations with greater total welfare and populations with greater average welfare (Studies 3a-d). Their focus on average welfare even led them (remarkably) to judge it preferable to add new suffering people to an already miserable world, as long as doing so increased average welfare. When prompted to reflect, however, participants' preference for the population with greater total welfare became stronger. Third, participants did not consider the creation of new people morally neutral: they viewed it as good to create new happy people and as bad to create new unhappy people (Studies 2a-b). Our findings have implications for moral psychology, philosophy and global priority setting.

Other working papers

Towards shutdownable agents via stochastic choice – Elliott Thornley (Global Priorities Institute, University of Oxford), Alexander Roman (New College of Florida), Christos Ziakas (Independent), Leyton Ho (Brown University), and Louis Thomson (University of Oxford)

Some worry that advanced artificial agents may resist being shut down. The Incomplete Preferences Proposal (IPP) is an idea for ensuring that this does not happen. A key part of the IPP is using a novel ‘Discounted Reward for Same-Length Trajectories (DReST)’ reward function to train agents to (1) pursue goals effectively conditional on each trajectory-length (be ‘USEFUL’), and (2) choose stochastically between different trajectory-lengths (be ‘NEUTRAL’ about trajectory-lengths). In this paper, we propose…

On two arguments for Fanaticism – Jeffrey Sanford Russell (University of Southern California)

Should we make significant sacrifices to ever-so-slightly lower the chance of extremely bad outcomes, or to ever-so-slightly raise the chance of extremely good outcomes? Fanaticism says yes: for every bad outcome, there is a tiny chance of an extreme disaster that is even worse, and for every good outcome, there is a tiny chance of an enormous good that is even better.

The freedom of future people – Andreas T Schmidt (University of Groningen)

What happens to liberal political philosophy if we consider the freedom not only of present people but also of future people? In this article, I explore the case for long-term liberalism: freedom should be a central goal, and we should often be particularly concerned with effects on long-term future distributions of freedom. I provide three arguments. First, liberals should be long-term liberals: liberal arguments for valuing freedom give us reason to be (particularly) concerned with future freedom…