Population ethical intuitions

Lucius Caviola (Harvard University), David Althaus (Center on Long-Term Risk), Andreas Mogensen (Global Priorities Institute, University of Oxford) and Geoffrey Goodwin (University of Pennsylvania)

GPI Working Paper No. 3-2024, published in Cognition

Is humanity's existence worthwhile? If so, where should the human species be headed in the future? In part, answering these questions requires us to morally evaluate the (potential) human population in terms of its size and aggregate welfare. This assessment lies at the heart of population ethics. Our investigation across nine experiments (N = 5,776) aimed to answer three questions about how people aggregate welfare across individuals: (1) Do they weigh happiness and suffering symmetrically? (2) Do they focus more on the average or the total welfare of a given population? (3) Do they account only for currently existing lives, or also for lives that could yet exist? We found, first, that participants believed more happy than unhappy people were needed for the population as a whole to be net positive (Studies 1a-c). Second, participants preferred both populations with greater total welfare and populations with greater average welfare (Studies 3a-d). Their focus on average welfare even led them (remarkably) to judge it preferable to add new suffering people to an already miserable world, as long as doing so increased average welfare. When prompted to reflect, however, participants' preference for the population with greater total welfare became stronger. Third, participants did not consider the creation of new people to be morally neutral: they viewed it as good to create new happy people and as bad to create new unhappy people (Studies 2a-b). Our findings have implications for moral psychology, philosophy, and global priority setting.
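To make the tension between the two aggregation rules concrete, here is a minimal sketch of the pattern the abstract describes. The welfare numbers are purely hypothetical (not taken from the studies) and simply show how adding new suffering people to an already miserable population can raise its average welfare while lowering its total welfare:

```python
# Hypothetical illustration (not data from the paper): adding mildly unhappy
# people to a miserable population raises average welfare but lowers total welfare.

def total_welfare(welfares):
    """Sum of individual welfare levels (the 'total' view)."""
    return sum(welfares)

def average_welfare(welfares):
    """Mean of individual welfare levels (the 'average' view)."""
    return sum(welfares) / len(welfares)

# A miserable world: ten people, each at welfare -10.
world = [-10] * 10
# The same world after adding ten new people who also suffer, but less (-1 each).
expanded = world + [-1] * 10

print(total_welfare(world), average_welfare(world))        # -100, -10.0
print(total_welfare(expanded), average_welfare(expanded))  # -110, -5.5
# The average view favours the expanded world; the total view does not.
```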

Other working papers

Strong longtermism and the challenge from anti-aggregative moral views – Karri Heikkinen (University College London)

Greaves and MacAskill (2019) argue for strong longtermism, according to which, in a wide class of decision situations, the option that is ex ante best, and the one we ex ante ought to choose, is the option that makes the very long-run future go best. One important aspect of their argument is the claim that strong longtermism is compatible with a wide range of ethical assumptions, including plausible non-consequentialist views. In this essay, I challenge this claim…

Economic growth under transformative AI – Philip Trammell (Global Priorities Institute, Oxford University) and Anton Korinek (University of Virginia)

Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases the production of output, for example via increases in capital’s substitutability for labor…

How important is the end of humanity? Lay people prioritize extinction prevention but not above all other societal issues. – Matthew Coleman (Northeastern University), Lucius Caviola (Global Priorities Institute, University of Oxford) et al.

Human extinction would mean the deaths of eight billion people and the end of humanity’s achievements, culture, and future potential. On several ethical views, extinction would be a terrible outcome. How do people think about human extinction? And how much do they prioritize preventing extinction over other societal issues? Across six empirical studies (N = 2,541; U.S. and China) we find that people consider extinction prevention a global priority and deserving of greatly increased societal resources. …