How many lives does the future hold?

Toby Newberry (Future of Humanity Institute, University of Oxford)

GPI Technical Report No. T2-2021

The total number of people who have ever lived, across the entire human past, has been estimated at around 100 billion. The total number of people who will ever live, across the entire human future, is unknown, but not immune to the tools of rational inquiry. This report estimates the expected size of the future, as measured in units of ‘human-life-equivalents’ (henceforth: ‘lives’). The task is a daunting one, and the aim here is not to be the final word on this subject. Instead, this report aspires to two more modest aims...

Other working papers

Ethical Consumerism – Philip Trammell (Global Priorities Institute and Department of Economics, University of Oxford)

I study a static production economy in which consumers have not only preferences over their own consumption but also external, or “ethical”, preferences over the supply of each good. Though existing work on the implications of external preferences assumes price-taking, I show that ethical consumers generically prefer not to act even approximately as price-takers. I therefore introduce a near-Nash equilibrium concept that generalizes the near-Nash equilibria found in the literature on strategic foundations of general equilibrium…

Consciousness makes things matter – Andrew Y. Lee (University of Toronto)

This paper argues that phenomenal consciousness is what makes an entity a welfare subject, or the kind of thing that can be better or worse off. I develop and motivate this view, and then defend it from objections concerning death, non-conscious entities that have interests (such as plants), and conscious subjects that necessarily have welfare level zero. I also explain how my theory of welfare subjects relates to experientialist and anti-experientialist theories of welfare goods.

Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)

Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.