How many lives does the future hold?
Toby Newberry (Future of Humanity Institute, University of Oxford)
GPI Technical Report No. T2-2021
The total number of people who have ever lived, across the entire human past, has been estimated at around 100 billion. The total number of people who will ever live, across the entire human future, is unknown, but not immune to the tools of rational inquiry. This report estimates the expected size of the future, as measured in units of ‘human-life-equivalents’ (henceforth: ‘lives’). The task is a daunting one, and the aim here is not to be the final word on this subject. Instead, this report aspires to two more modest aims...
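To make the shape of such an expected-value estimate concrete, here is a minimal sketch in Python. The scenario names, probabilities, and population figures below are purely illustrative assumptions for exposition; they are not figures or scenarios taken from the report itself.

```python
# Illustrative sketch only: the scenarios, probabilities, and life counts
# below are hypothetical placeholders, not estimates from the report.

# Each scenario pairs a probability with a total number of future lives.
scenarios = {
    "near-term extinction": (0.1, 1e10),     # hypothetical values
    "long terrestrial future": (0.6, 1e15),  # hypothetical values
    "space settlement": (0.3, 1e25),         # hypothetical values
}

# Sanity check: the scenarios are meant to be exhaustive and exclusive,
# so their probabilities should sum to 1.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

# Expected size of the future = probability-weighted sum over scenarios.
expected_lives = sum(p * n for p, n in scenarios.values())

print(f"Expected future lives: {expected_lives:.2e}")
```

One feature this toy calculation already illustrates: when the scenarios span many orders of magnitude, the expected value is dominated by the largest-population scenario even at modest probability.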