How many lives does the future hold?

Toby Newberry (Future of Humanity Institute, University of Oxford)

GPI Technical Report No. T2-2021

The total number of people who have ever lived, across the entire human past, has been estimated at around 100 billion. The total number of people who will ever live, across the entire human future, is unknown, but not immune to the tools of rational inquiry. This report estimates the expected size of the future, as measured in units of ‘human-life-equivalents’ (henceforth: ‘lives’). The task is a daunting one, and this report does not pretend to be the final word on the subject. Instead, it aspires to two more modest aims...
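
Though the report's own calculations are not excerpted here, the phrase ‘expected size of the future’ has a standard formal reading, which a minimal sketch may help fix. Assuming the future is partitioned into mutually exclusive scenarios (the probabilities and populations below are purely illustrative, not Newberry's figures), the expectation is a probability-weighted sum:

\[
\mathbb{E}[N] \;=\; \sum_{i} p_i \, N_i ,
\]

where p_i is the probability of scenario i and N_i is the number of lives that scenario contains. With two hypothetical scenarios, say near-term extinction (p = 0.5, N = 10^11 lives) and a long-lived civilisation (p = 0.5, N = 10^15 lives), the expectation is 0.5 × 10^11 + 0.5 × 10^15 ≈ 5 × 10^14 lives. As the toy case illustrates, an expectation of this kind is typically dominated by the largest scenarios, however uncertain they may be.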

Other working papers

In search of a biological crux for AI consciousness – Bradford Saad (Global Priorities Institute, University of Oxford)

Whether AI systems could be conscious is often thought to turn on whether consciousness is closely linked to biology. The rough thought is that if consciousness is closely linked to biology, then AI consciousness is impossible, and if consciousness is not closely linked to biology, then AI consciousness is possible—or, at any rate, it’s more likely to be possible. A clearer specification of the kind of link between consciousness and biology that is crucial for the possibility of AI consciousness would help organize inquiry into…

Time discounting, consistency and special obligations: a defence of Robust Temporalism – Harry R. Lloyd (Yale University)

This paper defends the claim that mere temporal proximity always and without exception strengthens certain moral duties, including the duty to save – call this view Robust Temporalism. Although almost all other moral philosophers dismiss Robust Temporalism out of hand, I argue that it is prima facie intuitively plausible, and that it is analogous to a view about special obligations that many philosophers already accept…

AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)

Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …