Are we living at the hinge of history?
William MacAskill (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 12-2020, published in Ethics and Existence: The Legacy of Derek Parfit
In the final pages of On What Matters, Volume II, Derek Parfit comments: ‘We live during the hinge of history... If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period... What now matters most is that we avoid ending human history.’ This passage echoes Parfit's comment, in Reasons and Persons, that ‘the next few centuries will be the most important in human history’.
But is the claim that we live at the hinge of history true? The argument of this paper is that it is not. The paper first suggests a way of making the hinge of history claim precise and action-relevant, in the context of the question of whether altruists should try to do good now or instead invest their resources in order to have more of an impact later on. Given this understanding, there are two worldviews, the Time of Perils and Value Lock-in views, on which we are indeed living during, or about to enter, the hinge of history.
This paper then presents two arguments against the hinge of history claim: first, an argument that it is a priori extremely unlikely to be true, and that the evidence in its favour is not strong enough to overcome this a priori unlikelihood; second, an inductive argument that our ability to influence events has been increasing over time, and that we should expect this trend to continue into the future. The paper concludes by considering two additional arguments in favour of the claim, and suggests that though they have some merit, they are not sufficient for us to think that the present time is the most important time in the history of civilisation.
Other working papers
AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)
Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …
How effective is (more) money? Randomizing unconditional cash transfer amounts in the US – Ania Jaroszewicz (University of California San Diego), Oliver P. Hauser (University of Exeter), Jon M. Jachimowicz (Harvard Business School) and Julian Jamison (University of Oxford and University of Exeter)
We randomized 5,243 Americans in poverty to receive a one-time unconditional cash transfer (UCT) of $2,000 (two months’ worth of total household income for the median participant), $500 (half a month’s income), or nothing. We measured the effects of the UCTs on participants’ financial well-being, psychological well-being, cognitive capacity, and physical health through surveys administered one week, six weeks, and 15 weeks later. While bank data show that both UCTs increased expenditures, we find no evidence that…
In search of a biological crux for AI consciousness – Bradford Saad (Global Priorities Institute, University of Oxford)
Whether AI systems could be conscious is often thought to turn on whether consciousness is closely linked to biology. The rough thought is that if consciousness is closely linked to biology, then AI consciousness is impossible, and if consciousness is not closely linked to biology, then AI consciousness is possible—or, at any rate, it’s more likely to be possible. A clearer specification of the kind of link between consciousness and biology that is crucial for the possibility of AI consciousness would help organize inquiry into…