Are we living at the hinge of history?
William MacAskill (Global Priorities Institute, Oxford University)
GPI Working Paper No. 12-2020, published in Ethics and Existence: The Legacy of Derek Parfit
In the final pages of On What Matters, Volume II, Derek Parfit comments: ‘We live during the hinge of history... If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period... What now matters most is that we avoid ending human history.’ This passage echoes Parfit's comment, in Reasons and Persons, that ‘the next few centuries will be the most important in human history’.
But is the claim that we live at the hinge of history true? The argument of this paper is that it is not. The paper first suggests a way of making the hinge of history claim precise and action-relevant in the context of the question of whether altruists should try to do good now, or invest their resources in order to have more of an impact later on. Given this understanding, there are two worldviews, the Time of Perils and Value Lock-in views, on which we are indeed living during, or about to enter, the hinge of history.
This paper then presents two arguments against the hinge of history claim: first, an argument that it is a priori extremely unlikely to be true, and that the evidence in its favour is not strong enough to overcome this a priori unlikelihood; second, an inductive argument that our ability to influence events has been increasing over time, and that we should expect this trend to continue into the future. The paper concludes by considering two additional arguments in favour of the claim, and suggests that, though they have some merit, they are not sufficient for us to think that the present time is the most important time in the history of civilisation.
Other working papers
Non-additive axiologies in large worlds – Christian Tarsney and Teruji Thomas (Global Priorities Institute, Oxford University)
Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say ‘yes’, but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say ‘no’…
Economic growth under transformative AI – Philip Trammell (Global Priorities Institute, Oxford University) and Anton Korinek (University of Virginia)
Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital’s substitutability for labor…
The paralysis argument – William MacAskill, Andreas Mogensen (Global Priorities Institute, Oxford University)
Given plausible assumptions about the long-run impact of our everyday actions, we show that standard non-consequentialist constraints on doing harm entail that we should try to do as little as possible in our lives. We call this the Paralysis Argument. After laying out the argument, we consider and respond to…