Are we living at the hinge of history?
William MacAskill (Global Priorities Institute, Oxford University)
GPI Working Paper No. 12-2020, published in Ethics and Existence: The Legacy of Derek Parfit
In the final pages of On What Matters, Volume II, Derek Parfit comments: ‘We live during the hinge of history... If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period... What now matters most is that we avoid ending human history.’ This passage echoes Parfit's comment, in Reasons and Persons, that ‘the next few centuries will be the most important in human history’.
But is the claim that we live at the hinge of history true? This paper argues that it is not. It first suggests a way of making the hinge of history claim precise and action-relevant, in the context of the question of whether altruists should try to do good now or instead invest their resources in order to have more of an impact later on. Given this understanding, there are two worldviews (the Time of Perils and Value Lock-in views) on which we are indeed living during, or about to enter, the hinge of history.
The paper then presents two arguments against the hinge of history claim: first, that it is a priori extremely unlikely to be true, and that the evidence in its favour is not strong enough to overcome this a priori unlikelihood; second, an inductive argument that our ability to influence events has been increasing over time, and that we should expect this trend to continue into the future. The paper concludes by considering two additional arguments in favour of the claim, and suggests that, though they have some merit, they are not sufficient for us to think that the present time is the most important time in the history of civilisation.