Are we living at the hinge of history?
William MacAskill (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 12-2020, published in Ethics and Existence: The Legacy of Derek Parfit
In the final pages of On What Matters, Volume II, Derek Parfit comments: ‘We live during the hinge of history... If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period... What now matters most is that we avoid ending human history.’ This passage echoes Parfit's comment, in Reasons and Persons, that ‘the next few centuries will be the most important in human history’.
But is the claim that we live at the hinge of history true? The argument of this paper is that it is not. The paper first suggests a way of making the hinge of history claim precise and action-relevant in the context of the question of whether altruists should try to do good now, or invest their resources in order to have more of an impact later on. Given this understanding, there are two worldviews (the Time of Perils and Value Lock-in views) on which we are indeed living during, or about to enter, the hinge of history.
This paper then presents two arguments against the hinge of history claim: first, that it is a priori extremely unlikely to be true, and that the evidence in its favour is not strong enough to overcome this a priori unlikelihood; second, an inductive argument that our ability to influence events has been increasing over time, and we should expect that trend to continue into the future. The paper concludes by considering two additional arguments in favour of the claim, and suggests that though they have some merit, they are not sufficient for us to think that the present time is the most important time in the history of civilisation.
Other working papers
Longtermism in an Infinite World – Christian J. Tarsney (Population Wellbeing Initiative, University of Texas at Austin) and Hayden Wilkinson (Global Priorities Institute, University of Oxford)
The case for longtermism depends on the vast potential scale of the future. But that same vastness also threatens to undermine the case for longtermism: If the future contains infinite value, then many theories of value that support longtermism (e.g., risk-neutral total utilitarianism) seem to imply that no available action is better than any other. And some strategies for avoiding this conclusion (e.g., exponential time discounting) yield views that…
Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)
A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…
Meaning, medicine and merit – Andreas Mogensen (Global Priorities Institute, University of Oxford)
Given the inevitability of scarcity, should public institutions ration healthcare resources so as to prioritize those who contribute more to society? Intuitively, we may feel that this would be somehow inegalitarian. I argue that the egalitarian objection to prioritizing treatment on the basis of patients’ usefulness to others is best thought…