Are we living at the hinge of history?

William MacAskill (Global Priorities Institute, Oxford University)

GPI Working Paper No. 12-2020, published in Ethics and Existence: The Legacy of Derek Parfit

In the final pages of On What Matters, Volume II, Derek Parfit comments: ‘We live during the hinge of history... If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period... What now matters most is that we avoid ending human history.’ This passage echoes Parfit's comment, in Reasons and Persons, that ‘the next few centuries will be the most important in human history’.

But is the claim that we live at the hinge of history true? The argument of this paper is that it is not. The paper first suggests a way of making the hinge of history claim precise and action-relevant, in the context of the question of whether altruists should try to do good now or instead invest their resources in order to have a greater impact later. Given this understanding, there are two worldviews, the Time of Perils view and the Value Lock-in view, on which we are indeed living during, or about to enter, the hinge of history.

The paper then presents two arguments against the hinge of history claim: first, that it is a priori extremely unlikely to be true, and the evidence in its favour is not strong enough to overcome this a priori unlikelihood; second, an inductive argument that our ability to influence events has been increasing over time and should be expected to keep increasing, so that future people will be better placed than we are to shape the long-run future. The paper concludes by considering two additional arguments in favour of the claim, suggesting that, though they have some merit, they are not sufficient to establish that the present is the most important time in the history of civilisation.
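The structure of the first, Bayesian argument can be illustrated with a small worked example (the numbers here are illustrative assumptions, not figures from the paper). If civilisation were to persist for $N$ centuries, a uniform prior assigns probability $1/N$ to the hypothesis $H$ that the present century is the most influential one. Updating on evidence $E$ gives

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,\bigl(1 - P(H)\bigr)}.$$

With $N = 10{,}000$ (a million-year future) and evidence a hundred times more likely on $H$ than on $\neg H$, the posterior is still only about $100/(100 + 9{,}999) \approx 1\%$; this is the sense in which very strong evidence would be needed to overcome the a priori unlikelihood.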
