The Significance, Persistence, Contingency Framework
William MacAskill, Teruji Thomas (Global Priorities Institute, University of Oxford) and Aron Vallinder (Forethought Foundation for Global Priorities Research)
GPI Technical Report No. T1-2022
The world, considered from beginning to end, combines many different features, or states of affairs, that contribute to its value. The value of each feature can be factored into its significance—its average value per unit time—and its persistence—how long it lasts. Sometimes, though, we want to ask a further question: how much of the feature’s value can be attributed to a particular agent’s decision at a particular point in time (or to some other originating event)? In other words, to what extent is the feature’s value contingent on the agent’s choice? For this, we must also look at the counterfactual: how would things have turned out otherwise?
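The decomposition described above can be sketched in symbols. This is a minimal illustration of the abstract's factoring, not notation taken from the paper itself: here $f$ is a feature of the world, $s_f$ its significance (average value per unit time), $p_f$ its persistence (duration), and $a$, $a'$ the agent's actual and counterfactual choices.

```latex
% A feature's value factors into significance times persistence:
V(f) = s_f \cdot p_f
% Contingency is read off the counterfactual: how much of f's value
% depends on the agent having chosen a rather than the alternative a'.
\mathrm{Cont}(f, a) = V(f \mid a) - V(f \mid a')
```

On this sketch, a feature can be highly significant and persistent yet barely contingent on a given choice, if it would have come about in much the same way under the alternative.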