The Significance, Persistence, Contingency Framework

William MacAskill, Teruji Thomas (Global Priorities Institute, University of Oxford) and Aron Vallinder (Forethought Foundation for Global Priorities Research)

GPI Technical Report No. T1-2022

The world, considered from beginning to end, combines many different features, or states of affairs, that contribute to its value. The value of each feature can be factored into its significance—its average value per unit time—and its persistence—how long it lasts. Sometimes, though, we want to ask a further question: how much of the feature’s value can be attributed to a particular agent’s decision at a particular point in time (or to some other originating event)? In other words, to what extent is the feature’s value contingent on the agent’s choice? For this, we must also look at the counterfactual: how would things have turned out otherwise?
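The decomposition described above can be illustrated with a small numerical sketch. The function and figures below are invented for exposition and are not drawn from the paper; they simply show how a feature's value factors into significance times persistence, and how contingency emerges from comparing the actual outcome with a counterfactual one.

```python
# Illustrative sketch of the significance-persistence-contingency
# decomposition. All numbers here are hypothetical.

def feature_value(significance, persistence):
    """Value of a feature: average value per unit time x duration."""
    return significance * persistence

# Actual world: a feature with significance 2.0 (value per year)
# that persists for 1000 years.
actual = feature_value(2.0, 1000)         # 2000.0

# Counterfactual world: absent the agent's choice, the feature would
# have had lower significance and ended sooner.
counterfactual = feature_value(1.5, 400)  # 600.0

# Contingency: how much of the feature's value is attributable to
# the agent's choice, i.e. the actual-minus-counterfactual difference.
contingency = actual - counterfactual     # 1400.0
```

On this toy picture, most of the feature's value (1400 of 2000 units) is contingent on the choice, because the choice both raised the feature's significance and extended its persistence.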

Other working papers

Choosing the future: Markets, ethics and rapprochement in social discounting – Antony Millner (University of California, Santa Barbara) and Geoffrey Heal (Columbia University)

This paper provides a critical review of the literature on choosing social discount rates (SDRs) for public cost-benefit analysis. We discuss two dominant approaches, the first based on market prices, and the second based on intertemporal ethics. While both methods have attractive features, neither is immune to criticism. …

AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)

Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …

Against the singularity hypothesis – David Thorstad (Global Priorities Institute, University of Oxford)

The singularity hypothesis is a radical hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on scientifically implausible growth assumptions. …