The unexpected value of the future 

Hayden Wilkinson (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 17-2022

Consider longtermism: the view that the morally best options available to us, in many important practical decisions, are those that provide the greatest improvements in the (ex ante) value of the far future. Many who accept longtermism do so because they accept an impartial, aggregative theory of moral betterness in conjunction with expected value theory. But such a combination of views implies absurdity if the (impartial, aggregated) value of humanity’s future is undefined—if, e.g., the probability distribution over possible values of the future resembles the Pasadena game, or a Cauchy distribution. In this paper, I argue that our evidence requires us to adopt such a probability distribution—indeed, a distribution that cannot be evaluated even by the extensions of expected value theory that have so far been proposed. I propose a new method of extending expected value theory, which allows us to deal with this distribution and to salvage the case for longtermism. I also consider how risk-averse decision theories might deal with such a case, and offer a surprising argument in favour of risk aversion in moral decision-making.
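
As a minimal illustration of the kind of undefinedness at issue (not part of the paper itself): in the Pasadena game, a fair coin is tossed until it first lands heads, and heads on toss n (probability 2^-n) pays (-1)^(n-1) * 2^n / n. Each term of the expectation is then (-1)^(n-1)/n, a conditionally convergent series, so by Riemann's rearrangement theorem the sum depends on the order in which outcomes are tallied and no expected value is defined. The Python sketch below (function names are illustrative, not drawn from the paper) shows the natural ordering of terms approaching ln 2 while a greedy rearrangement of the very same terms approaches an arbitrary target; a Cauchy distribution is problematic for the analogous reason that the integral defining its mean diverges.

    import math

    def pasadena_mean_natural(num_terms):
        # Expected-value terms in their natural order: probability 2**-n
        # times payoff (-1)**(n-1) * 2**n / n, i.e. (-1)**(n-1) / n.
        return sum((-1) ** (n - 1) / n for n in range(1, num_terms + 1))

    def pasadena_mean_rearranged(num_terms, target):
        # Greedy Riemann rearrangement of the same terms: take positive
        # terms (odd n) while at or below the target, negative terms
        # (even n) while above it. Partial sums converge to the target.
        positives = (1.0 / n for n in range(1, 10 ** 9, 2))
        negatives = (-1.0 / n for n in range(2, 10 ** 9, 2))
        total = 0.0
        for _ in range(num_terms):
            total += next(positives) if total <= target else next(negatives)
        return total

    print(pasadena_mean_natural(10 ** 6))          # approx. 0.693147
    print(math.log(2))                             # 0.693147...
    print(pasadena_mean_rearranged(10 ** 6, 5.0))  # approx. 5.0

Since no ordering of the outcomes is privileged, expected value theory assigns the game no value at all; this is the sort of undefinedness the abstract attributes to the value of humanity's future.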

Other working papers

On the desire to make a difference – Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas (Global Priorities Institute, University of Oxford)

True benevolence is, most fundamentally, a desire that the world be better. It is natural and common, however, to frame thinking about benevolence indirectly, in terms of a desire to make a difference to how good the world is. This would be an innocuous shift if desires to make a difference were extensionally equivalent to desires that the world be better. This paper shows that at least on some common ways of making a “desire to make a difference” precise, this extensional equivalence fails.

Against the singularity hypothesis – David Thorstad (Global Priorities Institute, University of Oxford)

The singularity hypothesis is a radical hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on scientifically implausible growth assumptions. …

Heuristics for clueless agents: how to get away with ignoring what matters most in ordinary decision-making – David Thorstad and Andreas Mogensen (Global Priorities Institute, University of Oxford)

Even our most mundane decisions have the potential to significantly impact the long-term future, but we are often clueless about what this impact may be. In this paper, we aim to characterize and solve two problems raised by recent discussions of cluelessness, which we term the Problem of Decision Paralysis and the Problem of Decision-Making Demandingness. After reviewing and rejecting existing solutions to both problems, we argue that the way forward is to be found in the distinction between procedural and substantive rationality…