The unexpected value of the future 

Hayden Wilkinson (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 17-2022

Consider longtermism: the view that the morally best options available to us, in many important practical decisions, are those that provide the greatest improvements in the (ex ante) value of the far future. Many who accept longtermism do so because they accept an impartial, aggregative theory of moral betterness in conjunction with expected value theory. But such a combination of views implies absurdity if the (impartial, aggregated) value of humanity’s future is undefined—if, e.g., the probability distribution over possible values of the future resembles the Pasadena game, or a Cauchy distribution. In this paper, I argue that our evidence requires us to adopt such a probability distribution—indeed, a distribution that cannot be evaluated even by the extensions of expected value theory that have so far been proposed. I propose a new method of extending expected value theory, which allows us to deal with this distribution and to salvage the case for longtermism. I also consider how risk-averse decision theories might deal with such a case, and offer a surprising argument in favour of risk aversion in moral decision-making.
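As a sketch of the two examples the abstract invokes (both are standard textbook facts, stated here for illustration, not drawn from the paper's own formalism): the standard Cauchy distribution has no expected value because the positive and negative contributions each diverge, and the Pasadena game's expectation is a conditionally convergent series whose value depends on the order of summation.

```latex
% Standard Cauchy: density and the divergent truncated mean
f(x) = \frac{1}{\pi(1+x^2)}, \qquad
\int_0^T x\, f(x)\, dx = \frac{\ln(1+T^2)}{2\pi} \to \infty
\ \text{as } T \to \infty,
% and by symmetry the integral over (-T, 0] tends to -\infty,
% so E[X] has the form \infty - \infty: undefined
% (equivalently, \int |x| f(x)\, dx diverges).

% Pasadena game: payoff (-1)^{n-1} 2^n / n with probability 2^{-n}
\mathbb{E} = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n},
% conditionally convergent, so by the Riemann rearrangement theorem
% reordering the terms can yield any value (or divergence):
% no order-independent expectation exists.
```

In both cases expected value theory assigns no value to the gamble, which is the kind of situation the paper's extension of expected value theory is meant to handle.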

Other working papers

Philosophical considerations relevant to valuing continued human survival: Conceptual Analysis, Population Axiology, and Decision Theory – Andreas Mogensen (Global Priorities Institute, University of Oxford)

Many think that human extinction would be a catastrophic tragedy, and that we ought to do more to reduce extinction risk. There is less agreement on exactly why. If some catastrophe were to kill everyone, that would obviously be horrific. Still, many think the deaths of billions of people don’t exhaust what would be so terrible about extinction. After all, we can be confident that billions of people are going to die – many horribly and before their time – if humanity does not go extinct. …

Simulation expectation – Teruji Thomas (Global Priorities Institute, University of Oxford)

I present a new argument for the claim that I’m much more likely to be a person living in a computer simulation than a person living in the ground-level of reality. I consider whether this argument can be blocked by an externalist view of what my evidence supports, and I urge caution against the easy assumption that actually finding lots of simulations would increase the odds that I myself am in one.

How important is the end of humanity? Lay people prioritize extinction prevention but not above all other societal issues. – Matthew Coleman (Northeastern University), Lucius Caviola (Global Priorities Institute, University of Oxford) et al.

Human extinction would mean the deaths of eight billion people and the end of humanity’s achievements, culture, and future potential. On several ethical views, extinction would be a terrible outcome. How do people think about human extinction? And how much do they prioritize preventing extinction over other societal issues? Across six empirical studies (N = 2,541; U.S. and China) we find that people consider extinction prevention a global priority and deserving of greatly increased societal resources. …