High risk, low reward: A challenge to the astronomical value of existential risk mitigation

David Thorstad (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 6-2023, published in Philosophy and Public Affairs

Many philosophers defend two claims: the astronomical value thesis that it is astronomically important to mitigate existential risks to humanity, and existential risk pessimism, the claim that humanity faces high levels of existential risk. It is natural to think that existential risk pessimism supports the astronomical value thesis. In this paper, I argue that precisely the opposite is true. Across a range of assumptions, existential risk pessimism significantly reduces the value of existential risk mitigation, so much so that pessimism threatens to falsify the astronomical value thesis. I argue that the best way to reconcile existential risk pessimism with the astronomical value thesis relies on a questionable empirical assumption. I conclude by drawing out philosophical implications of this discussion, including a transformed understanding of the demandingness objection to consequentialism, reduced prospects for ethical longtermism, and a diminished moral importance of existential risk mitigation.
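To see the shape of the worry, here is a toy calculation in the spirit of the abstract (the per-century value v, per-century risk r, and mitigated fraction f are illustrative assumptions introduced here, not notation taken from the paper). If humanity gains value v for each century it survives and faces a constant existential risk r per century, the expected value of the future is

\[ V \;=\; \sum_{n=1}^{\infty} (1-r)^n v \;=\; \frac{(1-r)\,v}{r}, \]

and an intervention that cuts this century's risk from r to (1-f)r raises expected value by

\[ \Bigl[\bigl(1-(1-f)r\bigr) - (1-r)\Bigr]\frac{v}{r} \;=\; fv, \]

a fraction of a single century's value rather than an astronomical sum. On this sketch, greater pessimism (a larger r) only shrinks the expected future V that mitigation protects.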

Other working papers

AI alignment vs AI ethical treatment: Ten challenges – Adam Bradley (Lingnan University) and Bradford Saad (Global Priorities Institute, University of Oxford)

A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching…

On two arguments for Fanaticism – Jeffrey Sanford Russell (University of Southern California)

Should we make significant sacrifices to ever-so-slightly lower the chance of extremely bad outcomes, or to ever-so-slightly raise the chance of extremely good outcomes? Fanaticism says yes: for every bad outcome, there is a tiny chance of extreme disaster that is even worse, and for every good outcome, there is a tiny chance of an enormous good that is even better.
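For a concrete instance of the pattern Fanaticism describes (the numbers are illustrative, not drawn from the paper): under expected-value reasoning, a gamble with probability \(10^{-100}\) of a disaster valued at \(-10^{102}\) has expected value

\[ 10^{-100} \times \bigl(-10^{102}\bigr) \;=\; -100, \]

which is worse than a sure loss of 10; however small the probability, some disaster (or prize) is large enough to dominate.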

Staking our future: deontic long-termism and the non-identity problem – Andreas Mogensen (Global Priorities Institute, University of Oxford)

Greaves and MacAskill argue for axiological longtermism, according to which, in a wide class of decision contexts, the option that is ex ante best is the option that corresponds to the best lottery over histories from t onwards, where t is some date far in the future. They suggest that a stakes-sensitivity argument…