The unexpected value of the future 

Hayden Wilkinson (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 17-2022, forthcoming in Ergo

Various philosophers accept moral views that are impartial, additive, and risk-neutral with respect to betterness. But, if that risk neutrality is spelt out according to expected value theory alone, such views face a dire reductio ad absurdum. If the expected sum of value in humanity’s future is undefined—if, e.g., the probability distribution over possible values of the future resembles the Pasadena game, or a Cauchy distribution—then those views say that no real-world option is ever better than any other. And, as I argue, our evidence plausibly supports such a probability distribution. Indeed, it supports a probability distribution that cannot be evaluated even if we extend expected value theory according to one of several extensions proposed in the literature. Must we therefore reject all impartial, additive, risk-neutral moral theories? It turns out that we need not. I provide a potential solution: by adopting a strong enough extension of expected value theory, we can evaluate that problematic distribution and potentially salvage those moral views.
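To see why such distributions leave expected value undefined, here is a brief illustrative sketch using the standard textbook presentations of the two examples named in the abstract (it is not drawn from the paper itself). In the Pasadena game, a fair coin is tossed until it first lands heads on toss $n$; the payoff is $(-1)^{n-1}\,2^{n}/n$, received with probability $2^{-n}$. The expected-value series is therefore

\[
\sum_{n=1}^{\infty} 2^{-n} \cdot (-1)^{n-1}\,\frac{2^{n}}{n}
\;=\; \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n}
\;=\; 1 - \tfrac{1}{2} + \tfrac{1}{3} - \cdots,
\]

which converges only conditionally: by the Riemann rearrangement theorem, its terms can be reordered to sum to any value whatsoever (or to diverge), so no order-independent expectation exists. Similarly, for a standard Cauchy distribution with density $f(x) = \frac{1}{\pi\,(1+x^{2})}$, the mean

\[
\int_{-\infty}^{\infty} x\,f(x)\,\mathrm{d}x
\]

is undefined, since $\int_{0}^{\infty} \frac{x}{\pi(1+x^{2})}\,\mathrm{d}x$ and $\int_{-\infty}^{0} \frac{x}{\pi(1+x^{2})}\,\mathrm{d}x$ diverge to $+\infty$ and $-\infty$ respectively.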

Other working papers

The long-run relationship between per capita incomes and population size – Maya Eden (University of Zurich) and Kevin Kuruc (Population Wellbeing Initiative, University of Texas at Austin)

The relationship between the human population size and per capita incomes has long been debated. Two competing forces feature prominently in these discussions. On the one hand, a larger population means that limited natural resources must be shared among more people. On the other hand, more people means more innovation and faster technological progress, other things equal. We study a model that features both of these channels. A calibration suggests that, in the long run, (marginal) increases in population would…

Against Anti-Fanaticism – Christian Tarsney (Population Wellbeing Initiative, University of Texas at Austin)

Should you be willing to forego any sure good for a tiny probability of a vastly greater good? Fanatics say you should, anti-fanatics say you should not. Anti-fanaticism has great intuitive appeal. But, I argue, these intuitions are untenable, because satisfying them in their full generality is incompatible with three very plausible principles: acyclicity, a minimal dominance principle, and the principle that any outcome can be made better or worse. This argument against anti-fanaticism can be…

Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)

Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.