The unexpected value of the future
Hayden Wilkinson (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 17-2022, forthcoming in Ergo
Various philosophers accept moral views that are impartial, additive, and risk-neutral with respect to betterness. But, if that risk neutrality is spelt out according to expected value theory alone, such views face a dire reductio ad absurdum. If the expected sum of value in humanity’s future is undefined—if, e.g., the probability distribution over possible values of the future resembles the Pasadena game, or a Cauchy distribution—then those views say that no real-world option is ever better than any other. And, as I argue, our evidence plausibly supports such a probability distribution. Indeed, it supports a probability distribution that cannot be evaluated even by several extensions of expected value theory that have been proposed in the literature. Must we therefore reject all impartial, additive, risk-neutral moral theories? It turns out that we need not. I provide a potential solution: by adopting a strong enough extension of expected value theory, we can evaluate that problematic distribution and potentially salvage those moral views.
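To make the failure of expected value theory concrete, here is a minimal numerical sketch (an illustration, not from the paper; the Cauchy distribution and the standard Pasadena payoff schedule from Nover and Hájek are used as stated in the literature). It shows why neither distribution has a defined expectation: sample means of Cauchy draws never settle down, and the Pasadena game’s expectation series is only conditionally convergent, so its “sum” depends on the order of summation.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Cauchy distribution: the sample mean of n i.i.d. Cauchy draws is itself
# Cauchy-distributed, so averaging never converges, however large n gets.
for n in (10**3, 10**5, 10**7):
    print(f"mean of {n:>8} Cauchy draws: {rng.standard_cauchy(n).mean():+.3f}")

# Pasadena game: with probability 2**-n the payoff is (-1)**(n-1) * 2**n / n,
# so the expectation series is the sum over n of (-1)**(n-1) / n.
# In the natural order its partial sums approach ln 2 ...
natural = sum((-1) ** (n - 1) / n for n in range(1, 1_000_001))
print(natural, "vs ln 2 =", math.log(2))

# ... but the series converges only conditionally: rearranged as
# (two positive terms, then one negative term, repeating), the very same
# terms approach (3/2) * ln 2 instead.
total, p, q = 0.0, 1, 2
for _ in range(500_000):
    total += 1 / p + 1 / (p + 2) - 1 / q  # e.g. 1 + 1/3 - 1/2, 1/5 + 1/7 - 1/4, ...
    p, q = p + 4, q + 2
print(total, "vs 1.5 * ln 2 =", 1.5 * math.log(2))
```

Since no order of summation is privileged, the Pasadena game has no expected value at all, and the Cauchy case fails for the same underlying reason (the expected absolute value is infinite, so the law of large numbers does not apply). This is the feature of the distribution over humanity’s future that the abstract’s reductio exploits.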
Other working papers
Population ethical intuitions – Lucius Caviola (Harvard University) et al.
Is humanity’s existence worthwhile? If so, where should the human species be headed in the future? In part, the answers to these questions require us to morally evaluate the (potential) human population in terms of its size and aggregate welfare. This assessment lies at the heart of population ethics. Our investigation across nine experiments (N = 5776) aimed to answer three questions about how people aggregate welfare across individuals: (1) Do they weigh happiness and suffering symmetrically…
AI alignment vs AI ethical treatment: Ten challenges – Adam Bradley (Lingnan University) and Bradford Saad (Global Priorities Institute, University of Oxford)
A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that, if we create AI systems that merit moral consideration, simultaneously avoiding both dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching…
Cassandra’s Curse: A second tragedy of the commons – Philippe Colo (ETH Zurich)
This paper studies why scientific forecasts regarding exceptional or rare events generally fail to trigger an adequate public response. I consider a game of contribution to a public bad. Prior to the game, I assume contributors receive non-verifiable expert advice regarding uncertain damages. In addition, I assume that the expert cares only about social welfare. Under mild assumptions, I show that no information transmission can happen at equilibrium when the number of contributors…