Aggregating Small Risks of Serious Harms
Tomi Francis (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 21-2024
According to Partial Aggregation, a serious harm can be outweighed by a large number of somewhat less serious harms, but can outweigh any number of trivial harms. In this paper, I address the question of how we should extend Partial Aggregation to cases of risk, and especially to cases involving small risks of serious harms. I argue that, contrary to the most popular versions of the ex ante and ex post views, we should sometimes prevent a small risk that a large number of people will suffer serious harms rather than prevent a small number of people from certainly suffering the same harms. Along the way, I object to the ex ante view on the grounds that it gives an implausible degree of priority to preventing identified over statistical harms, and to the ex post view on the grounds that it fails to respect the separateness of persons. An insight about the nature of claims emerges from these arguments: there are three conceptually distinct senses in which a person’s claim can be said to have a certain degree of strength. I make use of this three-way distinction to set out a new, more plausible view about the aggregation of people’s claims under risk.
Other working papers
Doomsday rings twice – Andreas Mogensen (Global Priorities Institute, University of Oxford)
This paper considers the argument according to which, because we should regard it as a priori very unlikely that we are among the most important people who will ever exist, we should increase our confidence that the human species will not persist beyond the current historical era, which seems to represent…
What power-seeking theorems do not show – David Thorstad (Vanderbilt University)
Recent years have seen increasing concern that artificial intelligence may soon pose an existential risk to humanity. One leading ground for concern is that artificial agents may be power-seeking, aiming to acquire power and in the process disempowering humanity. A range of power-seeking theorems seek to give formal articulation to the idea that artificial agents are likely to be power-seeking. I argue that leading theorems face five challenges, then draw lessons from this result.
Evolutionary debunking and value alignment – Michael T. Dale (Hampden-Sydney College) and Bradford Saad (Global Priorities Institute, University of Oxford)
This paper examines the bearing of evolutionary debunking arguments—which use the evolutionary origins of values to challenge their epistemic credentials—on the alignment problem, i.e. the problem of ensuring that highly capable AI systems are properly aligned with values. Since evolutionary debunking arguments are among the best empirically-motivated arguments that recommend changes in values, it is unsurprising that they are relevant to the alignment problem. However, how evolutionary debunking arguments…