Longtermism, aggregation, and catastrophic risk

Emma J. Curran (University of Cambridge)

GPI Working Paper No. 18-2022

Advocates of longtermism point out that interventions which focus on improving the prospects of people in the very far future will, in expectation, bring about a significant amount of good. Indeed, in expectation, such long-term interventions bring about far more good than their short-term counterparts. As such, longtermists claim we have compelling moral reason to prefer long-term interventions. In this paper, I show that longtermism is in conflict with plausible deontic scepticism about aggregation. I do so by demonstrating that, from both an ex-ante and ex-post perspective, longtermist interventions – and, in particular, those which aim to mitigate catastrophic risk – typically generate extremely weak claims of assistance from future people.

Other working papers

Concepts of existential catastrophe – Hilary Greaves (University of Oxford)

The notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential…

The end of economic growth? Unintended consequences of a declining population – Charles I. Jones (Stanford University)

In many models, economic growth is driven by people discovering new ideas. These models typically assume either a constant or growing population. However, in high income countries today, fertility is already below its replacement rate: women are having fewer than two children on average. It is a distinct possibility — highlighted in the recent book, Empty Planet — that global population will decline rather than stabilize in the long run. …

Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)

A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…