The scope of longtermism

David Thorstad (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 6-2021

Longtermism holds, roughly, that in many decision situations the best thing we can do is what is best for the long-term future. The scope question for longtermism asks: how large is the class of decision situations for which longtermism holds? Although longtermism was initially developed to describe the situation of cause-neutral philanthropic decision-making, it is increasingly suggested that longtermism holds in many or most decision problems that humans face. By contrast, I suggest that the scope of longtermism may be more restricted than commonly supposed. After specifying my target, swamping axiological strong longtermism (swamping ASL), I give two arguments for the rarity thesis: that the options needed to vindicate swamping ASL in a given decision problem are rare. I use the rarity thesis to pose two challenges to the scope of longtermism: the area challenge, that swamping ASL often fails when we restrict our attention to specific cause areas, and the challenge from option unawareness, that swamping ASL may fail when decision problems are modified to incorporate agents’ limited awareness of the options available to them.
