How to neglect the long term

Hayden Wilkinson (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 11-2023

Consider longtermism: the view that, at least in some of the most important decisions facing agents today, which options are morally best is determined by which are best for the long-term future. Various critics have argued that longtermism is false—indeed, that it is obviously false, and that we can reject it on normative grounds without close consideration of certain descriptive facts. In effect, it is argued, longtermism would be false even if real-world agents had promising means of benefiting vast numbers of future people. In this paper, I develop a series of troubling impossibility results for those who wish to reject longtermism so robustly. It turns out that, to do so, we must incur severe theoretical costs. I suspect that these costs are greater than those of simply accepting longtermism. If so, the more promising route to denying longtermism would be by appeal to descriptive facts.

Other working papers

Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)

A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…

The freedom of future people – Andreas T Schmidt (University of Groningen)

What happens to liberal political philosophy, if we consider not only the freedom of present people but also that of future people? In this article, I explore the case for long-term liberalism: freedom should be a central goal, and we should often be particularly concerned with effects on long-term future distributions of freedom. I provide three arguments. First, liberals should be long-term liberals: liberal arguments to value freedom give us reason to be (particularly) concerned with future freedom…

In defence of fanaticism – Hayden Wilkinson (Australian National University)

Consider a decision between: 1) a certainty of a moderately good outcome, such as one additional life saved; 2) a lottery which probably gives a worse outcome, but has a tiny probability of a far better outcome (perhaps trillions of blissful lives created). Which is morally better? Expected value theory (with a plausible axiology) judges (2) as better, no matter how tiny its probability of success. But this seems fanatical. So we may be tempted to abandon expected value theory…