How to neglect the long term
Hayden Wilkinson (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 11-2023
Consider longtermism: the view that, at least in some of the most important decisions facing agents today, which options are morally best is determined by which are best for the long-term future. Various critics have argued that longtermism is false—indeed, that it is obviously false, and that we can reject it on normative grounds without close consideration of certain descriptive facts. In effect, it is argued, longtermism would be false even if real-world agents had promising means of benefiting vast numbers of future people. In this paper, I develop a series of troubling impossibility results for those who wish to reject longtermism so robustly. It turns out that, to do so, we must incur severe theoretical costs. I suspect that these costs are greater than those of simply accepting longtermism. If so, the more promising route to denying longtermism would be by appeal to descriptive facts.
Other working papers
The case for strong longtermism – Hilary Greaves and William MacAskill (Global Priorities Institute, University of Oxford)
A striking fact about the history of civilisation is just how early we are in it. There are 5000 years of recorded history behind us, but how many years are still to come? If we merely last as long as the typical mammalian species…
Economic growth under transformative AI – Philip Trammell (Global Priorities Institute, University of Oxford) and Anton Korinek (University of Virginia)
Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital’s substitutability for labor…
Doomsday and objective chance – Teruji Thomas (Global Priorities Institute, University of Oxford)
Lewis’s Principal Principle says that one should usually align one’s credences with the known chances. In this paper I develop a version of the Principal Principle that deals well with some exceptional cases related to the distinction between metaphysical and epistemic modality. I explain how this principle gives a unified account of the Sleeping Beauty problem and chance-based principles of anthropic reasoning…