In Defence of Moderation

Jacob Barrett (Vanderbilt University)

GPI Working Paper No. 32-2024

A decision theory is fanatical if it says that, for any sure thing of getting some finite amount of value, it would always be better to almost certainly get nothing while having some tiny probability (no matter how small) of getting a sufficiently greater finite amount of value. Fanaticism is extremely counterintuitive; common sense requires a more moderate view. However, a recent slew of arguments purports to vindicate fanaticism, claiming that moderate alternatives are sometimes similarly counterintuitive, face a powerful continuum argument, and violate widely accepted synchronic and diachronic consistency conditions. In this paper, I defend moderation. I show that certain arguments for fanaticism raise trouble for some versions of moderation, but not for more plausible moderate approaches. Other arguments raise more general difficulties for moderates, but fanatics face these problems too. There is therefore little reason to doubt our commonsensical commitment to moderation, and we can rest easy not worrying too much about tiny probabilities of enormous value.
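The definition in the abstract admits a rough formalization (a sketch in my own notation, not the paper's): writing $\succ$ for "is better than" over prospects, fanaticism holds that for every guaranteed finite value $v > 0$ and every probability $\varepsilon \in (0,1)$, however small, there is some finite value $V$ such that

\[
\big[\, V \text{ with probability } \varepsilon,\ 0 \text{ otherwise} \,\big] \;\succ\; \big[\, v \text{ for sure} \,\big].
\]

Moderate views, as the abstract uses the term, are simply those that reject this claim for at least some $v$ and $\varepsilon$.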
