A decision theory is fanatical if it says that, for any sure thing of getting some finite amount of value, it would always be better to almost certainly get nothing while having some tiny probability (no matter how small) of getting a sufficiently large finite amount of value. Fanaticism is extremely counterintuitive; common sense requires a more moderate view. However, a recent slew of arguments purports to vindicate it, claiming that moderate alternatives to fanaticism are sometimes similarly counterintuitive, face a powerful continuum argument, and violate widely accepted synchronic and diachronic consistency conditions. In this paper, I defend moderation. I show that certain arguments for fanaticism raise trouble for some versions of moderation, but not for more plausible moderate approaches. Other arguments raise more general difficulties for moderates, but fanatics face these problems too. There is therefore little reason to doubt our commonsensical commitment to moderation, and we can rest easy, not worrying too much about tiny probabilities of enormous value.
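For concreteness, the thesis can be put as follows (a minimal sketch; the paper's own statement may differ in its details). Writing $\succ$ for "is better than", Fanaticism says that for every sure payoff $v > 0$ and every probability $p \in (0, 1]$, there is some finite payoff $V$ such that

\[
  \bigl[\,V \text{ with probability } p;\ 0 \text{ otherwise}\,\bigr] \;\succ\; \bigl[\,v \text{ for certain}\,\bigr].
\]

Moderate views deny this for some sufficiently small $p$, however large the finite payoff $V$.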
Other working papers
Should longtermists recommend hastening extinction rather than delaying it? – Richard Pettigrew (University of Bristol)
Longtermism is the view that the most urgent global priorities, and those to which we should devote the largest portion of our current resources, are those that focus on ensuring a long future for humanity, and perhaps sentient or intelligent life more generally, and improving the quality of those lives in that long future. The central argument for this conclusion is that, given a fixed amount of resources that we are able to devote to global priorities, the longtermist’s favoured interventions have…
How much should governments pay to prevent catastrophes? Longtermism’s limited role – Carl Shulman (Advisor, Open Philanthropy) and Elliott Thornley (Global Priorities Institute, University of Oxford)
Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. …
Critical-set views, biographical identity, and the long term – Elliott Thornley (Global Priorities Institute, University of Oxford)
Critical-set views avoid the Repugnant Conclusion by subtracting some constant from the welfare score of each life in a population. These views are thus sensitive to facts about biographical identity: identity between lives. In this paper, I argue that questions of biographical identity give us reason to reject critical-set views and embrace the total view. I end with a practical implication. If we shift our credences towards the total view, we should also shift our efforts towards ensuring that humanity survives for the long term.
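On the formulation the abstract describes, the value of a population of lives with welfare levels $w_1, \ldots, w_n$ is, roughly,

\[
  \sum_{i=1}^{n} (w_i - c)
\]

for some constant $c > 0$, whereas the total view sets $c = 0$ (a sketch; the paper's own statement of critical-set views may differ in its details).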