On two arguments for Fanaticism
Jeffrey Sanford Russell (University of Southern California)
GPI Working Paper No. 17-2021, published in Noûs
Should we make significant sacrifices to ever-so-slightly lower the chance of extremely bad outcomes, or to ever-so-slightly raise the chance of extremely good outcomes? Fanaticism says yes: for every bad outcome, there is a tiny chance of extreme disaster that is even worse, and for every good outcome, there is a tiny chance of an enormous good that is even better. I consider two related recent arguments for Fanaticism: Beckstead and Thomas’s argument from strange dependence on space and time, and Wilkinson’s Indology argument. While both arguments are instructive, neither is persuasive. In fact, the general principles that underwrite the arguments (a separability principle in the first case, and a reflection principle in the second) are inconsistent with Fanaticism. In both cases, though, it is possible to rehabilitate arguments for Fanaticism based on restricted versions of those principles. The situation is unstable: plausible general principles tell against Fanaticism, but restrictions of those same principles (with strengthened auxiliary assumptions) support Fanaticism. All of the consistent views that emerge are very strange.
Other working papers
Can an evidentialist be risk-averse? – Hayden Wilkinson (Global Priorities Institute, University of Oxford)
Two key questions of normative decision theory are: 1) whether the probabilities relevant to decision theory are evidential or causal; and 2) whether agents should be risk-neutral, and so maximise the expected value of the outcome, or instead risk-averse (or otherwise sensitive to risk). These questions are typically thought to be independent – that our answer to one bears little on our answer to the other. …
In defence of fanaticism – Hayden Wilkinson (Australian National University)
Consider a decision between: 1) a certainty of a moderately good outcome, such as one additional life saved; 2) a lottery which probably gives a worse outcome, but has a tiny probability of a far better outcome (perhaps trillions of blissful lives created). Which is morally better? Expected value theory (with a plausible axiology) judges (2) as better, no matter how tiny its probability of success. But this seems fanatical. So we may be tempted to abandon expected value theory…
When should an effective altruist donate? – William MacAskill (Global Priorities Institute, University of Oxford)
Effective altruism is the use of evidence and careful reasoning to work out how to maximize positive impact on others with a given unit of resources, and the taking of action on that basis. It’s a philosophy and a social movement that is gaining considerable steam in the philanthropic world. For example,…