In defence of fanaticism

Hayden Wilkinson (Australian National University)

GPI Working Paper No. 4-2020, published in Ethics

Which is better: a guarantee of a modest amount of moral value, or a tiny probability of arbitrarily large value? To prefer the latter seems fanatical. But, as I argue, avoiding such fanaticism brings severe problems. To do so, we must (1) decline intuitively attractive trade-offs; (2) rank structurally identical pairs of lotteries inconsistently, or else admit absurd sensitivity to tiny probability differences; (3) have rankings depend on remote, unaffected events (including events in ancient Egypt); and often (4) neglect to rank lotteries as we already know we would if we learned more. Compared to these implications, fanaticism is highly plausible.
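To see why expected value reasoning delivers the fanatical verdict, here is a minimal sketch of the arithmetic (the particular numbers are illustrative and not drawn from the paper). Let the sure option have value $v$, and let the gamble pay value $V$ with probability $p$ and nothing otherwise. Then

$$
\mathbb{E}[\text{gamble}] = p \cdot V > v = \mathbb{E}[\text{sure thing}] \quad \text{whenever } V > v/p .
$$

So for any fixed $p > 0$, however tiny, some sufficiently large $V$ makes the gamble better in expectation: with $v = 10$ and $p = 10^{-9}$, for instance, any $V > 10^{10}$ suffices. The paper argues that accepting this verdict is less costly than the alternatives listed above.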

Other working papers

Moral uncertainty and public justification – Jacob Barrett (Global Priorities Institute, University of Oxford) and Andreas T Schmidt (University of Groningen)

Moral uncertainty and disagreement pervade our lives. Yet we still need to make decisions and act, in both individual and political contexts. So, what should we do? The moral uncertainty approach provides a theory of what individuals morally ought to do when they are uncertain about morality…

Minimal and Expansive Longtermism – Hilary Greaves (University of Oxford) and Christian Tarsney (Population Wellbeing Initiative, University of Texas at Austin)

The standard case for longtermism focuses on a small set of risks to the far future, and argues that in a small set of choice situations, the present marginal value of mitigating those risks is very great. But many longtermists are attracted to, and many critics of longtermism worried by, a farther-reaching form of longtermism. According to this farther-reaching form, there are many ways of improving the far future, which determine the value of our options in all or nearly all choice situations…

AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)

Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …