In defence of fanaticism
Hayden Wilkinson (Australian National University)
GPI Working Paper No. 4-2020, published in Ethics
Which is better: a guarantee of a modest amount of moral value, or a tiny probability of arbitrarily large value? To prefer the latter seems fanatical. But, as I argue, avoiding such fanaticism brings severe problems. To do so, we must (1) decline intuitively attractive trade-offs; (2) rank structurally identical pairs of lotteries inconsistently, or else admit absurd sensitivity to tiny probability differences; (3) have rankings depend on remote, unaffected events (including events in ancient Egypt); and often (4) neglect to rank lotteries as we already know we would if we learned more. Compared to these implications, fanaticism is highly plausible.
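A minimal sketch of the trade-off at stake, under simple expected value reasoning (the values v, p, and V below are hypothetical placeholders, not figures from the paper):

```latex
% Illustrative sketch; v, p, and V are hypothetical, not from the paper.
% Option A: a guaranteed modest value v.
% Option B: value V with tiny probability p, and 0 otherwise.
\[
  \mathbb{E}[A] = v, \qquad \mathbb{E}[B] = p\,V .
\]
% For any fixed p > 0, however small, expected value maximization
% prefers B once V > v/p. For example, with v = 1 and p = 10^{-9}:
\[
  \mathbb{E}[B] = 10^{-9}\,V > 1 = \mathbb{E}[A] \quad \text{whenever } V > 10^{9},
\]
% so a sufficiently large prize always makes the "fanatical" choice
% the expected-value-maximizing one.
```

This is just the standard expected value calculation that generates the puzzle; the paper's arguments concern what goes wrong if one rejects its verdict.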
Other working papers
AI alignment vs AI ethical treatment: Ten challenges – Adam Bradley (Lingnan University) and Bradford Saad (Global Priorities Institute, University of Oxford)
A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity, and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that, if we create AI systems that merit moral consideration, simultaneously avoiding both dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching…
The paralysis argument – William MacAskill and Andreas Mogensen (Global Priorities Institute, University of Oxford)
Given plausible assumptions about the long-run impact of our everyday actions, we show that standard non-consequentialist constraints on doing harm entail that we should try to do as little as possible in our lives. We call this the Paralysis Argument. After laying out the argument, we consider and respond to…
Time Bias and Altruism – Leora Urim Sung (University College London)
We are typically near-future biased, being more concerned with our near future than with our distant future. This near-future bias can be directed at others too: we can be more concerned with their near future than with their distant future. In this paper, I argue that, because we discount the future in this way, beyond a certain point in time, we morally ought to be more concerned with the present well-being of others than with the well-being of our distant future selves. It follows that we morally ought to sacrifice…