On two arguments for Fanaticism

Jeffrey Sanford Russell (University of Southern California)

GPI Working Paper No. 17-2021, published in Noûs

Should we make significant sacrifices to ever-so-slightly lower the chance of extremely bad outcomes, or to ever-so-slightly raise the chance of extremely good outcomes? Fanaticism says yes: for every bad outcome, there is a tiny chance of extreme disaster that is even worse, and for every good outcome, there is a tiny chance of an enormous good that is even better. I consider two related recent arguments for Fanaticism: Beckstead and Thomas’s argument from strange dependence on space and time, and Wilkinson’s Indology argument. While both arguments are instructive, neither is persuasive. In fact, the general principles that underwrite the arguments (a separability principle in the first case, and a reflection principle in the second) are inconsistent with Fanaticism. In both cases, though, it is possible to rehabilitate arguments for Fanaticism based on restricted versions of those principles. The situation is unstable: plausible general principles tell against Fanaticism, but restrictions of those same principles (with strengthened auxiliary assumptions) support Fanaticism. All of the consistent views that emerge are very strange.

Other working papers

Economic growth under transformative AI – Philip Trammell (Global Priorities Institute, Oxford University) and Anton Korinek (University of Virginia)

Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital’s substitutability for labor…

Tough enough? Robust satisficing as a decision norm for long-term policy analysis – Andreas Mogensen and David Thorstad (Global Priorities Institute, Oxford University)

This paper aims to open a dialogue between philosophers working in decision theory and operations researchers and engineers whose research addresses the topic of decision making under deep uncertainty. Specifically, we assess the recommendation to follow a norm of robust satisficing when making decisions under deep uncertainty in the context of decision analyses that rely on the tools of Robust Decision Making developed by Robert Lempert and colleagues at RAND …

Altruism in governance: Insights from randomized training – Sultan Mehmood (New Economic School), Shaheen Naseer (Lahore School of Economics) and Daniel L. Chen (Toulouse School of Economics)

Randomizing different schools of thought in altruism training finds that training junior deputy ministers in the utility of empathy yields at least a 0.4 standard deviation increase in altruism. Treated ministers increased their perspective-taking: blood donations doubled, but only when blood banks requested their exact blood type. Perspective-taking in strategic dilemmas improved. Field measures such as orphanage visits and volunteering in impoverished schools also increased, as did their test scores in teamwork assessments…