Tough enough? Robust satisficing as a decision norm for long-term policy analysis
Andreas Mogensen and David Thorstad (Global Priorities Institute, Oxford University)
GPI Working Paper No. 15-2020, published in Synthese
This paper aims to open a dialogue between philosophers working in decision theory and operations researchers and engineers whose research addresses the topic of decision making under deep uncertainty. Specifically, we assess the recommendation to follow a norm of robust satisficing when making decisions under deep uncertainty in the context of decision analyses that rely on the tools of Robust Decision Making developed by Robert Lempert and colleagues at RAND. We discuss decision-theoretic and voting-theoretic motivations for robust satisficing, then use these motivations to select among candidate formulations of the robust satisficing norm. We also discuss two challenges for robust satisficing: whether the norm might in fact derive its plausibility from an implicit appeal to probabilistic representations of uncertainty of the kind that deep uncertainty is supposed to preclude; and whether there is adequate justification for adopting a satisficing norm, as opposed to an optimizing norm that is sensitive to considerations of robustness.
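To give an informal sense of the contrast the paper examines, the sketch below illustrates a simple robust satisficing rule against a non-robust optimising rule. It is not taken from the paper: the policies, scenarios, payoff numbers, and threshold are all hypothetical, and the functions are only one stylised way of operationalising "meet a good-enough threshold across as many plausible scenarios as possible" when no probabilities over scenarios are assumed.

```python
# Illustrative sketch (not from the paper): a robust satisficing rule versus a
# best-case optimising rule under deep uncertainty. All values are hypothetical.

payoffs = {
    # policy -> payoff in each plausible future scenario (no probabilities assumed)
    "policy_A": {"scenario_1": 9.0, "scenario_2": 1.0, "scenario_3": 8.0},
    "policy_B": {"scenario_1": 6.0, "scenario_2": 5.0, "scenario_3": 6.0},
    "policy_C": {"scenario_1": 7.0, "scenario_2": 4.0, "scenario_3": 3.0},
}

THRESHOLD = 4.0  # a "good enough" level of performance, fixed in advance


def robust_satisficing_choice(payoffs, threshold):
    """Pick the policy that meets the threshold in the most scenarios."""
    def robustness(policy):
        return sum(1 for value in payoffs[policy].values() if value >= threshold)
    return max(payoffs, key=robustness)


def best_case_optimising_choice(payoffs):
    """For contrast: pick the policy with the highest payoff in any single scenario."""
    return max(payoffs, key=lambda policy: max(payoffs[policy].values()))


if __name__ == "__main__":
    # policy_B clears the threshold in all three scenarios, so the satisficer picks it.
    print(robust_satisficing_choice(payoffs, THRESHOLD))
    # policy_A has the single best outcome but falls below the threshold in scenario_2.
    print(best_case_optimising_choice(payoffs))
```

The threshold here is doing real work: the paper's second challenge can be read as asking why one should fix such a threshold at all, rather than optimising some robustness-sensitive objective directly.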
Other working papers
Strong longtermism and the challenge from anti-aggregative moral views – Karri Heikkinen (University College London)
Greaves and MacAskill (2019) argue for strong longtermism, according to which, in a wide class of decision situations, the option that is ex ante best, and the one we ex ante ought to choose, is the option that makes the very long-run future go best. One important aspect of their argument is the claim that strong longtermism is compatible with a wide range of ethical assumptions, including plausible non-consequentialist views. In this essay, I challenge this claim…
The asymmetry, uncertainty, and the long term – Teruji Thomas (Global Priorities Institute, Oxford University)
The Asymmetry is the view in population ethics that, while we ought to avoid creating additional bad lives, there is no requirement to create additional good ones. The question is how to embed this view in a complete normative theory, and in particular one that treats uncertainty in a plausible way. After reviewing…
Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)
A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…