Moral uncertainty and public justification

Jacob Barrett (Global Priorities Institute, University of Oxford) and Andreas T. Schmidt (University of Groningen)

GPI Working Paper No. 15-2021, forthcoming at Philosophers' Imprint

Moral uncertainty and disagreement pervade our lives. Yet we still need to make decisions and act, in both individual and political contexts. So, what should we do? The moral uncertainty approach provides a theory of what individuals morally ought to do when they are uncertain about morality. Public reason liberals, in contrast, provide a theory of how societies should deal with reasonable disagreements about morality. They defend the public justification principle: state action is permissible only if it can be justified to all reasonable people. In this article, we bring these two approaches together. Specifically, we investigate whether the moral uncertainty approach supports public reason liberalism: given our own moral uncertainty, should we favor public justification? We argue that while the moral uncertainty approach cannot vindicate an exceptionless public justification principle, it gives us reason to adopt public justification as a pro tanto institutional commitment. Furthermore, it provides new answers to some intramural debates among public reason liberals and new responses to some common objections.
