Social Beneficence
Jacob Barrett (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 11-2022
A background assumption in much contemporary political philosophy is that justice is the first virtue of social institutions, taking priority over other values such as beneficence. This assumption is typically treated as a methodological starting point, rather than as following from any particular moral or political theory. In this paper, I challenge this assumption. To frame my discussion, I argue, first, that justice doesn’t in principle override beneficence, and second, that justice doesn’t typically outweigh beneficence, since, in institutional contexts, the stakes of beneficence are often extremely high. While there are various ways one might resist this argument, none challenges the core methodological point that political philosophy should abandon its preoccupation with justice and pay considerably more attention to social beneficence—that is, to beneficence understood as a virtue of social institutions. Along the way, I also highlight areas where focusing on social beneficence would lead political philosophers in new and fruitful directions, and where normative ethicists focused on personal beneficence might scale up their thinking to the institutional case.