Social Beneficence

Jacob Barrett (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 11-2022

A background assumption in much contemporary political philosophy is that justice is the first virtue of social institutions, taking priority over other values such as beneficence. This assumption is typically treated as a methodological starting point, rather than as following from any particular moral or political theory. In this paper, I challenge this assumption. I argue, first, that justice doesn't in principle override beneficence, and, second, that justice doesn't typically outweigh beneficence, since, in institutional contexts, the stakes of beneficence are often extremely high. While there are various ways one might resist this argument, none challenges the core methodological point that political philosophy should abandon its preoccupation with justice and begin to pay considerably more attention to social beneficence—that is, to beneficence understood as a virtue of social institutions. Along the way, I also highlight areas where focusing on social beneficence would lead political philosophers in new and fruitful directions, and where normative ethicists focused on personal beneficence might scale up their thinking to the institutional case.

Other working papers

It Only Takes One: The Psychology of Unilateral Decisions – Joshua Lewis (New York University) et al.

Sometimes, one decision can guarantee that a risky event will happen. For instance, it only took one team of researchers to synthesize and publish the horsepox genome, thus imposing its publication even though other researchers might have refrained for biosecurity reasons. We examine cases where everybody who can impose a given event has the same goal but different information about whether the event furthers that goal. …

Existential Risk and Growth – Philip Trammell (Global Priorities Institute and Department of Economics, University of Oxford) and Leopold Aschenbrenner

Technologies may pose existential risks to civilization. Though accelerating technological development may increase the per-period risk of anthropogenic existential catastrophe in the short run, two considerations suggest that a sector-neutral acceleration decreases the risk that such a catastrophe ever occurs. First, acceleration decreases the time spent at each technology level. Second, since a richer society is willing to sacrifice more for safety, optimal policy can yield an “existential risk Kuznets curve”; acceleration…

Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)

Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.