Social Beneficence
Jacob Barrett (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 11-2022
A background assumption in much contemporary political philosophy is that justice is the first virtue of social institutions, taking priority over other values such as beneficence. This assumption is typically treated as a methodological starting point, rather than as following from any particular moral or political theory. In this paper, I challenge this assumption. I argue, first, that justice doesn’t in principle override beneficence, and second, that justice doesn’t typically outweigh beneficence, since, in institutional contexts, the stakes of beneficence are often extremely high. While there are various ways one might resist this argument, none challenges the core methodological point that political philosophy should abandon its preoccupation with justice and begin to pay considerably more attention to social beneficence—that is, to beneficence understood as a virtue of social institutions. Along the way, I also highlight areas where focusing on social beneficence would lead political philosophers in new and fruitful directions, and where normative ethicists focused on personal beneficence might scale up their thinking to the institutional case.
Other working papers
Desire-Fulfilment and Consciousness – Andreas Mogensen (Global Priorities Institute, University of Oxford)
I show that there are good reasons to think that some individuals without any capacity for consciousness should be counted as welfare subjects, assuming that desire-fulfilment is a welfare good and that any individual who can accrue welfare goods is a welfare subject. While other philosophers have argued for similar conclusions, I show that they have done so by relying on a simplistic understanding of the desire-fulfilment theory. My argument is intended to be sensitive to the complexities and nuances of contemporary…
Against the singularity hypothesis – David Thorstad (Global Priorities Institute, University of Oxford)
The singularity hypothesis is a radical hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on scientifically implausible growth assumptions. …
Dispelling the Anthropic Shadow – Teruji Thomas (Global Priorities Institute, University of Oxford)
There are some possible events that we could not possibly discover in our past. We could not discover an omnicidal catastrophe, an event so destructive that it permanently wiped out life on Earth. Had such a catastrophe occurred, we wouldn’t be here to find out. This space of unobservable histories has been called the anthropic shadow. Several authors claim that the anthropic shadow leads to an ‘observation selection bias’, analogous to survivorship bias, when we use the historical record to estimate catastrophic risks. …