Once More, Without Feeling
Andreas Mogensen (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 2-2025
I argue for a pluralist theory of moral standing, on which both welfare subjectivity and autonomy can confer moral status. I argue that autonomy doesn’t entail welfare subjectivity, but can ground moral standing in its absence. Although I highlight the existence of plausible views on which autonomy entails phenomenal consciousness, I primarily emphasize the need for philosophical debates about the relationship between phenomenal consciousness and moral standing to engage with neglected questions about the nature of autonomy and its possible links to consciousness, especially if we’re to face up to the ethical challenges future AI systems may pose.
Other working papers
Dynamic public good provision under time preference heterogeneity – Philip Trammell (Global Priorities Institute and Department of Economics, University of Oxford)
I explore the implications of time preference heterogeneity for the private funding of public goods. The assumption that players use a common discount rate is knife-edge: relaxing it yields substantially different equilibria, for two reasons. First, time preference heterogeneity motivates intertemporal polarization, analogous to the polarization seen in a static public good game. In the simplest settings, more patient players spend nothing early in time and less patient players spend nothing later. Second…
Moral uncertainty and public justification – Jacob Barrett (Global Priorities Institute, University of Oxford) and Andreas T Schmidt (University of Groningen)
Moral uncertainty and disagreement pervade our lives. Yet we still need to make decisions and act, both in individual and political contexts. So, what should we do? The moral uncertainty approach provides a theory of what individuals morally ought to do when they are uncertain about morality…
Towards shutdownable agents via stochastic choice – Elliott Thornley (Global Priorities Institute, University of Oxford), Alexander Roman (New College of Florida), Christos Ziakas (Independent), Leyton Ho (Brown University), and Louis Thomson (University of Oxford)
Some worry that advanced artificial agents may resist being shut down. The Incomplete Preferences Proposal (IPP) is an idea for ensuring that does not happen. A key part of the IPP is using a novel ‘Discounted Reward for Same-Length Trajectories (DReST)’ reward function to train agents to (1) pursue goals effectively conditional on each trajectory-length (be ‘USEFUL’), and (2) choose stochastically between different trajectory-lengths (be ‘NEUTRAL’ about trajectory-lengths). In this paper, we propose…