Once More, Without Feeling
Andreas Mogensen (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 2-2025
I argue for a pluralist theory of moral standing, on which both welfare subjectivity and autonomy can confer moral status. Autonomy, I contend, doesn’t entail welfare subjectivity, but can ground moral standing in its absence. Although I highlight plausible views on which autonomy entails phenomenal consciousness, I primarily emphasize the need for philosophical debates about the relationship between phenomenal consciousness and moral standing to engage with neglected questions about the nature of autonomy and its possible links to consciousness, especially if we’re to face up to the ethical challenges that future AI systems may pose.
Other working papers
Economic inequality and the long-term future – Andreas T. Schmidt (University of Groningen) and Daan Juijn (CE Delft)
Why, if at all, should we object to economic inequality? Some central arguments – the argument from decreasing marginal utility, for example – invoke instrumental reasons and object to inequality because of its effects…
AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)
Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …
A bargaining-theoretic approach to moral uncertainty – Owen Cotton-Barratt (Future of Humanity Institute, University of Oxford) and Hilary Greaves (Global Priorities Institute, University of Oxford)
This paper explores a new approach to the problem of decision under relevant moral uncertainty. We treat the case of an agent making decisions in the face of moral uncertainty on the model of bargaining theory, as if the decision-making process were one of bargaining among different internal parts of the agent, with different parts committed to different moral theories. The resulting approach contrasts interestingly with the extant “maximise expected choiceworthiness”…