AI takeover and human disempowerment

Adam Bales (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 9-2024, forthcoming in The Philosophical Quarterly

Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? And what empirical claims must hold for the former to lead to the latter? In this paper, I address these questions, providing foundations for further evaluation of the likelihood of takeover.
