What power-seeking theorems do not show

David Thorstad (Vanderbilt University)

GPI Working Paper No. 27-2024

Recent years have seen increasing concern that artificial intelligence may soon pose an existential risk to humanity. One leading ground for concern is that artificial agents may be power-seeking: aiming to acquire power and, in the process, disempowering humanity. A range of power-seeking theorems seeks to give formal articulation to the idea that artificial agents are likely to be power-seeking. I argue that leading theorems face five challenges, then draw lessons from this result.

Other working papers

A non-identity dilemma for person-affecting views – Elliott Thornley (Global Priorities Institute, University of Oxford)

Person-affecting views in population ethics state that (in cases where all else is equal) we’re permitted but not required to create people who would enjoy good lives. In this paper, I present an argument against every possible variety of person-affecting view. The argument takes the form of a dilemma. Narrow person-affecting views must embrace at least one of three implausible verdicts in a case that I call ‘Expanded Non-Identity.’ Wide person-affecting views run into trouble in a case that I call ‘Two-Shot Non-Identity.’ …

The Conservation Multiplier – Bård Harstad (University of Oslo)

Every government that controls an exhaustible resource must decide whether to exploit it or to conserve and thereby let the subsequent government decide whether to exploit or conserve. This paper develops a positive theory of this situation and shows when a small change in parameter values has a multiplier effect on exploitation. The multiplier strengthens the influence of a lobby paying for exploitation, and of a donor compensating for conservation. …

Longtermism in an Infinite World – Christian J. Tarsney (Population Wellbeing Initiative, University of Texas at Austin) and Hayden Wilkinson (Global Priorities Institute, University of Oxford)

The case for longtermism depends on the vast potential scale of the future. But that same vastness also threatens to undermine the case for longtermism: If the future contains infinite value, then many theories of value that support longtermism (e.g., risk-neutral total utilitarianism) seem to imply that no available action is better than any other. And some strategies for avoiding this conclusion (e.g., exponential time discounting) yield views that…