Existential Risk and Growth
Philip Trammell (Global Priorities Institute and Department of Economics, University of Oxford) and Leopold Aschenbrenner
GPI Working Paper No. 13-2024
Technologies may pose existential risks to civilization. Though accelerating technological development may increase the risk of anthropogenic existential catastrophe per period in the short run, two considerations suggest that a sector-neutral acceleration decreases the risk that such a catastrophe ever occurs. First, acceleration decreases the time spent at each technology level. Second, since a richer society is willing to sacrifice more for safety, optimal policy can yield an “existential risk Kuznets curve”; acceleration then pulls forward a future in which risk is low. Acceleration typically increases risk only given sufficiently extreme policy failures or direct contributions of acceleration to risk.
An earlier version of this paper was published as GPI Working Paper No. 6-2020.
Other working papers
Quadratic Funding with Incomplete Information – Luis M. V. Freitas (Global Priorities Institute, University of Oxford) and Wilfredo L. Maldonado (University of Sao Paulo)
Quadratic funding is a public good provision mechanism that satisfies desirable theoretical properties, such as efficiency under complete information, and has been gaining popularity in practical applications. We evaluate this mechanism in a setting of incomplete information regarding individual preferences, and show that efficiency holds only under knife-edge conditions. We also estimate the inefficiency of the mechanism in a variety of settings and show, in particular, that inefficiency increases…
A non-identity dilemma for person-affecting views – Elliott Thornley (Global Priorities Institute, University of Oxford)
Person-affecting views in population ethics state that (in cases where all else is equal) we’re permitted but not required to create people who would enjoy good lives. In this paper, I present an argument against every possible variety of person-affecting view. The argument takes the form of a dilemma. Narrow person-affecting views must embrace at least one of three implausible verdicts in a case that I call ‘Expanded Non-Identity.’ Wide person-affecting views run into trouble in a case that I call ‘Two-Shot Non-Identity.’ …
AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)
Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …