Economic growth under transformative AI

Philip Trammell (Global Priorities Institute, Oxford University) and Anton Korinek (University of Virginia, NBER and CEPR)

GPI Working Paper No. 8-2020, published in the National Bureau of Economic Research Working Paper series and forthcoming in the Annual Review of Economics

Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital's substitutability for labor or task automation, capturing the notion that AI will let capital “self-replicate”. This typically speeds up growth and lowers the labor share. We then consider models in which AI increases knowledge production, capturing the notion that AI will let capital “self-improve”, speeding growth further. Taken as a whole, the literature suggests that sufficiently advanced AI is likely to deliver both effects.
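The link between capital–labor substitutability and the labor share can be made concrete with a standard CES production function — a textbook illustration of the mechanism, not necessarily the paper's exact specification:

\[
Y = \bigl( \alpha K^{\rho} + (1-\alpha) L^{\rho} \bigr)^{1/\rho},
\qquad
\sigma = \frac{1}{1-\rho},
\]

where \(\sigma\) is the elasticity of substitution between capital \(K\) and labor \(L\). Under competitive factor pricing, the labor share is

\[
\frac{wL}{Y} = \frac{(1-\alpha) L^{\rho}}{\alpha K^{\rho} + (1-\alpha) L^{\rho}}.
\]

When \(\sigma > 1\) (i.e. \(\rho > 0\)) and capital accumulates faster than labor grows, this share falls toward zero; in the limit \(\sigma \to \infty\), capital becomes a perfect substitute for labor, capturing the "self-replicating capital" scenario in which growth accelerates and labor's share collapses.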
