Economic growth under transformative AI
Philip Trammell (Global Priorities Institute, Oxford University) and Anton Korinek (University of Virginia, NBER and CEPR)
GPI Working Paper No. 8-2020, published in the National Bureau of Economic Research Working Paper series and forthcoming in the Annual Review of Economics
Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital's substitutability for labor or task automation, capturing the notion that AI will let capital “self-replicate”. This typically speeds up growth and lowers the labor share. We then consider models in which AI increases knowledge production, capturing the notion that AI will let capital “self-improve”, speeding growth further. Taken as a whole, the literature suggests that sufficiently advanced AI is likely to deliver both effects.
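The first mechanism the abstract describes can be illustrated with a minimal CES production sketch (the parameter values α = 0.35 and σ = 2 below are illustrative assumptions, not taken from the paper): when capital is more substitutable for labor than in the Cobb-Douglas benchmark (σ > 1), capital accumulation drives the competitive labor share toward zero, matching the claim that higher substitutability lowers the labor share.

```python
# Illustrative sketch of a CES production function
#   Y = (a*K**rho + (1-a)*L**rho)**(1/rho),   sigma = 1/(1-rho)
# Parameter values are assumptions for illustration only.

def labor_share(K, L, a=0.35, sigma=2.0):
    """Competitive labor share w*L/Y under CES production (sigma != 1)."""
    rho = 1 - 1/sigma          # sigma = 2 gives rho = 0.5 (gross substitutes)
    Y = (a*K**rho + (1-a)*L**rho)**(1/rho)
    # Marginal product of labor times L over Y simplifies to (1-a)*(L/Y)**rho
    return (1 - a) * (L/Y)**rho

# With sigma > 1, accumulating capital (K rising, L fixed) pushes the
# labor share down; with sigma < 1 it would instead rise toward one.
for K in [1, 10, 100, 1000]:
    print(K, round(labor_share(K, 1.0), 3))
```

The same function with `sigma < 1` shows the opposite pattern, which is one way to see why the elasticity of substitution is the key parameter in this class of models.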
Other working papers
Crying wolf: Warning about societal risks can be reputationally risky – Lucius Caviola (Global Priorities Institute, University of Oxford) et al.
Society relies on expert warnings about large-scale risks like pandemics and natural disasters. Across ten studies (N = 5,342), we demonstrate people’s reluctance to warn about unlikely but large-scale risks because they are concerned about being blamed for being wrong. In particular, warners anticipate that if the risk doesn’t occur, they will be perceived as overly alarmist and responsible for wasting societal resources. This phenomenon appears in the context of natural, technological, and financial risks…
Longtermism in an Infinite World – Christian J. Tarsney (Population Wellbeing Initiative, University of Texas at Austin) and Hayden Wilkinson (Global Priorities Institute, University of Oxford)
The case for longtermism depends on the vast potential scale of the future. But that same vastness also threatens to undermine the case for longtermism: If the future contains infinite value, then many theories of value that support longtermism (e.g., risk-neutral total utilitarianism) seem to imply that no available action is better than any other. And some strategies for avoiding this conclusion (e.g., exponential time discounting) yield views that…
Choosing the future: Markets, ethics and rapprochement in social discounting – Antony Millner (University of California, Santa Barbara) and Geoffrey Heal (Columbia University)
This paper provides a critical review of the literature on choosing social discount rates (SDRs) for public cost-benefit analysis. We discuss two dominant approaches, the first based on market prices, and the second based on intertemporal ethics. While both methods have attractive features, neither is immune to criticism. …