Existential Risk and Growth

Philip Trammell (Global Priorities Institute and Department of Economics, University of Oxford) and Leopold Aschenbrenner

GPI Working Paper No. 13-2024

Technologies may pose existential risks to civilization. Though accelerating technological development may increase the risk of anthropogenic existential catastrophe per period in the short run, two considerations suggest that a sector-neutral acceleration decreases the risk that such a catastrophe ever occurs. First, acceleration decreases the time spent at each technology level. Second, since a richer society is willing to sacrifice more for safety, optimal policy can yield an “existential risk Kuznets curve”; acceleration then pulls forward a future in which risk is low. Acceleration typically increases risk only given sufficiently extreme policy failures or direct contributions of acceleration to risk.
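As a minimal illustrative sketch of the first consideration (not the paper's full model): suppose the per-period hazard of anthropogenic catastrophe depends only on the technology level, written here as \( \delta(A_t) \), with exponential growth \( A_t = A_0 e^{gt} \) (the notation \( \delta \), \( A_t \), and \( g \) is illustrative, not taken from the paper). The probability that a catastrophe ever occurs is then
\[
1 - \exp\!\left(-\int_0^\infty \delta(A_t)\,dt\right) \;=\; 1 - \exp\!\left(-\frac{1}{g}\int_{A_0}^\infty \frac{\delta(A)}{A}\,dA\right),
\]
using the change of variables \( A = A_0 e^{gt} \). If the integral over technology levels converges, a higher growth rate \( g \) lowers the total risk simply because less time is spent at each technology level; this abstracts from the safety-spending response that generates the Kuznets-curve effect described above.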

An earlier version of the paper was published as GPI Working Paper No. 6-2020.

Other working papers

On two arguments for Fanaticism – Jeffrey Sanford Russell (University of Southern California)

Should we make significant sacrifices to ever-so-slightly lower the chance of extremely bad outcomes, or to ever-so-slightly raise the chance of extremely good outcomes? Fanaticism says yes: for every bad outcome, there is a tiny chance of extreme disaster that is even worse, and for every good outcome, there is a tiny chance of an enormous good that is even better.

Moral uncertainty and public justification – Jacob Barrett (Global Priorities Institute, University of Oxford) and Andreas T Schmidt (University of Groningen)

Moral uncertainty and disagreement pervade our lives. Yet we still need to make decisions and act, both in individual and political contexts. So, what should we do? The moral uncertainty approach provides a theory of what individuals morally ought to do when they are uncertain about morality…

Future Suffering and the Non-Identity Problem – Theron Pummer (University of St Andrews)

I present and explore a new version of the Person-Affecting View, according to which reasons to do an act depend wholly on what would be said for or against this act from the points of view of particular individuals. According to my view, (i) there is a morally requiring reason not to bring about lives insofar as they contain suffering (negative welfare), (ii) there is no morally requiring reason to bring about lives insofar as they contain happiness (positive welfare), but (iii) there is a permitting reason to bring about lives insofar as they…