How much should governments pay to prevent catastrophes? Longtermism's limited role

Carl Shulman (Advisor, Open Philanthropy) and Elliott Thornley (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 8-2024, forthcoming in Essays on Longtermism

Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. Standard cost-benefit analysis implies that governments should spend much more on reducing catastrophic risk. We argue that a government catastrophe policy guided by cost-benefit analysis should be the goal of longtermists in the political sphere. This policy would be democratically acceptable, and it would reduce existential risk by almost as much as a strong longtermist policy.

Other working papers

Evolutionary debunking and value alignment – Michael T. Dale (Hampden-Sydney College) and Bradford Saad (Global Priorities Institute, University of Oxford)

This paper examines the bearing of evolutionary debunking arguments—which use the evolutionary origins of values to challenge their epistemic credentials—on the alignment problem, i.e. the problem of ensuring that highly capable AI systems are properly aligned with values. Since evolutionary debunking arguments are among the best empirically-motivated arguments that recommend changes in values, it is unsurprising that they are relevant to the alignment problem. However, how evolutionary debunking arguments…

Training effective altruism – Sultan Mehmood (New Economic School), Shaheen Naseer (Lahore School of Economics) and Daniel L. Chen (Toulouse School of Economics)

Our randomized controlled trial of Pakistan’s deputy ministers compares two schools of thought about how to cultivate prosociality. We find that training the utilitarian value of empathy results in a 0.4-0.6 standard deviation increase in altruism, cooperation, coordination, and teamwork. Field outcomes—orphanage visits, volunteering in impoverished schools, and blood donations—also roughly double. We find that treated ministers increased their mentalizing of others…

How important is the end of humanity? Lay people prioritize extinction prevention but not above all other societal issues – Matthew Coleman (Northeastern University), Lucius Caviola (Global Priorities Institute, University of Oxford) et al.

Human extinction would mean the deaths of eight billion people and the end of humanity’s achievements, culture, and future potential. On several ethical views, extinction would be a terrible outcome. How do people think about human extinction? And how much do they prioritize preventing extinction over other societal issues? Across six empirical studies (N = 2,541; U.S. and China) we find that people consider extinction prevention a global priority and deserving of greatly increased societal resources. …