Crying wolf: Warning about societal risks can be reputationally risky

Lucius Caviola (Global Priorities Institute, University of Oxford), Matthew Coleman (Northeastern University), Christoph Winter (ITAM & Harvard University), and Joshua Lewis (New York University)

GPI Working Paper No. 15-2024

Society relies on expert warnings about large-scale risks like pandemics and natural disasters. Across ten studies (N = 5,342), we demonstrate that people are reluctant to warn about unlikely but large-scale risks because they are concerned about being blamed for being wrong. In particular, warners anticipate that if the risk does not occur, they will be perceived as overly alarmist and held responsible for wasting societal resources. This phenomenon appears in the context of natural, technological, and financial risks, and among US and Chinese samples, local policymakers, AI researchers, and legal experts. The reluctance to warn is aggravated when the warner will be held epistemically responsible, for example when they are the only one warning or when the risk is speculative and lacks objective evidence. One remedy is to offer anonymous expert warning systems. Our studies underscore the need for societal risk-management policies to account for psychological biases and social incentives.

Other working papers

Maximal cluelessness – Andreas Mogensen (Global Priorities Institute, University of Oxford)

I argue that many of the priority rankings that have been proposed by effective altruists seem to be in tension with apparently reasonable assumptions about the rational pursuit of our aims in the face of uncertainty. The particular issue on which I focus arises from recognition of the overwhelming importance…

How much should governments pay to prevent catastrophes? Longtermism’s limited role – Carl Shulman (Advisor, Open Philanthropy) and Elliott Thornley (Global Priorities Institute, University of Oxford)

Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. …

Time discounting, consistency and special obligations: a defence of Robust Temporalism – Harry R. Lloyd (Yale University)

This paper defends the claim that mere temporal proximity always and without exception strengthens certain moral duties, including the duty to save – call this view Robust Temporalism. Although almost all other moral philosophers dismiss Robust Temporalism out of hand, I argue that it is prima facie intuitively plausible, and that it is analogous to a view about special obligations that many philosophers already accept…