Crying wolf: Warning about societal risks can be reputationally risky
Lucius Caviola (Global Priorities Institute, University of Oxford), Matthew Coleman (Northeastern University), Christoph Winter (ITAM & Harvard) and Joshua Lewis (New York University)
GPI Working Paper No. 15-2024
Society relies on expert warnings about large-scale risks like pandemics and natural disasters. Across ten studies (N = 5,342), we demonstrate people’s reluctance to warn about unlikely but large-scale risks because they are concerned about being blamed for being wrong. In particular, warners anticipate that if the risk doesn’t occur, they will be perceived as overly alarmist and responsible for wasting societal resources. This phenomenon appears across natural, technological, and financial risks, and among US and Chinese samples, local policymakers, AI researchers, and legal experts. The reluctance to warn is aggravated when the warner will be held epistemically responsible, such as when they are the only warner and when the risk is speculative, lacking objective evidence. One remedy is to offer anonymous expert warning systems. Our studies emphasize the need for societal risk management policies to account for psychological biases and social incentives.
Other working papers
Aggregating Small Risks of Serious Harms – Tomi Francis (Global Priorities Institute, University of Oxford)
According to Partial Aggregation, a serious harm can be outweighed by a large number of somewhat less serious harms, but can outweigh any number of trivial harms. In this paper, I address the question of how we should extend Partial Aggregation to cases of risk, and especially to cases involving small risks of serious harms. I argue that, contrary to the most popular versions of the ex ante and ex post views, we should sometimes prevent a small risk that a large number of people will suffer serious harms rather than prevent…
Consequentialism, Cluelessness, Clumsiness, and Counterfactuals – Alan Hájek (Australian National University)
According to a standard statement of objective consequentialism, a morally right action is one that has the best consequences. More generally, given a choice between two actions, one is morally better than the other just in case the consequences of the former action are better than those of the latter. (These are not just the immediate consequences of the actions, but the long-term consequences, perhaps until the end of history.) This account glides easily off the tongue—so easily that…
The epistemic challenge to longtermism – Christian Tarsney (Global Priorities Institute, University of Oxford)
Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism…