Crying wolf: Warning about societal risks can be reputationally risky
Lucius Caviola (Global Priorities Institute, University of Oxford), Matthew Coleman (Northeastern University), Christoph Winter (ITAM & Harvard) and Joshua Lewis (New York University)
GPI Working Paper No. 15-2024
Society relies on expert warnings about large-scale risks like pandemics and natural disasters. Across ten studies (N = 5,342), we demonstrate people’s reluctance to warn about unlikely but large-scale risks because they are concerned about being blamed for being wrong. In particular, warners anticipate that if the risk does not occur, they will be perceived as overly alarmist and held responsible for wasting societal resources. This phenomenon appears for natural, technological, and financial risks, in both US and Chinese samples, and among local policymakers, AI researchers, and legal experts. The reluctance to warn is aggravated when the warner will be held epistemically responsible, such as when they are the sole warner or when the risk is speculative and lacks objective evidence. One remedy is to offer anonymous expert warning systems. Our studies emphasize the need for societal risk management policies to account for psychological biases and social incentives.
Other working papers
A Fission Problem for Person-Affecting Views – Elliott Thornley (Global Priorities Institute, University of Oxford)
On person-affecting views in population ethics, the moral import of a person’s welfare depends on that person’s temporal or modal status. These views typically imply that – all else equal – we’re never required to create extra people, or to act in ways that increase the probability of extra people coming into existence. In this paper, I use Parfit-style fission cases to construct a dilemma for person-affecting views: either they forfeit their seeming advantages and face fission analogues…
How to resist the Fading Qualia Argument – Andreas Mogensen (Global Priorities Institute, University of Oxford)
The Fading Qualia Argument is perhaps the strongest argument supporting the view that a system need not be made of anything in particular in order to be conscious, so long as its internal parts have the right causal relations to each other and to the system’s inputs and outputs. I show how the argument can be resisted given two key assumptions: that consciousness is associated with vagueness at its boundaries and that conscious neural activity has a particular kind of holistic structure. …
Evolutionary debunking and value alignment – Michael T. Dale (Hampden-Sydney College) and Bradford Saad (Global Priorities Institute, University of Oxford)
This paper examines the bearing of evolutionary debunking arguments—which use the evolutionary origins of values to challenge their epistemic credentials—on the alignment problem, i.e. the problem of ensuring that highly capable AI systems are properly aligned with values. Since evolutionary debunking arguments are among the best empirically-motivated arguments that recommend changes in values, it is unsurprising that they are relevant to the alignment problem. However, how evolutionary debunking arguments…