Crying wolf: Warning about societal risks can be reputationally risky

Lucius Caviola (Global Priorities Institute, University of Oxford), Matthew Coleman (Northeastern University), Christoph Winter (ITAM & Harvard), and Joshua Lewis (New York University)

GPI Working Paper No. 15-2024

Society relies on expert warnings about large-scale risks such as pandemics and natural disasters. Across ten studies (N = 5,342), we demonstrate that people are reluctant to warn about unlikely but large-scale risks because they are concerned about being blamed for being wrong. In particular, warners anticipate that if the risk does not occur, they will be perceived as overly alarmist and held responsible for wasting societal resources. This phenomenon appears across natural, technological, and financial risks, in both US and Chinese samples, and among local policymakers, AI researchers, and legal experts. The reluctance to warn is aggravated when the warner will be held epistemically responsible, such as when they are the only warner or when the risk is speculative and lacks objective evidence. One remedy is to offer anonymous expert warning systems. Our studies highlight the need for societal risk-management policies to account for psychological biases and social incentives.

Other working papers

Existential Risk and Growth – Leopold Aschenbrenner and Philip Trammell (Global Priorities Institute and Department of Economics, University of Oxford)

Technology increases consumption but can create or mitigate existential risk to human civilization. Though accelerating technological development may increase the hazard rate (the risk of existential catastrophe per period) in the short run, two considerations suggest that acceleration typically decreases the risk that such a catastrophe ever occurs. First, acceleration decreases the time spent at each technology level. Second, given a policy option to sacrifice consumption for safety, acceleration motivates greater sacrifices…
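For readers unfamiliar with the term, the parenthetical definition of the hazard rate can be written out in standard survival-analysis notation. This is a minimal sketch for orientation; the symbols below are illustrative and not taken from the paper itself.

\[
\delta_t = \Pr(\text{catastrophe in period } t \mid \text{no catastrophe before } t),
\qquad
\Pr(\text{survival through period } T) = \prod_{t=1}^{T} (1 - \delta_t).
\]

On this reading, even a short-run increase in \(\delta_t\) can lower the probability that a catastrophe ever occurs if acceleration shortens the time spent at risky technology levels.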

Cassandra’s Curse: A second tragedy of the commons – Philippe Colo (ETH Zurich)

This paper studies why scientific forecasts regarding exceptional or rare events generally fail to trigger an adequate public response. I consider a game of contribution to a public bad. Prior to the game, I assume contributors receive non-verifiable expert advice regarding uncertain damages. In addition, I assume that the expert cares only about social welfare. Under mild assumptions, I show that no information transmission can happen at equilibrium when the number of contributors…

Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)

Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.