It Only Takes One: The Psychology of Unilateral Decisions
Joshua Lewis (New York University), Carter Allen (UC Berkeley), Christoph Winter (ITAM, Harvard University and Institute for Law & AI) and Lucius Caviola (Global Priorities Institute, Oxford University)
GPI Working Paper No. 14-2024
Sometimes, one decision can guarantee that a risky event will happen. For instance, it only took one team of researchers to synthesize and publish the horsepox genome, thereby making it public even though other researchers might have refrained for biosecurity reasons. We examine cases where everybody who can impose a given event has the same goal but different information about whether the event furthers that goal. Across 8 experiments (scenario studies with elected policymakers, doctors, artificial-intelligence researchers, and lawyers and judges, as well as economic games with laypeople, N = 1,518, plus 3 supplemental studies, N = 847), people behave suboptimally, balancing two factors. First, people often impose events whose expected utility is only slightly better than the alternative based on the information available to them, even when others might know more. This approach is insufficiently cautious, leading people to impose too frequently, a situation termed the unilateralist’s curse. Second, counteracting the first factor, people avoid sole responsibility for unexpectedly bad outcomes, sometimes declining to impose seemingly desirable events. The former tendency typically dominates, and people unilaterally impose too often, succumbing to the unilateralist’s curse. But when only a few people can impose and they know the stakes are high, responsibility aversion reduces over-imposition.
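To see why acting on one's own slightly positive estimate leads to over-imposing, consider a minimal Monte Carlo sketch of the mechanism the abstract describes (illustrative only, not part of the paper's experiments; all parameter choices here are assumptions): several agents share a goal but hold independent noisy estimates of an event's true value, and the event is imposed whenever the single most optimistic agent's estimate is positive.

```python
import random

def simulate(num_agents, trials=100_000, noise_sd=1.0, seed=0):
    """Sketch of the unilateralist's curse under assumed parameters.

    Each trial draws a true value for the contested action (mean 0),
    gives every agent an independent noisy estimate of it, and lets
    any one agent impose the action whenever their own estimate is
    even slightly positive. Returns the overall imposition rate and
    the rate of impositions that were in fact harmful.
    """
    rng = random.Random(seed)
    impositions = harmful = 0
    for _ in range(trials):
        true_value = rng.gauss(0.0, 1.0)  # actual (unknown) value of imposing
        estimates = [true_value + rng.gauss(0.0, noise_sd)
                     for _ in range(num_agents)]
        if max(estimates) > 0:            # one optimist suffices to impose
            impositions += 1
            if true_value < 0:
                harmful += 1
    return impositions / trials, harmful / trials

for n in (1, 3, 10):
    imposed, harmful = simulate(n)
    print(f"{n:2d} agents: imposed in {imposed:.0%} of trials, "
          f"harmful in {harmful:.0%}")
```

As the number of potential unilateralists grows, the share of trials in which a net-negative event gets imposed rises, even though every agent follows the same seemingly reasonable rule.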
Other working papers
A paradox for tiny probabilities and enormous values – Nick Beckstead (Open Philanthropy Project) and Teruji Thomas (Global Priorities Institute, Oxford University)
We show that every theory of the value of uncertain prospects must have one of three unpalatable properties. Reckless theories recommend risking arbitrarily great gains at arbitrarily long odds for the sake of enormous potential; timid theories recommend passing up arbitrarily great gains to prevent a tiny increase in risk; nontransitive theories deny the principle that, if A is better than B and B is better than C, then A must be better than C.
Doomsday rings twice – Andreas Mogensen (Global Priorities Institute, Oxford University)
This paper considers the argument according to which, because we should regard it as a priori very unlikely that we are among the most important people who will ever exist, we should increase our confidence that the human species will not persist beyond the current historical era, which seems to represent…
The unexpected value of the future – Hayden Wilkinson (Global Priorities Institute, University of Oxford)
Various philosophers accept moral views that are impartial, additive, and risk-neutral with respect to betterness. But, if that risk neutrality is spelt out according to expected value theory alone, such views face a dire reductio ad absurdum. If the expected sum of value in humanity’s future is undefined—if, e.g., the probability distribution over possible values of the future resembles the Pasadena game, or a Cauchy distribution—then those views say that no real-world option is ever better than any other. And, as I argue…
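As a hedged illustration of the "undefined expected value" point invoked above (not drawn from the paper itself): a Cauchy distribution has no defined expectation, so running sample means never settle down the way they do for distributions with finite means.

```python
import random
import statistics

# Illustrative sketch: a standard Cauchy variate can be generated as the
# ratio of two independent standard normals. Its mean is undefined, so the
# running sample mean keeps jumping around no matter how many draws we take.
rng = random.Random(1)

def cauchy_sample():
    return rng.gauss(0.0, 1.0) / rng.gauss(0.0, 1.0)

for n in (10**3, 10**4, 10**5, 10**6):
    mean = statistics.fmean(cauchy_sample() for _ in range(n))
    print(f"n = {n:>7}: sample mean = {mean:+.3f}")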