It Only Takes One: The Psychology of Unilateral Decisions

Joshua Lewis (New York University), Carter Allen (UC Berkeley), Christoph Winter (ITAM, Harvard University and Institute for Law & AI) and Lucius Caviola (Global Priorities Institute, Oxford University)

GPI Working Paper No. 14-2024

Sometimes, one decision can guarantee that a risky event will happen. For instance, it only took one team of researchers to synthesize and publish the horsepox genome, thus imposing its publication even though other researchers might have refrained for biosecurity reasons. We examine cases where everybody who can impose a given event has the same goal but different information about whether the event furthers that goal. Across 8 experiments (including scenario studies with elected policymakers, doctors, artificial-intelligence researchers, and lawyers and judges, as well as economic games with laypeople, N = 1,518, and 3 supplemental studies, N = 847), people behave suboptimally, balancing two factors. First, people often impose events whose expected utility, based on the information available to them, is only slightly better than the alternative, even when others might know more. This approach is insufficiently cautious, leading people to impose too frequently, a situation termed the unilateralist's curse. Second, counteracting the first factor, people avoid sole responsibility for unexpectedly bad outcomes, sometimes declining to impose seemingly desirable events. The former heuristic typically dominates, and people unilaterally impose too often, succumbing to the unilateralist's curse. But when only a few people can impose and they know the stakes are high, responsibility aversion reduces over-imposing.
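The statistical logic behind the unilateralist's curse can be illustrated with a minimal Monte Carlo sketch (the function name and parameters below are illustrative assumptions, not taken from the paper): each agent observes a noisy private estimate of an event's true value and imposes the event whenever that estimate looks positive. Because the event occurs if any single agent imposes it, the chance that a harmful event is imposed grows with the number of agents, even though every agent acts in good faith on their own information.

```python
import random

def unilateralist_sim(n_agents, true_value, noise_sd, n_trials=100_000, seed=0):
    """Estimate how often an event gets unilaterally imposed.

    Each of n_agents sees a noisy estimate of the event's true value
    (drawn from a normal distribution centred on true_value) and imposes
    the event if the estimate is positive. The event happens in a trial
    if ANY agent imposes it. Returns the fraction of trials in which the
    event was imposed.
    """
    rng = random.Random(seed)
    imposed = 0
    for _ in range(n_trials):
        if any(rng.gauss(true_value, noise_sd) > 0 for _ in range(n_agents)):
            imposed += 1
    return imposed / n_trials
```

With a genuinely harmful event (say, true value -1 and unit noise), a lone agent rarely imposes it, but as more independent agents gain the power to impose, the probability that at least one agent's estimate is overly optimistic rises sharply — the curse the abstract describes.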

Other working papers

Estimating long-term treatment effects without long-term outcome data – David Rhys Bernard (Paris School of Economics)

Estimating long-term impacts of actions is important in many areas but the key difficulty is that long-term outcomes are only observed with a long delay. One alternative approach is to measure the effect on an intermediate outcome or a statistical surrogate and then use this to estimate the long-term effect. …

Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)

A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…

Can an evidentialist be risk-averse? – Hayden Wilkinson (Global Priorities Institute, University of Oxford)

Two key questions of normative decision theory are: 1) whether the probabilities relevant to decision theory are evidential or causal; and 2) whether agents should be risk-neutral, and so maximise the expected value of the outcome, or instead risk-averse (or otherwise sensitive to risk). These questions are typically thought to be independent – that our answer to one bears little on our answer to the other. …