It Only Takes One: The Psychology of Unilateral Decisions

Joshua Lewis (New York University), Carter Allen (UC Berkeley), Christoph Winter (ITAM, Harvard University and Institute for Law & AI) and Lucius Caviola (Global Priorities Institute, Oxford University)

GPI Working Paper No. 14-2024

Sometimes, one decision can guarantee that a risky event will happen. For instance, it only took one team of researchers to synthesize and publish the horsepox genome, thus imposing its publication even though other researchers might have refrained for biosecurity reasons. We examine cases where everybody who can impose a given event has the same goal but different information about whether the event furthers that goal. Across 8 experiments (scenario studies with elected policymakers, doctors, artificial-intelligence researchers, and lawyers and judges, and economic games with laypeople; N = 1,518; plus 3 supplemental studies, N = 847), people behave suboptimally, balancing two factors. First, people often impose events with expected utility only slightly better than the alternative based on the information available to them, even when others might know more. This approach is insufficiently cautious, leading people to impose too frequently — a situation termed the unilateralist’s curse. Second, counteracting the first factor, people avoid sole responsibility for unexpectedly bad outcomes, sometimes declining to impose seemingly desirable events. The former heuristic typically dominates, and people unilaterally impose too often, succumbing to the unilateralist’s curse. But when only a few people can impose and they know the stakes are high, responsibility aversion reduces over-imposing.

Other working papers

A Fission Problem for Person-Affecting Views – Elliott Thornley (Global Priorities Institute, University of Oxford)

On person-affecting views in population ethics, the moral import of a person’s welfare depends on that person’s temporal or modal status. These views typically imply that – all else equal – we’re never required to create extra people, or to act in ways that increase the probability of extra people coming into existence. In this paper, I use Parfit-style fission cases to construct a dilemma for person-affecting views: either they forfeit their seeming-advantages and face fission analogues…

The Hinge of History Hypothesis: Reply to MacAskill – Andreas Mogensen (Global Priorities Institute, University of Oxford)

Some believe that the current era is uniquely important with respect to how well the rest of human history goes. Following Parfit, call this the Hinge of History Hypothesis. Recently, MacAskill has argued that our era is actually very unlikely to be especially influential in the way asserted by the Hinge of History Hypothesis. I respond to MacAskill, pointing to important unresolved ambiguities in his proposed definition of what it means for a time to be influential and criticizing the two arguments…

Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)

A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…