It Only Takes One: The Psychology of Unilateral Decisions

Joshua Lewis (New York University), Carter Allen (UC Berkeley), Christoph Winter (ITAM, Harvard University, and Institute for Law & AI), and Lucius Caviola (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 14-2024

Sometimes, one decision can guarantee that a risky event will happen. For instance, it took only one team of researchers to synthesize and publish the horsepox genome, thereby imposing its publication even though other researchers might have refrained for biosecurity reasons. We examine cases where everybody who can impose a given event has the same goal but different information about whether the event furthers that goal. Across 8 experiments (including scenario studies with elected policymakers, doctors, artificial-intelligence researchers, and lawyers and judges, and economic games with laypeople; N = 1,518; plus 3 supplemental studies, N = 847), people behave suboptimally, balancing two factors. First, people often impose events whose expected utility is only slightly better than the alternative based on the information available to them, even when others might know more. This approach is insufficiently cautious, leading people to impose too frequently, a situation termed the unilateralist’s curse. Second, counteracting the first factor, people avoid sole responsibility for unexpectedly bad outcomes, sometimes declining to impose seemingly desirable events. The former heuristic typically dominates, so people unilaterally impose too often, succumbing to the unilateralist’s curse. But when only a few people can impose an event and they know the stakes are high, responsibility aversion reduces over-imposing.

Other working papers

How important is the end of humanity? Lay people prioritize extinction prevention but not above all other societal issues. – Matthew Coleman (Northeastern University), Lucius Caviola (Global Priorities Institute, University of Oxford) et al.

Human extinction would mean the deaths of eight billion people and the end of humanity’s achievements, culture, and future potential. On several ethical views, extinction would be a terrible outcome. How do people think about human extinction? And how much do they prioritize preventing extinction over other societal issues? Across six empirical studies (N = 2,541; U.S. and China) we find that people consider extinction prevention a global priority and deserving of greatly increased societal resources. …

A Fission Problem for Person-Affecting Views – Elliott Thornley (Global Priorities Institute, University of Oxford)

On person-affecting views in population ethics, the moral import of a person’s welfare depends on that person’s temporal or modal status. These views typically imply that – all else equal – we’re never required to create extra people, or to act in ways that increase the probability of extra people coming into existence. In this paper, I use Parfit-style fission cases to construct a dilemma for person-affecting views: either they forfeit their seeming-advantages and face fission analogues…

AI alignment vs AI ethical treatment: Ten challenges – Adam Bradley (Lingnan University) and Bradford Saad (Global Priorities Institute, University of Oxford)

A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching…